This chapter considers lower bounds for empirical processes and statistical estimation problems. We know that upper bounds for empirical processes and empirical risk minimization can be obtained from covering number analysis. We show that, under suitable conditions, lower bounds can also be obtained using covering numbers.
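To make the covering-number quantity concrete, here is a minimal sketch (our own toy example, not taken from the chapter) computing the epsilon-covering number of the unit interval; the closed form N(eps) = ceil(1/(2*eps)) follows because each ball of radius eps covers a length of 2*eps:

    import math

    def covering_number_interval(eps: float) -> int:
        """Smallest number of eps-balls (intervals of radius eps) needed to
        cover [0, 1] under d(x, y) = |x - y|: N(eps) = ceil(1 / (2 * eps))."""
        return math.ceil(1.0 / (2.0 * eps))

    for eps in (0.5, 0.1, 0.01):
        print(eps, covering_number_interval(eps))  # 1, 5, 50 balls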
In online learning, we consider a learning model that is different from that of supervised learning, in that we make predictions sequentially and obtain feedback after predictions are made. In this chapter, we introduce this learning model as well as some first-order online learning algorithms.
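As a representative first-order method (a standard online gradient descent sketch, which may differ in detail from the algorithms in the chapter), the learner plays a point, receives gradient feedback only after committing to its prediction, and takes a small step against the gradient:

    import numpy as np

    def online_gradient_descent(gradient, dim, eta=0.1, rounds=100):
        """Online gradient descent sketch: at round t we play w_t, then the
        environment reveals the gradient g_t of the round-t loss at w_t,
        and we update w_{t+1} = w_t - eta * g_t."""
        w = np.zeros(dim)
        for t in range(rounds):
            g = gradient(t, w)      # feedback arrives after the prediction
            w = w - eta * g
        return w

    # Toy example: every round uses the squared loss ||w - w_star||^2.
    w_star = np.array([1.0, -1.0])
    print(online_gradient_descent(lambda t, w: 2 * (w - w_star), dim=2))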
This chapter describes some theoretical results of reinforcement learning, and the analysis may be regarded as a natural generalization of techniques introduced for contextual bandit problems. We will consider both model-free and model-based algorithms, and introduce structural results for reinforcement learning that lead to algorithms with provably efficient statistical complexity.
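As one concrete model-free example (a standard tabular Q-learning sketch on a made-up two-state MDP; the chapter's algorithms and analysis are more general), the update bootstraps from sampled transitions without ever estimating the transition model:

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy 2-state, 2-action MDP; P[s, a] is the next-state distribution and
    # R[s, a] the reward (both invented here purely for illustration).
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.8, 0.2], [0.2, 0.8]]])
    R = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    gamma, alpha, Q, s = 0.9, 0.1, np.zeros((2, 2)), 0

    for _ in range(20000):
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next = rng.choice(2, p=P[s, a])
        # Model-free update: bootstrap from the sampled next state only.
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    print(Q)  # approximate optimal action values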
This chapter presents some known theoretical results for neural networks, including some theoretical analysis that has been developed recently. We show that neural networks can be analyzed using kernel methods and L1 regularization methods that have been studied in previous chapters.
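To illustrate the kernel connection numerically (a toy empirical neural tangent kernel for a one-hidden-layer network; the chapter's treatment is more general, and the setup below is ours), the kernel is the inner product of parameter gradients, which stabilizes as the width m grows:

    import numpy as np

    rng = np.random.default_rng(0)
    m = 5000                      # hidden width; the NTK regime is large m
    w = rng.standard_normal(m)    # input-to-hidden weights (scalar input)
    a = rng.standard_normal(m)    # hidden-to-output weights

    def param_grad(x):
        """Gradient of f(x) = a . tanh(w * x) / sqrt(m) w.r.t. (a, w)."""
        h = np.tanh(w * x)
        da = h / np.sqrt(m)
        dw = a * x * (1.0 - h ** 2) / np.sqrt(m)
        return np.concatenate([da, dw])

    def empirical_ntk(x1, x2):
        """Empirical tangent kernel K(x1, x2) = <grad f(x1), grad f(x2)>."""
        return param_grad(x1) @ param_grad(x2)

    print(empirical_ntk(0.5, 0.5), empirical_ntk(0.5, -0.3))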
In the sequential estimation problems investigated in the next few chapters, we observe a sequence of random variables that are not independent. This requires a generalization of sums of independent random variables, called martingales. This chapter studies probability inequalities and uniform convergence for martingales, which are essential in analyzing sequential statistical estimation problems.
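A representative inequality of this kind is the Azuma-Hoeffding inequality (stated here in its standard form, which may differ in detail from the chapter's presentation): for a martingale X_0, X_1, ..., X_n with bounded differences |X_i - X_{i-1}| <= c_i,

    \Pr\bigl( |X_n - X_0| \ge t \bigr)
        \le 2 \exp\!\Bigl( - \frac{t^2}{2 \sum_{i=1}^{n} c_i^2} \Bigr),

which recovers Hoeffding-type concentration when the X_i are partial sums of independent bounded variables.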
The mathematical theory of machine learning not only explains the current algorithms but can also motivate principled approaches for the future. This self-contained textbook introduces students and researchers of AI to the main mathematical techniques used to analyze machine learning algorithms, with motivations and applications. Topics covered include the analysis of supervised learning algorithms in the iid setting, the analysis of neural networks (e.g. neural tangent kernel and mean-field analysis), and the analysis of machine learning algorithms in the sequential decision setting (e.g. online learning, bandit problems, and reinforcement learning). Students will learn the basic mathematical tools used in the theoretical analysis of these machine learning problems and how to apply them to the analysis of various concrete algorithms. This textbook is perfect for readers who have some background knowledge of basic machine learning methods, but want to gain sufficient technical knowledge to understand research papers in theoretical machine learning.
Statistical and machine learning methods have many applications in the environmental sciences, including prediction and data analysis in meteorology, hydrology and oceanography; pattern recognition for satellite images from remote sensing; management of agriculture and forests; assessment of climate change; and much more. With rapid advances in machine learning in the last decade, this book provides an urgently needed, comprehensive guide to machine learning and statistics for students and researchers interested in environmental data science. It includes intuitive explanations covering the relevant background mathematics, with examples drawn from the environmental sciences. A broad range of topics is covered, including correlation, regression, classification, clustering, neural networks, random forests, boosting, kernel methods, evolutionary algorithms and deep learning, as well as the recent merging of machine learning and physics. End-of-chapter exercises allow readers to develop their problem-solving skills, and online datasets allow readers to practise analysis of real data.
There are three areas where machine learning (ML) and physics have been merging: (a) Physical models can have computationally expensive components replaced by inexpensive ML models, giving rise to hybrid models. (b) In physics-informed machine learning, ML models can be constrained to satisfy the laws of physics (e.g. conservation of energy, mass, etc.) either approximately or exactly. (c) In forecasting, ML models can be combined with numerical/dynamical models under data assimilation.
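As a minimal sketch of approach (b) (our own illustration; the particular constraint and names are invented), a conservation law can be imposed approximately as a soft penalty added to the usual data-fit loss:

    import numpy as np

    def physics_informed_loss(y_pred, y_obs, lam=1.0):
        """Data-fit term plus a physics penalty. The (made-up) constraint is
        that each sample's predicted components sum to a conserved total of
        1.0 (e.g. mass fractions); lam weights physics against data fit."""
        data_term = np.mean((y_pred - y_obs) ** 2)
        physics_term = np.mean((y_pred.sum(axis=1) - 1.0) ** 2)
        return data_term + lam * physics_term

    y_obs = np.array([[0.3, 0.7], [0.6, 0.4]])
    y_pred = np.array([[0.35, 0.70], [0.50, 0.45]])
    print(physics_informed_loss(y_pred, y_obs))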
A good model aims to learn the underlying signal without overfitting (i.e. fitting to the noise in the data). This chapter has four main parts: The first part covers objective functions and errors. The second part covers various regularization techniques (weight penalty/decay, early stopping, ensemble, dropout, etc.) to prevent overfitting. The third part covers the Bayesian approach to model selection and model averaging. The fourth part covers the recent development of interpretable machine learning.
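For the weight-penalty idea, here is a minimal sketch in the simplest (linear) setting (the chapter treats regularization for more general models; this reduction is ours): the penalty shrinks the weights, trading a little bias for less fitting of the noise:

    import numpy as np

    def ridge_fit(X, y, weight_penalty=0.1):
        """Weight penalty (L2 'weight decay') for linear regression:
        minimize ||X w - y||^2 + weight_penalty * ||w||^2, whose minimizer
        is w = (X^T X + weight_penalty * I)^{-1} X^T y."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + weight_penalty * np.eye(d), X.T @ y)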
Kernel methods provide an alternative family of non-linear methods to neural networks, with the support vector machine being the best known among kernel methods. Almost all linear statistical methods have been non-linearly generalized by the kernel approach, including ridge regression, linear discriminant analysis, principal component analysis, canonical correlation analysis, and so on. The kernel method has also been extended to probabilistic models, for example Gaussian processes.
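As one worked instance of such a kernelization (a minimal kernel ridge regression sketch with a Gaussian kernel; the data and parameter values below are invented for illustration), the linear ridge solution is replaced by its dual form in kernel space:

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def kernel_ridge(X, y, X_test, lam=1e-3, gamma=1.0):
        """Kernel ridge regression: alpha = (K + lam I)^{-1} y, and the
        prediction at a test point is K(test, train) @ alpha."""
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return rbf_kernel(X_test, X, gamma) @ alpha

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (50, 1))               # toy noisy sine data
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
    print(kernel_ridge(X, y, np.array([[0.0], [1.5]])))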
Under unsupervised learning, clustering (or cluster analysis) is studied first. Clustering methods are grouped into non-hierarchical (including K-means clustering) and hierarchical clustering. Self-organizing maps can be used as a clustering method or as a discrete non-linear principal component analysis method. Autoencoders are neural network models that can be used for non-linear principal component analysis. Non-linear canonical correlation analysis can also be performed using neural network models.
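A minimal K-means sketch (our own illustration of the non-hierarchical case; the fixed iteration budget is a simplification) alternates between assigning points to their nearest centroid and recomputing each centroid as the mean of its points:

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        """Minimal K-means: alternate nearest-centroid assignment and
        centroid update; a fixed iteration budget stands in for a proper
        convergence test."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            for j in range(k):
                if (labels == j).any():
                    centroids[j] = X[labels == j].mean(axis=0)
        return labels, centroids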
NN models with more hidden layers than the traditional NN are referred to as deep neural network (DNN) or deep learning (DL) models, which are now widely used in environmental science. For image data, the convolutional neural network (CNN) has been developed, where, in convolutional layers, a neuron is connected only to a small patch of neurons in the preceding layer, thereby greatly reducing the number of model weights. Popular DNN architectures include the encoder-decoder and U-net models. For time series modelling, the long short-term memory (LSTM) network and the temporal convolutional network have been developed. The generative adversarial network (GAN) produces highly realistic fake data.
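The local-connectivity idea can be seen in a bare-bones 'valid' 2D convolution (a from-scratch sketch for illustration, not how DL libraries implement it in practice): each output value depends only on the small input patch under the kernel, and the same few kernel weights are shared across all positions:

    import numpy as np

    def conv2d(image, kernel):
        """Single-channel 'valid' convolution: output[i, j] sees only the
        kh x kw patch of the input at (i, j), with the kernel weights
        shared everywhere."""
        H, W = image.shape
        kh, kw = kernel.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
        return out

    # 2 shared weights here, versus 36 * 30 weights for a dense 36 -> 30 layer.
    print(conv2d(np.eye(6), np.array([[1.0, -1.0]])).shape)  # (6, 5)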
Principal component analysis (PCA), a classical method for reducing the dimensionality of multivariate datasets, linearly combines the variables to generate new uncorrelated variables that maximize the amount of variance captured. Rotation of the PCA modes is commonly performed to provide more meaningful interpretation. Canonical correlation analysis (CCA) is a generalization of correlation (for two variables) to two groups of variables, with CCA finding modes of maximum correlation between the two groups. Instead of maximum correlation, maximum covariance analysis extracts modes with maximum covariance.
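A minimal PCA sketch (the standard eigen-decomposition of the covariance matrix; variable names are ours) makes the 'uncorrelated variables maximizing captured variance' statement concrete:

    import numpy as np

    def pca(X, n_modes=2):
        """PCA via the covariance matrix: the leading eigenvectors are the
        modes (loadings), their eigenvalues the variances captured, and the
        projections of the centred data are the principal components."""
        Xc = X - X.mean(axis=0)
        cov = Xc.T @ Xc / (len(X) - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
        order = eigvals.argsort()[::-1][:n_modes]    # keep leading modes
        modes = eigvecs[:, order]
        return Xc @ modes, modes, eigvals[order]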
Forecast verification evaluates the quality of the forecasts made by a model, using a variety of forecast scores developed for binary classes, multiple classes, continuous variables and probabilistic forecasts. Skill scores estimate a model’s skill relative to a reference model or benchmark. Problems such as spurious skill and extrapolation with new data are discussed. Model bias in the output predicted by numerical models is alleviated by post-processing methods, while output from numerical models with low spatial resolution is enhanced by downscaling methods, especially in climate change studies.
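As a small worked example of a skill score (the mean-squared-error form relative to a climatology benchmark; this particular choice of score and reference is ours), SS = 1 - MSE_forecast / MSE_reference:

    import numpy as np

    def mse_skill_score(forecast, obs, reference=None):
        """SS = 1 - MSE_forecast / MSE_reference. The benchmark defaults to
        climatology (always forecasting the observed mean): SS = 1 is a
        perfect forecast, SS = 0 matches the benchmark, SS < 0 is worse."""
        if reference is None:
            reference = np.full_like(obs, obs.mean())
        mse_f = np.mean((forecast - obs) ** 2)
        mse_r = np.mean((reference - obs) ** 2)
        return 1.0 - mse_f / mse_r

    obs = np.array([0.0, 1.0, 2.0, 3.0])
    print(mse_skill_score(np.array([0.1, 0.9, 2.2, 2.8]), obs))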