Chapter 12 is the conclusion. It shows how the components of performance evaluation for learning algorithms presented throughout the book unify into an overall framework for in-laboratory evaluation. This is followed by a discussion of how to move from a laboratory setting to a deployment setting, based on the material covered in the last part of the book. We then examine the potential social consequences of deploying machine learning technology, together with their causes, and advocate for making these consequences part of the evaluation framework. We close with a few concluding remarks.
Chapter 4 reviews frequently used machine learning evaluation procedures. In particular, it presents popular evaluation metrics for binary and multi-class classification (e.g., accuracy, precision/recall, ROC analysis), regression (e.g., mean squared error, root mean squared error, R-squared), and clustering (e.g., the Davies–Bouldin index). It then reviews popular resampling approaches (e.g., holdout, cross-validation) and statistical tests (e.g., the t-test and the sign test). It concludes with an explanation of why it is important to go beyond these well-known methods in order to achieve reliable evaluation results in all cases.
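By way of illustration (not drawn from the book), the procedures this chapter names can be sketched with scikit-learn; the dataset, model, and split choices below are arbitrary placeholders:

```python
# Illustrative sketch, assuming scikit-learn: the standard evaluation
# procedures Chapter 4 reviews. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)

# Holdout: a single train/test split yields one estimate per metric.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, y_pred))
print("precision:", precision_score(y_te, y_pred))
print("recall   :", recall_score(y_te, y_pred))

# Cross-validation: k estimates instead of one, so variability is visible.
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```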
Chapter 6 addresses the problem of error estimation and resampling in both a theoretical and a practical manner. The holdout method is reviewed and cast into the bias/variance framework. Simple resampling approaches such as cross-validation are also reviewed, and important variations such as stratified cross-validation and leave-one-out are introduced. Multiple resampling approaches such as bootstrapping, randomization, and multiple trials of simple resampling approaches are then introduced and discussed.
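As a hedged illustration of one multiple-resampling scheme the chapter covers, here is a plain out-of-bag bootstrap estimate of test error; the model, dataset, and number of replicates are invented for the example:

```python
# Illustrative sketch, assuming numpy and scikit-learn: a plain bootstrap
# estimate of test error. Points not drawn into a bootstrap sample form
# the (out-of-bag) test set for that replicate.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n, B, errors = len(y), 200, []

for _ in range(B):
    # Sample n indices with replacement; the rest are held out.
    boot = rng.integers(0, n, size=n)
    oob = np.setdiff1d(np.arange(n), boot)
    clf = DecisionTreeClassifier(random_state=0).fit(X[boot], y[boot])
    errors.append(1.0 - clf.score(X[oob], y[oob]))

print("bootstrap (out-of-bag) error: %.3f +/- %.3f"
      % (np.mean(errors), np.std(errors)))
```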
Chapter 2 reviews the principles of statistics that are necessary for the discussion of machine learning evaluation methods, especially the statistical analysis discussed in Chapter 7. In particular, it reviews the notions of random variables, distributions, confidence intervals, and hypothesis testing.
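To make the last two notions concrete, here is a small sketch (not from the book, assuming scipy) of a confidence interval and a paired t-test applied to made-up per-fold accuracy scores:

```python
# Illustrative sketch, assuming scipy: a confidence interval and a paired
# t-test, the two tools used most heavily later in the book. The per-fold
# accuracies below are fabricated for the example.
import numpy as np
from scipy import stats

acc_a = np.array([0.91, 0.89, 0.93, 0.90, 0.92])  # classifier A, 5 folds
acc_b = np.array([0.88, 0.87, 0.90, 0.86, 0.89])  # classifier B, same folds

# 95% confidence interval for A's mean accuracy (t distribution, n - 1 df).
ci = stats.t.interval(0.95, df=len(acc_a) - 1,
                      loc=acc_a.mean(), scale=stats.sem(acc_a))
print("95%% CI for A: (%.3f, %.3f)" % ci)

# Paired t-test: do the two classifiers, evaluated on the same folds,
# differ significantly in mean accuracy?
t, p = stats.ttest_rel(acc_a, acc_b)
print("paired t-test: t = %.2f, p = %.4f" % (t, p))
```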
In Chapter 10, the book turns to practical considerations. In particular, it surveys the software engineering discipline, with its rigorous software testing methods, and asks how these techniques can be adapted to machine learning. The adaptation is not straightforward, as machine learning algorithms behave non-deterministically, a problem aggravated by data, algorithm, and platform imperfections. These issues are discussed and some of the steps taken to handle them are reviewed. The chapter then turns to the practice of online testing and addresses the ethics of machine learning deployment. It concludes with a discussion of current industry practice, along with suggestions on how to improve the safety of industrial deployment in the future.
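One common adaptation of unit testing to non-deterministic learners, sketched here as an illustration rather than as the book's own recipe, is to fix random seeds and assert a tolerance band rather than an exact output; the model, dataset, and thresholds are invented:

```python
# Illustrative sketch, assuming scikit-learn: a tolerance-band test for a
# non-deterministic learner. Seeding removes one source of randomness;
# data and platform effects remain, so we test a band, not a value.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def test_model_accuracy_within_band():
    X, y = load_iris(return_X_y=True)
    clf = RandomForestClassifier(n_estimators=50, random_state=42)
    score = cross_val_score(clf, X, y, cv=5).mean()
    # The band below is an invented threshold, chosen per application.
    assert 0.90 <= score <= 1.00, f"accuracy drifted: {score:.3f}"

test_model_accuracy_within_band()
```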
Chapter 5 starts with an analysis of the classification metrics presented in Chapter 4, outlining their strengths and weaknesses. It then presents more advanced metrics such as Cohen’s kappa, Youden’s index, and likelihood ratios. This is followed by a discussion of data and classifier complexities, such as the class imbalance problem and classifier uncertainty, that require particular scrutiny to ensure that the results are trustworthy. The chapter concludes with a detailed discussion of ROC analysis, complementing its introduction in Chapter 4, and a presentation of other visualization-based metrics.
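The three advanced metrics named here can all be derived from a confusion matrix; the following sketch (not from the book, assuming scikit-learn, with made-up labels) shows the standard definitions:

```python
# Illustrative sketch, assuming scikit-learn: Cohen's kappa, Youden's
# index, and the positive likelihood ratio from a binary confusion
# matrix. The labels below are fabricated for the example.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
print("Youden's index (J = sens + spec - 1):", sensitivity + specificity - 1)
print("positive likelihood ratio (sens / (1 - spec)):",
      sensitivity / (1 - specificity))
```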
Chapter 3 discusses the field of machine learning from a theoretical perspective. This review lays the groundwork for the discussion of advanced metrics in Chapter 5 and of error estimation methods in Chapter 6. The specific concepts surveyed in this chapter include loss functions, empirical risk, generalization error, empirical and structural risk minimization, regularization, and learning bias. The unsupervised learning paradigm is also reviewed, and the chapter concludes with a discussion of the bias/variance tradeoff.
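For orientation, the core formalism these concepts revolve around can be written in standard textbook notation (this is the usual form, not a quotation from the book): choose the hypothesis in the class that minimizes average loss on the sample plus a complexity penalty.

```latex
% Regularized empirical risk minimization (standard textbook notation):
% average loss over the n training pairs, plus a penalty Omega on the
% complexity of hypothesis h, weighted by lambda.
\hat{h} = \arg\min_{h \in \mathcal{H}}
    \underbrace{\frac{1}{n} \sum_{i=1}^{n} \ell\big(h(x_i), y_i\big)}_{\text{empirical risk}}
    \; + \; \lambda \, \Omega(h)
```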
Chapter 9 is devoted to evaluation methods for an important category of classical learning paradigms deferred from Chapter 8 so that it could receive fuller coverage: unsupervised learning. In this chapter, a number of different unsupervised learning schemes are considered and their evaluation is discussed. The particular tasks considered are clustering and hierarchical clustering, dimensionality reduction, latent variable modeling, and generative models, including probabilistic PCA, variational autoencoders, and GANs. Evaluation methodology is discussed for each of these tasks.
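As a small illustration of internal clustering evaluation of the kind this chapter discusses (a sketch assuming scikit-learn, on synthetic placeholder data):

```python
# Illustrative sketch, assuming scikit-learn: two internal indices for a
# clustering, computed without ground-truth labels. Data are synthetic.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Lower Davies-Bouldin is better; higher silhouette is better.
print("Davies-Bouldin index:", davies_bouldin_score(X, labels))
print("silhouette score    :", silhouette_score(X, labels))
```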