Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- Appendix A Validation and testing of learning models
- Appendix B kNN, PNN, and Bayes classifiers
- References
- Index
Appendix A - Validation and testing of learning models
from Part VIII - Appendices
Published online by Cambridge University Press: 05 July 2014
Summary
Machine learning has successfully led to many promising tools for intelligent data filtering, processing, and interpretation. Naturally, proper metrics will be required in order to objectively evaluate the performance of machine learning tools. To this end, this chapter will address the following subjects.
It is commonly agreed that the testing accuracy serves as a more reasonable metric for the performance evaluation of a learned classifier. Section A.1 discusses several cross-validation (CV) techniques for evaluating the classification performance of the learned models.
Section A.2 explores two important test schemes: the hypothesis test and the significance test.
Cross-validation techniques
Suppose that the dataset under consideration has N samples to be used for training the classifier model and/or estimating the classification accuracy. Before the training phase starts, a subset of the dataset must be set aside as the testing dataset. The class labels of the test patterns are assumed to be unknown during the learning phase. These labels are revealed only during the testing phase, so that they can serve as the ground truth for evaluating the classifier's performance.
Some evaluation/validation methods are presented as follows.
(i) Holdout validation. N' (N' < N) samples are randomly selected from the dataset for training a classifier, and the remaining N − N' samples are used for evaluating the accuracy of the classifier. Typically, N' is about two-thirds of N. Holdout validation avoids the biased estimation that occurs in re-substitution by completely separating the training data from the validation data.
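As a minimal sketch of the holdout scheme above (the function name, the seed, and the two-thirds default are illustrative choices, not prescribed by the text), the random split of N samples into N' training samples and N − N' test samples might look like:

```python
import random

def holdout_split(samples, labels, train_fraction=2/3, seed=0):
    """Randomly partition a dataset into training and test subsets.

    N' = round(train_fraction * N) samples are kept for training;
    the remaining N - N' samples are held out for testing, so the
    test labels play no role during the learning phase.
    """
    N = len(samples)
    indices = list(range(N))
    random.Random(seed).shuffle(indices)   # random selection of N' samples
    n_train = round(train_fraction * N)    # typically about two-thirds of N
    train_idx, test_idx = indices[:n_train], indices[n_train:]
    train = ([samples[i] for i in train_idx], [labels[i] for i in train_idx])
    test = ([samples[i] for i in test_idx], [labels[i] for i in test_idx])
    return train, test

# Illustrative data: nine one-dimensional samples with binary labels.
X = [0.1, 0.2, 0.3, 0.4, 0.9, 1.0, 1.1, 1.2, 1.3]
y = [0, 0, 0, 0, 1, 1, 1, 1, 1]
(train_X, train_y), (test_X, test_y) = holdout_split(X, y)
print(len(train_X), len(test_X))  # 6 training samples, 3 held out
```

The complete separation of the two index sets is what removes the optimistic bias of re-substitution: the classifier never sees the held-out labels before evaluation.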
Kernel Methods and Machine Learning, pp. 539–548. Publisher: Cambridge University Press. Print publication year: 2014.