Book contents
- Frontmatter
- Contents
- Preface
- Acronyms
- 1 Introduction
- 2 Machine Learning and Statistics Overview
- 3 Performance Measures I
- 4 Performance Measures II
- 5 Error Estimation
- 6 Statistical Significance Testing
- 7 Datasets and Experimental Framework
- 8 Recent Developments
- 9 Conclusion
- Appendix A Statistical Tables
- Appendix B Additional Information on the Data
- Appendix C Two Case Studies
- Bibliography
- Index
5 - Error Estimation
Published online by Cambridge University Press: 05 August 2011
Summary
We saw in Chapters 3 and 4 the concerns that arise in choosing appropriate performance measures. Once a performance measure is decided upon, the next obvious concern is to find a good method for testing the learning algorithm so as to obtain as unbiased an estimate of that measure as possible. A related concern is whether the technique used to obtain such an estimate brings us as close as possible to the measure's true value.
Ideally, we would have access to the entire population and test our classifiers on it. Even without the entire population, if a large amount of representative data from that population could be obtained, error estimation would be quite simple: it would consist of testing the algorithms on the data they were trained on. Although such an estimate, commonly known as the resubstitution error, is usually optimistically biased, it tends toward the true error rate as the number of instances in the dataset increases. Realistically, however, we are given a sample of the population of significantly limited size. A reliable alternative thus consists of testing the algorithm on a large set of unseen data points, an approach commonly known as the holdout method. Unfortunately, such an approach still requires quite a lot of data for testing the algorithm's performance, which is relatively rare in most practical situations.
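The contrast between the two estimates can be sketched with a toy classifier on synthetic data. This is a minimal illustration, not code from the book: the data, the threshold classifier, and all function names are invented for the example.

```python
import random

random.seed(0)

# Synthetic one-dimensional two-class sample: class 0 centered at 0.0,
# class 1 centered at 1.0 (illustrative stand-in for a real dataset).
data = [(random.gauss(0.0, 0.5), 0) for _ in range(200)] + \
       [(random.gauss(1.0, 0.5), 1) for _ in range(200)]
random.shuffle(data)

def train_threshold(points):
    """A trivial learner: threshold at the midpoint of the two class means."""
    m0 = sum(x for x, y in points if y == 0) / sum(1 for _, y in points if y == 0)
    m1 = sum(x for x, y in points if y == 1) / sum(1 for _, y in points if y == 1)
    return (m0 + m1) / 2.0

def error_rate(threshold, points):
    """Fraction of points the threshold classifier mislabels."""
    wrong = sum(1 for x, y in points if (x > threshold) != (y == 1))
    return wrong / len(points)

# Resubstitution error: train and test on the *same* data
# (usually optimistically biased on small samples).
resub = error_rate(train_threshold(data), data)

# Holdout error: train on one part of the sample, test on unseen points.
train, test = data[:300], data[300:]
holdout = error_rate(train_threshold(train), test)

print(f"resubstitution error: {resub:.3f}")
print(f"holdout error:        {holdout:.3f}")
```

With a classifier this simple the two estimates are close; the optimistic bias of resubstitution shows up most clearly with more flexible learners that can memorize the training sample.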
- Type: Chapter
- Information: Evaluating Learning Algorithms: A Classification Perspective, pp. 161–205. Publisher: Cambridge University Press. Print publication year: 2011.