Book contents
- Frontmatter
- Contents
- Preface
- Acronyms
- 1 Introduction
- 2 Machine Learning and Statistics Overview
- 3 Performance Measures I
- 4 Performance Measures II
- 5 Error Estimation
- 6 Statistical Significance Testing
- 7 Datasets and Experimental Framework
- 8 Recent Developments
- 9 Conclusion
- Appendix A Statistical Tables
- Appendix B Additional Information on the Data
- Appendix C Two Case Studies
- Bibliography
- Index
7 - Datasets and Experimental Framework
Published online by Cambridge University Press: 05 August 2011
Summary
We have discussed different aspects and components pertaining to the evaluation of learning algorithms. Given one or more fixed domains, we have examined various performance measures, a number of sampling and resampling methods designed to estimate the outcomes of these performance measures reliably, and tests allowing us to assess the statistical significance of the observed results. Many other aspects linked to each of these steps were also surveyed along the way, such as the notions of bias and variance and the debate over the need to practice statistical significance testing. The one aspect of evaluation that has not yet been questioned is the issue of determining an appropriate testbed for our experiments. All the components discussed so far in this book rest on the implicit assumption expressed in the second sentence of this chapter: "Given one or more fixed domains." It is now time to expand on this issue, because the application, results, and subsequent interpretation of the different components of the evaluation process depend critically on the domains on which they are assessed and quantified. Furthermore, selecting datasets on which to evaluate the algorithms is a nontrivial undertaking.
One important result connected to the choice of datasets on which to evaluate learning algorithms is summarized in Wolpert's "No Free Lunch" theorems. Informally, these theorems state that, averaged uniformly over all possible problems, every learning algorithm performs equally well; they thus underscore the importance of evaluating algorithms on a large and varied collection of problems, because results obtained on only a small sample of problems may be biased toward the algorithms that happen to suit those problems.
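To make the averaging argument concrete, the following is a minimal sketch (not from the book; the domain, learners, and function names are all illustrative) that enumerates every possible labeling of a four-point domain and scores two different learners on the points they have not seen. Under a uniform distribution over all targets, both learners obtain the same average off-training-set accuracy, which is the flavor of the No Free Lunch result.

```python
from itertools import product

# Toy domain of four points: points 0-1 form the training set,
# points 2-3 are the off-training-set points used for scoring.
DOMAIN_SIZE = 4
TRAIN_IDX = [0, 1]
TEST_IDX = [2, 3]

def learner_zero(train_labels):
    """Ignores the data and predicts 0 on every unseen point."""
    return lambda i: 0

def learner_majority(train_labels):
    """Predicts the majority label of the training set on unseen points."""
    guess = int(sum(train_labels) * 2 > len(train_labels))
    return lambda i: guess

def avg_off_training_accuracy(learner):
    """Average accuracy on TEST_IDX over all 2^4 = 16 possible targets."""
    targets = list(product([0, 1], repeat=DOMAIN_SIZE))
    total = 0.0
    for target in targets:
        predict = learner([target[i] for i in TRAIN_IDX])
        hits = sum(predict(i) == target[i] for i in TEST_IDX)
        total += hits / len(TEST_IDX)
    return total / len(targets)

# Both learners average exactly 0.5: uniformly over all targets,
# no learner beats any other on points it has not seen.
print(avg_off_training_accuracy(learner_zero))      # 0.5
print(avg_off_training_accuracy(learner_majority))  # 0.5
```

Because good performance on one subset of problems is necessarily offset by poor performance elsewhere, an algorithm that wins on a handful of benchmark datasets carries no guarantee beyond them; this is why the breadth and representativeness of the chosen testbed matter.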
- Type: Chapter
- Information: Evaluating Learning Algorithms: A Classification Perspective, pp. 292–307. Publisher: Cambridge University Press. Print publication year: 2011.