Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- 1 Introduction
- Part 1 Foundations
- 2 A Gentle Start
- 3 A Formal Learning Model
- 4 Learning via Uniform Convergence
- 5 The Bias-Complexity Trade-off
- 6 The VC-Dimension
- 7 Nonuniform Learnability
- 8 The Runtime of Learning
- Part 2 From Theory to Algorithms
- Part 3 Additional Learning Models
- Part 4 Advanced Theory
- Appendix A Technical Lemmas
- Appendix B Measure Concentration
- Appendix C Linear Algebra
- References
- Index
8 - The Runtime of Learning
from Part 1 - Foundations
Published online by Cambridge University Press: 05 July 2014
Summary
So far in the book we have studied the statistical perspective of learning, namely, how many samples are needed to learn. In other words, we focused on the amount of information learning requires. However, when considering automated learning, computational resources also play a major role in determining the complexity of a task: that is, how much computation is involved in carrying out a learning task. Once a sufficient training sample is available to the learner, some computation must be done to extract a hypothesis or to predict the label of a given test instance. These computational resources are crucial in any practical application of machine learning. We refer to these two types of resources as the sample complexity and the computational complexity. In this chapter, we turn our attention to the computational complexity of learning.
The computational complexity of learning should be viewed in the wider context of the computational complexity of general algorithmic tasks. This area has been extensively investigated; see, for example, Sipser (2006). The introductory comments that follow summarize the basic ideas of that general theory that are most relevant to our discussion.
The actual runtime (in seconds) of an algorithm depends on the specific machine the algorithm is being implemented on (e.g., what the clock rate of the machine's CPU is). To avoid dependence on the specific machine, it is common to analyze the runtime of algorithms in an asymptotic sense.
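To make the machine-independence point concrete, the sketch below (a minimal illustration, not taken from the book) analyzes two search algorithms by counting comparisons rather than measuring seconds. The comparison count depends only on the input size n, not on any machine's clock rate, which is exactly the quantity asymptotic analysis tracks: linear scan performs Θ(n) comparisons in the worst case, while binary search on sorted input performs O(log n).

```python
def linear_search_ops(xs, target):
    """Scan left to right. Returns (index, comparisons); Theta(n) worst case."""
    ops = 0
    for i, x in enumerate(xs):
        ops += 1                      # one equality comparison per element
        if x == target:
            return i, ops
    return -1, ops

def binary_search_ops(xs, target):
    """Binary search on sorted xs. Returns (index, comparisons); O(log n)."""
    lo, hi, ops = 0, len(xs) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        ops += 1                      # count the probe of xs[mid]
        if xs[mid] == target:
            return mid, ops
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, ops

xs = list(range(1_000_000))
_, lin_ops = linear_search_ops(xs, 999_999)   # worst case: target is last
_, bin_ops = binary_search_ops(xs, 999_999)
print(lin_ops, bin_ops)                        # ~n versus ~log2(n) comparisons
```

Run on any machine, the counts are identical; only the seconds-per-comparison differ, which is why runtime is reported as a function of input size up to constant factors.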
- Type: Chapter
- Information: Understanding Machine Learning: From Theory to Algorithms, pp. 73-86. Publisher: Cambridge University Press. Print publication year: 2014.