Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part one Pattern Classification with Binary-Output Neural Networks
- Part two Pattern Classification with Real-Output Networks
- Part three Learning Real-Valued Functions
- Part four Algorithmics
- 22 Efficient Learning
- 23 Learning as Optimization
- 24 The Boolean Perceptron
- 25 Hardness Results for Feed-Forward Networks
- 26 Constructive Learning Algorithms for Two-Layer Networks
- Appendix 1 Useful Results
- Bibliography
- Author index
- Subject index
22 - Efficient Learning
Published online by Cambridge University Press: 26 February 2010
Summary
Introduction
In this part of the book, we turn our attention to aspects of the time complexity, or computational complexity, of learning. Until now we have discussed only the sample complexity of learning, and we have been using the phrase ‘learning algorithm’ without any reference to algorithmics. But issues of running time are crucial. If a learning algorithm is to be of practical value, it must, first, be possible to implement it on a computer; that is, it must be computable and therefore, in a real sense, an algorithm, not merely a function. Furthermore, it should be possible to produce a good output hypothesis ‘quickly’.
One subtlety that we have not so far explicitly dealt with is that a practical learning algorithm does not really output a hypothesis; rather, it outputs a representation of a hypothesis. In the context of neural networks, such a representation consists of a state of the network; that is, an assignment of weights and thresholds. In studying the computational complexity of a learning algorithm, one therefore might take into account the ‘complexity’ of the representation output by the learning algorithm. However, this will not be necessary in the approach taken here. For convenience, we shall continue to use notation suggesting that the output of a learning algorithm is a function from a class of hypotheses, but the reader should be aware that, formally, the output is a representation of such a function.
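In the neural network setting, this distinction can be made concrete. The following minimal sketch (our own illustration, not the book's notation; the class name, the two-layer linear-threshold architecture, and all parameter values are assumptions chosen for the example) shows a network state, a finite list of weights and thresholds, serving as the representation, with the hypothesis recovered by evaluating the network in that state.

```python
# Illustrative sketch: a learning algorithm outputs a network *state*
# (weights and thresholds) -- a representation of a hypothesis -- and
# that state induces a binary-valued function. The architecture here
# (a two-layer linear-threshold network) is an assumed example.
from dataclasses import dataclass
from typing import List


@dataclass
class NetworkState:
    """Representation of a hypothesis: weights and thresholds."""
    hidden_weights: List[List[float]]  # one weight vector per hidden unit
    hidden_thresholds: List[float]     # one threshold per hidden unit
    output_weights: List[float]        # weights on the hidden-unit outputs
    output_threshold: float

    def hypothesis(self, x: List[float]) -> int:
        """Evaluate the function this state represents on input x."""
        hidden = [
            1 if sum(w_i * x_i for w_i, x_i in zip(w, x)) >= t else 0
            for w, t in zip(self.hidden_weights, self.hidden_thresholds)
        ]
        s = sum(v * h for v, h in zip(self.output_weights, hidden))
        return 1 if s >= self.output_threshold else 0


# A learning algorithm would return a NetworkState (the representation);
# calling .hypothesis evaluates the hypothesis it represents.
state = NetworkState(
    hidden_weights=[[1.0, -1.0], [0.5, 0.5]],
    hidden_thresholds=[0.0, 0.4],
    output_weights=[1.0, 1.0],
    output_threshold=1.5,
)
print(state.hypothesis([1.0, 0.2]))  # -> 1
```

The point of the sketch is that the object manipulated and output by the algorithm is the finite parameter list, while the hypothesis itself is the (possibly infinite-domain) function that list induces.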
Neural Network Learning: Theoretical Foundations, pp. 299–306. Publisher: Cambridge University Press. Print publication year: 1999.