Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part one Pattern Classification with Binary-Output Neural Networks
- Part two Pattern Classification with Real-Output Networks
- 9 Classification with Real-Valued Functions
- 10 Covering Numbers and Uniform Convergence
- 11 The Pseudo-Dimension and Fat-Shattering Dimension
- 12 Bounding Covering Numbers with Dimensions
- 13 The Sample Complexity of Classification Learning
- 14 The Dimensions of Neural Networks
- 15 Model Selection
- Part three Learning Real-Valued Functions
- Part four Algorithmics
- Appendix 1 Useful Results
- Bibliography
- Author index
- Subject index
9 - Classification with Real-Valued Functions
Published online by Cambridge University Press: 26 February 2010
Summary
Introduction
The general upper and lower bounds on sample complexity described in Chapters 4 and 5 show that the VC-dimension determines the sample complexity of the learning problem for a function class H. The results of Chapters 6 and 8 show that, for a variety of neural networks, the VC-dimension grows with the number of parameters. In particular, the lower bounds on the VC-dimension of neural networks described in Section 6.3, together with Theorem 6.5, show that, under mild conditions on the architecture of a multi-layer network and on the activation functions of its computation units, the VC-dimension grows at least linearly with the number of parameters.
These results do not, however, provide a complete explanation of the sample size requirements of neural networks for pattern classification problems. In many applications of neural networks the network parameters are adjusted on the basis of a small training set, sometimes an order of magnitude smaller than the number of parameters. In this case, we might expect the network to ‘overfit’, that is, to accurately match the training data, but predict poorly on subsequent data. Indeed, the results from Part 1 based on the VC-dimension suggest that the estimation error could be large, because VCdim(H)/m is large. Nonetheless, in many such situations these networks seem to avoid overfitting, in that the training set error is a reliable estimate of the error on subsequent examples. Furthermore, Theorem 7.1 shows that an arbitrarily small modification to the activation function can make the VC-dimension infinite, and it seems unnatural that such a change should affect the statistical behaviour of networks in applications.
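For intuition, the estimation error bounds from Part 1 are, up to constants and logarithmic factors, of the following form (a rough sketch of the standard VC uniform convergence result, not the book's exact statement; here er_P(h) denotes the error of h under the distribution P, and êr_z(h) its error on a training sample z of size m): with probability at least 1 − δ, every h in H satisfies

$$\mathrm{er}_P(h) \;\le\; \hat{\mathrm{er}}_z(h) \;+\; c\,\sqrt{\frac{\mathrm{VCdim}(H)\,\ln m + \ln(1/\delta)}{m}}$$

for some constant c. When VCdim(H)/m is large, a bound of this form gives no useful guarantee that the training error reflects the error on subsequent examples, which is why the observed reliability of such networks calls for a different explanation.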
- Type: Chapter
- Information: Neural Network Learning: Theoretical Foundations, pp. 133-139. Publisher: Cambridge University Press. Print publication year: 1999