Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part one Pattern Classification with Binary-Output Neural Networks
- 2 The Pattern Classification Problem
- 3 The Growth Function and VC-Dimension
- 4 General Upper Bounds on Sample Complexity
- 5 General Lower Bounds on Sample Complexity
- 6 The VC-Dimension of Linear Threshold Networks
- 7 Bounding the VC-Dimension using Geometric Techniques
- 8 Vapnik-Chervonenkis Dimension Bounds for Neural Networks
- Part two Pattern Classification with Real-Output Networks
- Part three Learning Real-Valued Functions
- Part four Algorithmics
- Appendix 1 Useful Results
- Bibliography
- Author index
- Subject index
6 - The VC-Dimension of Linear Threshold Networks
Published online by Cambridge University Press: 26 February 2010
Feed-Forward Neural Networks
In this chapter, and many subsequent ones, we deal with feed-forward neural networks. Initially, we shall be particularly concerned with feed-forward linear threshold networks, which can be thought of as combinations of perceptrons.
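As a concrete illustration, a single linear threshold unit (a perceptron) outputs 1 when a weighted sum of its inputs exceeds a threshold, and 0 otherwise. The following minimal Python sketch shows one such unit; the function and variable names are illustrative choices, not from the text.

```python
# A minimal sketch of a single linear threshold unit (perceptron).
# The names threshold_unit, w, theta, x are illustrative assumptions.

def threshold_unit(w, theta, x):
    """Return 1 if the weighted sum of the inputs exceeds the threshold theta, else 0."""
    activation = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if activation > theta else 0

# Example: weights (1, 1) with threshold 1.5 compute the Boolean AND.
assert threshold_unit([1.0, 1.0], 1.5, [1, 1]) == 1
assert threshold_unit([1.0, 1.0], 1.5, [0, 1]) == 0
```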
To define a neural network class, we need to specify the architecture of the network and the parameterized functions computed by its components. In general, a feed-forward neural network has as its main components a set of computation units, a set of input units, and a set of connections from input or computation units to computation units. These connections are directed; that is, each connection is from a particular unit to a particular computation unit. The key structural property of a feed-forward network—the feed-forward condition—is that these connections do not form any loops. This means that the units can be labelled with integers in such a way that if there is a connection from the unit labelled i to the computation unit labelled j then i < j.
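Equivalently, the feed-forward condition says that the connection graph is acyclic, and an integer labelling of the required kind is exactly a topological ordering of the units. The sketch below produces such a labelling, or reports a loop, using Kahn's algorithm; the adjacency representation and all names are assumptions made for illustration.

```python
# Sketch: label the units so that every connection (i, j) satisfies
# label[i] < label[j], i.e. compute a topological order of the
# connection graph. Fails exactly when the connections form a loop.
# The representation and names are illustrative assumptions.

def feedforward_labelling(units, connections):
    """units: iterable of unit ids; connections: list of (src, dst) pairs."""
    indegree = {u: 0 for u in units}
    successors = {u: [] for u in units}
    for src, dst in connections:
        successors[src].append(dst)
        indegree[dst] += 1
    ready = [u for u in units if indegree[u] == 0]
    label, order = {}, 0
    while ready:
        u = ready.pop()
        label[u] = order          # u is labelled only after all its predecessors
        order += 1
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    if len(label) != len(indegree):
        raise ValueError("connections contain a loop: not feed-forward")
    return label
```

Every unit is labelled only after all units with connections into it, so any labelling this procedure returns satisfies the feed-forward condition.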
Associated with each unit is a real number called its output. The output of a computation unit is a particular function of the outputs of units that are connected to it. The feed-forward condition guarantees that the outputs of all units in the network can be written as an explicit function of the network inputs.
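For instance, processing the computation units in increasing label order means each unit reads only outputs that have already been computed, which is how the explicit function of the network inputs is obtained. A sketch under the same illustrative representation, using linear threshold units as the computation units:

```python
# Sketch: evaluate a feed-forward linear threshold network by visiting
# computation units in a feed-forward (topological) order. Each unit has
# one weight per incoming connection and a threshold. All names and the
# representation are illustrative assumptions.

def evaluate(inputs, comp_units):
    """inputs: dict mapping input-unit ids to their values.
    comp_units: list of (unit_id, predecessor ids, weights, theta),
    listed in a feed-forward order."""
    out = dict(inputs)  # outputs of input units are just the input values
    for unit_id, preds, w, theta in comp_units:
        s = sum(wi * out[p] for wi, p in zip(w, preds))
        out[unit_id] = 1 if s > theta else 0
    return out

# Example: a two-layer network of threshold units computing XOR of a and b.
net = [
    ("h1", ["a", "b"], [1.0, 1.0], 0.5),    # OR of the inputs
    ("h2", ["a", "b"], [1.0, 1.0], 1.5),    # AND of the inputs
    ("y",  ["h1", "h2"], [1.0, -1.0], 0.5)  # OR and not AND, i.e. XOR
]
print(evaluate({"a": 1, "b": 0}, net)["y"])  # 1
print(evaluate({"a": 1, "b": 1}, net)["y"])  # 0
```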
- Type: Chapter
- Information: Neural Network Learning: Theoretical Foundations, pp. 74-85
- Publisher: Cambridge University Press
- Print publication year: 1999