Book contents
- Frontmatter
- Contents
- Preface
- Notation
- 1 Introduction and Examples
- 2 Statistical Decision Theory
- 3 Linear Discriminant Analysis
- 4 Flexible Discriminants
- 5 Feed-forward Neural Networks
- 6 Non-parametric Methods
- 7 Tree-structured Classifiers
- 8 Belief Networks
- 9 Unsupervised Methods
- 10 Finding Good Pattern Features
- A Statistical Sidelines
- Glossary
- References
- Author Index
- Subject Index
10 - Finding Good Pattern Features
Published online by Cambridge University Press: 05 August 2014
Summary
In this chapter we consider the problem of which features should be included when designing a classifier. We should make clear at the outset that this is an impossible problem; there may be no substitute for trying them all and seeing how well the resulting classifier works. However, this may be computationally impracticable, and unless a large test set is available it may be impossible to avoid selection effects: choosing the classifier that happens to do best on that particular test set rather than the one that is best for the population.
To illustrate the difficulty, consider a battery of diagnostic tests T1, …, Tm for a fairly rare disease, one which perhaps around 5% of all patients tested actually have. Suppose test T1 correctly picks up 99% of the real cases and has a very low false-positive rate. However, there is a rare special form of the disease that T1 cannot detect but T2 can, yet T2 is inaccurate on the normal form of the disease. If we evaluate the tests one at a time, we will never even think of including T2, yet T1 and T2 together may give a nearly perfect classifier: declare a patient diseased if T1 is positive, or if T1 is negative and T2 is positive. This illustrates that considering features one at a time may not be sufficient.
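A small simulation makes this concrete. The sketch below is not from the book; all probabilities (prevalence, the fraction of rare-variant cases, hit and false-positive rates) are illustrative assumptions chosen to match the story above. Screened on its own, T2 looks far worse than T1, yet the combined rule roughly triples the accuracy of T1 alone.

```python
# Illustrative sketch (not from the book): two binary tests for a disease
# with 5% prevalence, of which 10% of cases are a rare variant that only
# T2 detects. All probabilities below are assumed for the example.
import random

random.seed(0)

def draw_patient():
    """Draw (diseased, t1, t2) under the assumed probabilities."""
    diseased = random.random() < 0.05                  # 5% prevalence
    rare = diseased and random.random() < 0.10         # 10% of cases are the rare form
    common = diseased and not rare
    # T1: catches the common form, misses the rare variant, low false-positive rate.
    t1 = random.random() < (0.99 if common else 0.001)
    # T2: catches the rare variant, is inaccurate on the common form.
    t2 = random.random() < (0.99 if rare else (0.30 if common else 0.001))
    return diseased, t1, t2

patients = [draw_patient() for _ in range(200_000)]

def error_rate(rule):
    """Fraction of patients the decision rule misclassifies."""
    return sum(rule(t1, t2) != d for d, t1, t2 in patients) / len(patients)

print(f"T1 alone:                 {error_rate(lambda t1, t2: t1):.4f}")
print(f"T2 alone:                 {error_rate(lambda t1, t2: t2):.4f}")
print(f"T1, or (not T1 and T2):   "
      f"{error_rate(lambda t1, t2: t1 or (not t1 and t2)):.4f}")
```

One-at-a-time screening would rank T2 near the bottom and discard it, even though the pair (T1, T2) is where almost all of the improvement lies.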
Our aim in this chapter is to indicate single features which are likely to have good discriminatory power (feature selection) or linear combinations of features with the same aim (feature extraction).
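To illustrate the feature-selection side, here is a minimal sketch of greedy forward selection scored by cross-validated accuracy. This is one common approach, not necessarily the procedure the chapter develops; scikit-learn, the synthetic data, and every parameter here are assumptions made for the example. Note that, unlike pure single-feature screening, forward selection scores each candidate feature *together with* the features already chosen, so it can pick up complementary pairs such as T1 and T2 above (provided T1 enters first).

```python
# Sketch of greedy forward feature selection (illustrative, using
# scikit-learn for the classifier and cross-validated error estimate).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 10 candidate features, only 3 of them informative.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=3, random_state=0)

def cv_score(feature_idx):
    """Cross-validated accuracy using only the listed feature columns."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, feature_idx], y, cv=5).mean()

selected, remaining = [], list(range(X.shape[1]))
best_so_far = 0.0
while remaining:
    # Score each remaining feature jointly with the current subset.
    scores = {j: cv_score(selected + [j]) for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_so_far:   # stop when no addition helps
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best_so_far = scores[j_best]
    print(f"added feature {j_best}, CV accuracy {best_so_far:.3f}")

print("selected features:", selected)
```

Feature extraction replaces the 0/1 choice of columns with a search over linear combinations of the features, pursued in the chapter with the same goal of good discriminatory power.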
Pattern Recognition and Neural Networks, pp. 327–332. Publisher: Cambridge University Press. Print publication year: 1996.