Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part I Fundamental tools and concepts
- Part II Tools for fully data-driven machine learning
- 5 Automatic feature design for regression
- 6 Automatic feature design for classification
- 7 Kernels, backpropagation, and regularized cross-validation
- Part III Methods for large scale machine learning
- Part IV Appendices
- References
- Index
6 - Automatic feature design for classification
from Part II - Tools for fully data-driven machine learning
Summary
In Chapter 6 we closely mirror the exposition of the previous chapter on regression, beginning with the approximation of the underlying data-generating function itself by bases of features, and concluding with a description of cross-validation in the context of classification. In short, we will see that all of the tools from the previous chapter can be applied to the automatic design of features for the problem of classification as well.
Automatic feature design for the ideal classification scenario
In Fig. 6.1 we illustrate a prototypical dataset on which we perform the general task of two class classification, where the two classes can be effectively separated using a nonlinear boundary. In contrast to the examples given in Section 4.5, where visualization or scientific knowledge guided the fashioning of a feature transformation to capture this nonlinearity, in this chapter we suppose that this cannot be done due to the complexity and/or high dimensionality of the data. At the heart of the two class classification framework is the tacit assumption that the data we receive are in fact noisy samples of some underlying indicator function, a nonlinear generalization of the step function briefly discussed in Section 4.5, like the one shown in the right panel of Fig. 6.1. As with regression, our goal with classification is then to approximate this data-generating indicator function as well as we can using the data at our disposal.
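To make this data model concrete, the following is a minimal sketch of generating a two class dataset as noisy samples of an underlying indicator function. The circular decision boundary, the number of points, and the 5% label-flip rate are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def indicator(X):
    # Hypothetical nonlinear indicator function y(x): +1 inside a circle
    # of radius sqrt(0.1) centered at (0.5, 0.5), -1 outside.
    return np.where(np.sum((X - 0.5) ** 2, axis=1) < 0.1, 1.0, -1.0)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))   # 200 two-dimensional input points
y = indicator(X)                            # clean labels y(x) taking values in {-1, 1}

# Model noise by flipping a small fraction of the labels.
flip = rng.random(len(y)) < 0.05
y[flip] *= -1.0
```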
In this section we will assume the impossible: that we have clean and complete access to every data point in the space of a two class classification environment, whose labels take on values in {-1, 1}, and hence access to its associated indicator function y(x). Although an indicator function is not continuous, the same bases of continuous features discussed in the previous chapter can be used to represent it (near) perfectly.
Approximation of piecewise continuous functions
In Section 5.1 we saw how fixed and adjustable neural network bases of features can be used to approximate continuous functions. These bases can also be used to effectively approximate the broader class of piecewise continuous functions, composed of fragments of continuous functions with gaps or jumps between the various pieces.
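As a rough illustration of this point, the sketch below fits a fixed basis of continuous features to a one-dimensional indicator (step) function by least squares, recovering a continuous approximation of a piecewise continuous target. The sinusoidal basis and the number of features D are illustrative choices, not the specific construction used in the book.

```python
import numpy as np

# One-dimensional indicator function with a jump at x = 0.
y_true = lambda x: np.where(x < 0.0, -1.0, 1.0)

x = np.linspace(-1.0, 1.0, 400)
y = y_true(x)

# Fixed basis of continuous features: a constant plus D sinusoids of increasing frequency.
D = 10
F = np.column_stack([np.ones_like(x)] + [np.sin(np.pi * (d + 1) * x) for d in range(D)])

# Least-squares fit of the basis weights to the indicator values.
w, *_ = np.linalg.lstsq(F, y, rcond=None)
approx = F @ w   # continuous approximation of the piecewise continuous target
```

Increasing D sharpens the approximation near the jump, mirroring how richer feature bases can capture the discontinuities of an indicator function.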
- Type: Chapter
- Information: Machine Learning Refined: Foundations, Algorithms, and Applications, pp. 166-194. Publisher: Cambridge University Press. Print publication year: 2016.