Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part I Fundamental tools and concepts
- Part II Tools for fully data-driven machine learning
- 5 Automatic feature design for regression
- 6 Automatic feature design for classification
- 7 Kernels, backpropagation, and regularized cross-validation
- Part III Methods for large scale machine learning
- Part IV Appendices
- References
- Index
5 - Automatic feature design for regression
from Part II - Tools for fully data-driven machine learning
Summary
As discussed at the end of Section 3.2, rarely can we design perfect, or even strongly performing, features for the general regression problem by relying entirely on our understanding of a given dataset. In this chapter we describe tools for automatically designing proper features for the general regression problem, without the explicit incorporation of human knowledge gained from, e.g., visualization of the data, philosophical reflection, or domain expertise.
We begin by introducing the tools used to perform regression in the ideal but extremely unrealistic scenario where we have complete and noiseless access to all possible input feature/output pairs of a regression phenomenon, i.e., a continuous function (as first discussed in Section 3.2). Here we will see how, in the case where we have such unfettered access to regression data, perfect features can be designed automatically by combining elements from a set of basic feature transformations. We then see how this process for building features translates, albeit imperfectly, to the general instance of regression where we have access to only noisy samples of a regression relationship. Following this, we describe cross-validation, a crucial procedure for employing automatic feature design in practice. Finally, we discuss several issues pertaining to the best choice of primary features for automatic feature design in practice.
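The workflow summarized above can be sketched concretely: fit combinations of basic feature transformations to noisy samples, and use cross-validation to choose among candidate feature sets. The sketch below is purely illustrative and not taken from the book; the data-generating function, noise level, polynomial feature maps, and hold-out split are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical underlying function and noisy samples of it (assumed here
# for illustration; the book's examples may differ).
def y_true(x):
    return np.sin(2 * np.pi * x)

x = rng.uniform(0, 1, 40)
y = y_true(x) + 0.1 * rng.normal(size=x.size)

# Candidate feature designs: polynomial bases of increasing degree,
# i.e., combinations of the basic transformations 1, x, x^2, ...
def poly_features(x, degree):
    return np.vander(x, degree + 1, increasing=True)

# Simple hold-out cross-validation: fit weights by least squares on a
# training split, then score each candidate on a validation split.
train, val = np.arange(30), np.arange(30, 40)
errors = {}
for degree in range(1, 10):
    F_train = poly_features(x[train], degree)
    w, *_ = np.linalg.lstsq(F_train, y[train], rcond=None)
    residual = poly_features(x[val], degree) @ w - y[val]
    errors[degree] = np.mean(residual ** 2)

# The degree with the lowest validation error is the chosen feature design.
best_degree = min(errors, key=errors.get)
```

A too-small degree underfits the nonlinear relationship (large validation error), while a very large degree fits the noise; validation error selects a compromise automatically.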
Automatic feature design for the ideal regression scenario
In Fig. 5.1 we illustrate a prototypical dataset on which we perform regression, where our input feature and output have some sort of clear nonlinear relationship. Recall from Section 3.2 that at the heart of feature design for regression is the tacit assumption that the data we receive are in fact noisy samples of some underlying continuous function (shown in dashed black in Fig. 5.1). Our goal in solving the general regression problem is then, using the data at our disposal (which we may think of as noisy glimpses of the underlying function), to approximate this data-generating function as well as we can.
In this section we will assume the impossible: that we have complete access to a clean version of every input feature/output pair of a regression phenomenon, or in other words that our data completely traces out a continuous function y(x).
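In this ideal setting, dense and noiseless samples of y(x) let a combination of basic feature transformations approximate the function essentially perfectly. A minimal sketch, in which the particular function, interval, and polynomial basis are all assumptions chosen for illustration:

```python
import numpy as np

# Hypothetical underlying continuous function, assumed for illustration.
def y(x):
    return np.exp(-x) * np.cos(3 * x)

# "Complete, noiseless access": a dense grid of clean input/output pairs.
x = np.linspace(0, 2, 500)

# Combine basic feature transformations (polynomial basis elements
# 1, x, x^2, ..., x^11) and solve for the weights by least squares.
F = np.vander(x, 12, increasing=True)
w, *_ = np.linalg.lstsq(F, y(x), rcond=None)

# With clean, dense data the fitted combination tracks y(x) very closely.
max_error = np.max(np.abs(F @ w - y(x)))
```

With no noise to mislead the fit, increasing the number of basis elements drives the approximation error toward zero, which is precisely why this scenario, however unrealistic, is the natural starting point.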
- Machine Learning Refined: Foundations, Algorithms, and Applications, pp. 131-165. Publisher: Cambridge University Press. Print publication year: 2016.