9 - Dimension reduction techniques
from Part III - Methods for large scale machine learning
Summary
Large datasets, as well as data consisting of a large number of features, present computational problems in the training of predictive models. In this chapter we discuss several useful techniques for reducing the dimension of a given dataset, that is, for reducing either the number of data points or the number of features, often employed in order to make predictive learning methods scale to larger datasets. More specifically, we discuss widely used methods for reducing the data dimension (the number of data points) of a dataset, including random subsampling and K-means clustering. We then detail a common way of reducing the feature dimension (the number of features) of a dataset, as illustrated in Fig. 9.1. A classical approach to feature dimension reduction, principal component analysis (PCA), while often used for general data analysis, is a relatively poor tool for reducing the feature dimension of predictive modeling data. However, PCA presents a fundamental mathematical archetype, the matrix factorization, that provides a very useful way of organizing our thinking about a wide array of important learning models (including linear regression, K-means, and recommender systems, introduced after detailing PCA in this chapter), all of which may be thought of as variations on the simple theme of matrix factorization.
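To make the archetype concrete, the matrix factorization underlying PCA and the other models listed above can be sketched as follows; the symbols used here are illustrative choices and may differ from the chapter's own notation.

```latex
% A minimal sketch of the matrix factorization archetype: an N x P data
% matrix X is approximated by the product of an N x K matrix C (a small
% spanning set, with K much smaller than N and P) and a K x P weight
% matrix W.  Symbols are illustrative, not necessarily the chapter's.
\[
\mathbf{X}_{N \times P} \;\approx\; \mathbf{C}_{N \times K}\,\mathbf{W}_{K \times P}
\]
% PCA, for instance, chooses C and W to minimize the Frobenius-norm error
\[
\underset{\mathbf{C},\,\mathbf{W}}{\text{minimize}}\;\;
\big\| \mathbf{C}\mathbf{W} - \mathbf{X} \big\|_{F}^{2}
\]
```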
Techniques for data dimension reduction
In this section we detail two common ways of reducing the data dimension of a dataset: random subsampling and K-means clustering.
Random subsampling
Random subsampling is a simple and intuitive way of reducing the data dimension of a dataset, and is often the first approach employed when performing regression/classification on datasets too large for available computational resources. Given a set of P points we keep a random subsample of S < P points from the entire set. Clearly, the smaller we choose S the larger the chance we lose an important structural characteristic of the underlying dataset (for example, the geometry of the separating boundary between two classes of data). While there is no formula or hard rule for how large S should be, a simple guideline used in practice is to choose S as large as possible given the computational resources available, so as to minimize this risk.
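As a concrete illustration, a random subsample can be drawn in a few lines of Python; the sketch below (the variable names and array layout are our own assumptions, not taken from the text) keeps S of the P points chosen uniformly at random without replacement.

```python
import numpy as np

# Minimal sketch of random subsampling (illustrative only).
# X is assumed to be a P x N array of P data points with N features each,
# and y holds the corresponding P labels/outputs.
def random_subsample(X, y, S, seed=0):
    """Keep a random subsample of S < P points from the full dataset."""
    rng = np.random.default_rng(seed)
    P = X.shape[0]
    idx = rng.choice(P, size=S, replace=False)  # S distinct indices, drawn uniformly
    return X[idx], y[idx]

# Example usage on synthetic data: keep 1,000 of 100,000 points.
X = np.random.randn(100000, 10)
y = np.sign(np.random.randn(100000))
X_small, y_small = random_subsample(X, y, S=1000)
```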