Book contents
- Frontmatter
- Contents
- Preface
- 1 Data Mining
- 2 MapReduce and the New Software Stack
- 3 Finding Similar Items
- 4 Mining Data Streams
- 5 Link Analysis
- 6 Frequent Itemsets
- 7 Clustering
- 8 Advertising on the Web
- 9 Recommendation Systems
- 10 Mining Social-Network Graphs
- 11 Dimensionality Reduction
- 12 Large-Scale Machine Learning
- Index
- References
11 - Dimensionality Reduction
Published online by Cambridge University Press: 05 December 2014
Summary
There are many sources of data that can be viewed as a large matrix. We saw in Chapter 5 how the Web can be represented as a transition matrix. In Chapter 9, the utility matrix was a point of focus. And in Chapter 10 we examined matrices that represent social networks. In many of these matrix applications, the matrix can be summarized by finding “narrower” matrices that in some sense are close to the original. These narrow matrices have only a small number of rows or a small number of columns, and therefore can be used much more efficiently than the original large matrix. The process of finding these narrow matrices is called dimensionality reduction.
We saw a preliminary example of dimensionality reduction in Section 9.4. There, we discussed UV-decomposition of a matrix and gave a simple algorithm for finding this decomposition. Recall that a large matrix M was decomposed into two matrices U and V whose product UV was approximately M. The matrix U had a small number of columns whereas V had a small number of rows, so each was significantly smaller than M, and yet together they represented most of the information in M that was useful in predicting ratings of items by individuals.
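To make the idea concrete, here is a minimal sketch of a UV-decomposition in Python/NumPy. It minimizes the squared (Frobenius) error between M and UV by full-matrix gradient descent, a simplified stand-in for the element-at-a-time optimization described in Section 9.4; the 5 × 5 matrix, the dimension d = 2, and the step size are illustrative choices, not values from the book.

```python
import numpy as np

def uv_decompose(M, d, steps=5000, lr=0.01, seed=0):
    """Approximate the n-by-m matrix M as U @ V, with U n-by-d and
    V d-by-m, by gradient descent on the squared (Frobenius) error."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, d))
    V = 0.1 * rng.standard_normal((d, m))
    for _ in range(steps):
        E = U @ V - M            # current error matrix
        gU = E @ V.T             # gradient of the error with respect to U
        gV = U.T @ E             # gradient of the error with respect to V
        U -= lr * gU
        V -= lr * gV
    return U, V

# A small, made-up "utility matrix" of ratings
M = np.array([[5., 2., 4., 4., 3.],
              [3., 1., 2., 4., 1.],
              [2., 0., 3., 1., 4.],
              [2., 5., 4., 3., 5.],
              [4., 4., 5., 4., 0.]])
U, V = uv_decompose(M, d=2)
print(np.round(U @ V, 1))        # a rank-2 approximation of M
```

Note the economy: U and V together hold 5·2 + 2·5 = 20 numbers versus 25 for M, and for a large n × m matrix the savings factor grows to roughly nm/(d(n + m)).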
In this chapter we shall explore the idea of dimensionality reduction in more detail. We begin with a discussion of eigenvalues and their use in “principal component analysis” (PCA). We cover singular-value decomposition, a more powerful version of UV-decomposition. Finally, because we are always interested in the largest data sizes we can handle, we look at another form of decomposition, called CUR-decomposition, which is a variant of singular-value decomposition that keeps the matrices of the decomposition sparse if the original matrix is sparse.
Eigenvalues and Eigenvectors
We shall assume that you are familiar with the basics of matrix algebra: multiplication, transpose, determinants, and solving linear equations, for example. In this section, we shall define eigenvalues and eigenvectors of a symmetric matrix and show how to find them.
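As a preview of how such eigenpairs can be computed, the sketch below shows power iteration in Python/NumPy: repeatedly multiply a unit vector by M and renormalize, so the vector converges to the principal eigenvector, and the Rayleigh quotient xᵀMx then yields the corresponding eigenvalue. The 2 × 2 example matrix and the fixed iteration count are illustrative; a more careful version would test for convergence instead.

```python
import numpy as np

def power_iteration(M, iters=100, seed=0):
    """Principal eigenvalue and a unit eigenvector of a symmetric
    matrix M, found by repeated multiplication and normalization."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(M.shape[0])
    for _ in range(iters):
        y = M @ x
        x = y / np.linalg.norm(y)   # rescale to unit length
    lam = x @ M @ x                 # Rayleigh quotient = eigenvalue
    return lam, x

M = np.array([[3., 2.],
              [2., 6.]])
lam, x = power_iteration(M)
print(round(lam, 4), np.round(x, 4))   # 7.0 and ±(0.4472, 0.8944)

# "Deflating" M by subtracting lam * x x^T exposes the next eigenpair
lam2, _ = power_iteration(M - lam * np.outer(x, x))
print(round(lam2, 4))                  # 2.0, the second eigenvalue
```

The deflation step works because subtracting λxxᵀ zeroes out the contribution of the principal eigenpair while leaving the other eigenvectors of the symmetric matrix untouched, so power iteration on the deflated matrix converges to the second-largest eigenvalue.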
- Type: Chapter
- Information: Mining of Massive Datasets, pp. 384–414
- Publisher: Cambridge University Press
- Print publication year: 2014