Book contents
- Frontmatter
- Contents
- List of Acronyms
- Notation
- Foreword
- 1 Introduction to the World of Sparsity
- 2 The Wavelet Transform
- 3 Redundant Wavelet Transform
- 4 Nonlinear Multiscale Transforms
- 5 Multiscale Geometric Transforms
- 6 Sparsity and Noise Removal
- 7 Linear Inverse Problems
- 8 Morphological Diversity
- 9 Sparse Blind Source Separation
- 10 Dictionary Learning
- 11 Three-Dimensional Sparse Representations
- 12 Multiscale Geometric Analysis on the Sphere
- 13 Compressed Sensing
- 14 This Book's Take-Home Message
- Notes
- References
- Index
- Plate section
10 - Dictionary Learning
Published online by Cambridge University Press: 05 October 2015
Summary
INTRODUCTION
A data set can be decomposed in many dictionaries, and we argue in this book that the "best" dictionary is the one providing the sparsest (most economical) representation. In practice, it is convenient to use dictionaries with a fast implicit transform (such as those described in detail in the previous chapters), which allow us to obtain the coefficients directly and to reconstruct the signal from them using fast algorithms running in linear or almost linear time (unlike explicit matrix-vector multiplications). We have also seen in Chapter 8 that fixed dictionaries can be gathered together to build a larger dictionary that describes the data in a more versatile way.

All these dictionaries are designed to handle specific content and are restricted to signals and images of a certain type. For instance, the Fourier basis represents stationary and periodic signals well, wavelets are good for analyzing isotropic objects at different scales, and curvelets are designed for anisotropic and curvilinear features. Hence, the representation space used in our analysis can be seen as a prior on the data. Fixed dictionaries, although their very fast implicit analysis and synthesis operators make them attractive from a practical point of view, cannot guarantee sparse representations of new classes of signals of interest that present more complex patterns and features.

What can one do if the data cannot be represented sparsely enough by any of these fixed (or combined) existing dictionaries, or if the morphology of the features contained in the data is not known? Is there a way to make our data analysis more adaptive by optimizing for a dedicated dictionary? To answer these questions, a new field called dictionary learning (DL) has emerged. Dictionary learning offers the possibility of learning an adaptive dictionary Φ directly from the data (or from a set of exemplars that we believe represent the data well).
DL is at the interface of machine learning and signal processing.
The problem of dictionary learning in its overdetermined form (that is, when the number of atoms in the dictionary is smaller than or equal to the ambient dimension of the signal) has been studied in depth and can be approached using many viable techniques such as principal component analysis (PCA) and its variants.
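To make the alternating structure of dictionary learning concrete, the following is a minimal NumPy sketch (not the book's own algorithm) that alternates a crude sparse-coding step with a Method of Optimal Directions (MOD) style least-squares dictionary update. The hard-thresholding coder below is a simplified stand-in for a proper pursuit algorithm such as OMP, and all names, sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, X, n_nonzero):
    """Crude sparse coding: keep the n_nonzero largest correlations
    per sample (a hard-thresholding stand-in for a pursuit such as OMP)."""
    A = D.T @ X
    for j in range(A.shape[1]):
        col = A[:, j]
        keep = np.argsort(np.abs(col))[-n_nonzero:]
        mask = np.zeros(col.shape, dtype=bool)
        mask[keep] = True
        col[~mask] = 0.0
    return A

def mod_update(X, A):
    """MOD dictionary update: least-squares fit D = X A^+,
    followed by atom (column) normalization."""
    D = X @ np.linalg.pinv(A)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D

# Toy problem: signals of dimension 16, overcomplete dictionary of 32 atoms.
d, k, n = 16, 32, 500
X = rng.standard_normal((d, n))
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0, keepdims=True)

for _ in range(10):  # alternate sparse coding and dictionary update
    A = sparse_code(D, X, n_nonzero=3)
    D = mod_update(X, A)

err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

Each iteration fixes one unknown (the codes A or the dictionary D) and solves for the other, which is the basic design shared by MOD, K-SVD, and related learning schemes; only the two inner steps differ between methods.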
- Type: Chapter
- Book: Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis, pp. 263-274
- Publisher: Cambridge University Press
- Print publication year: 2015