Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- 1 Introduction to compressed sensing
- 2 Second-generation sparse modeling: structured and collaborative signal analysis
- 3 Xampling: compressed sensing of analog signals
- 4 Sampling at the rate of innovation: theory and applications
- 5 Introduction to the non-asymptotic analysis of random matrices
- 6 Adaptive sensing for sparse recovery
- 7 Fundamental thresholds in compressed sensing: a high-dimensional geometry approach
- 8 Greedy algorithms for compressed sensing
- 9 Graphical models concepts in compressed sensing
- 10 Finding needles in compressed haystacks
- 11 Data separation by sparse representations
- 12 Face recognition by sparse representation
- Index
9 - Graphical models concepts in compressed sensing
Published online by Cambridge University Press: 05 November 2012
Summary
This chapter surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ₁-penalized least squares (known as the LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk.
Introduction
The problem of reconstructing a high-dimensional vector x ∈ ℝn from a collection of observations y ∈ ℝm arises in a number of contexts, ranging from statistical learning to signal processing. It is often assumed that the measurement process is approximately linear, i.e. that

y = Ax + w,

where A ∈ ℝm×n is a known measurement matrix, and w is a noise vector.
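The linear measurement model can be made concrete with a short simulation. The following sketch draws a k-sparse signal, a Gaussian measurement matrix, and noisy observations; all dimensions, the seed, and the noise level are illustrative choices, not values from the chapter.

```python
import numpy as np

# Illustrative sketch of the model y = A x + w (dimensions are arbitrary):
# a k-sparse signal x in R^n is observed through m < n linear measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)            # k nonzero entries

A = rng.normal(size=(m, n)) / np.sqrt(m)   # measurement matrix A in R^{m x n}
w = 0.01 * rng.normal(size=m)              # additive noise vector
y = A @ x + w                              # observations y in R^m
```

Note that m < n, so recovering x from y is underdetermined without the sparsity prior.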
The graphical models approach to such a reconstruction problem postulates a joint probability distribution on (x, y) which takes, without loss of generality, the form

p(dx, dy) = p(dy|x) p(dx).

The conditional distribution p(dy|x) models the noise process, while the prior p(dx) encodes information on the vector x; within compressed sensing, it can in particular describe the sparsity of x. In a graphical models approach, either of these distributions (or both) factorizes according to a specific graph structure. The resulting posterior distribution p(dx|y) is used for inferring x given y.
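A standard surrogate for full posterior inference is the LASSO estimate mentioned in the summary, which a simple iterative soft-thresholding (ISTA) loop can compute; the message passing algorithms the chapter develops are refinements of iterations of this flavor. The sketch below is a generic ISTA implementation, not the chapter's AMP algorithm; the problem instance and the regularization parameter are hypothetical choices for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Component-wise soft-thresholding: the proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, A, lam, n_iter=500):
    # Iterative soft-thresholding for the LASSO objective
    #   minimize over x:  (1/2)||y - A x||_2^2 + lam * ||x||_1
    # using gradient steps of size 1/L, L = largest eigenvalue of A^T A.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Hypothetical instance: recover a 5-sparse vector from 80 noisy
# linear measurements (all numbers are illustrative).
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.normal(size=m)

x_hat = ista(y, A, lam=0.02)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

ISTA converges only at rate O(1/t); the approximate message passing iterations analyzed in this chapter add an Onsager correction term that dramatically speeds up convergence on large random matrices A.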
There are many reasons to be skeptical about the idea that the joint probability distribution p(dx, dy) can be determined, and used for reconstructing x.
- Type: Chapter
- Information: Compressed Sensing: Theory and Applications, pp. 394–438. Publisher: Cambridge University Press. Print publication year: 2012