Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Introduction
- Part I Inverse Problems
- 1 Bayesian Inverse Problems and Well-Posedness
- 2 The Linear-Gaussian Setting
- 3 Optimization Perspective
- 4 Gaussian Approximation
- 5 Monte Carlo Sampling and Importance Sampling
- 6 Markov Chain Monte Carlo
- Exercises for Part I
- Part II Data Assimilation
- 7 Filtering and Smoothing Problems and Well-Posedness
- 8 The Kalman Filter and Smoother
- 9 Optimization for Filtering and Smoothing: 3DVAR and 4DVAR
- 10 The Extended and Ensemble Kalman Filters
- 11 Particle Filter
- 12 Optimal Particle Filter
- Exercises for Part II
- Part III Kalman Inversion
- 13 Blending Inverse Problems and Data Assimilation
- References
- Index
Exercises for Part I
Published online by Cambridge University Press: 27 July 2023
Summary
In this chapter we introduce the Bayesian approach to inverse problems in which the unknown parameter and the observed data are viewed as random variables. In this probabilistic formulation, the solution of the inverse problem is the posterior distribution on the parameter given the data. We will show that the Bayesian formulation leads to a form of well-posedness: small perturbations of the forward model or the observed data translate into small perturbations of the posterior distribution. Well-posedness requires a notion of distance between probability measures. We introduce the total variation and Hellinger distances, giving characterizations of them, and bounds relating them, that will be used throughout these notes. We prove well-posedness in the Hellinger distance.
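The well-posedness statement above can be illustrated numerically. The sketch below is not from the book: it assumes a hypothetical scalar toy problem with a standard Gaussian prior, identity forward map, and Gaussian observation noise, discretized on a grid. It computes the posterior for an observation and for a slightly perturbed observation, then checks that the Hellinger distance between the two posteriors is small and that it controls the total variation distance via the standard bound d_TV ≤ √2 · d_H.

```python
import numpy as np

def posterior(y, grid, prior, gamma):
    """Discretized posterior density for the toy model: forward map G(u) = u,
    observation y = G(u) + eta with eta ~ N(0, gamma^2). Returns a probability
    vector on the grid, proportional to prior(u) * likelihood(y | u)."""
    likelihood = np.exp(-0.5 * (y - grid) ** 2 / gamma ** 2)
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

def hellinger(p, q):
    """Hellinger distance between two discrete probability vectors."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def total_variation(p, q):
    """Total variation distance between two discrete probability vectors."""
    return 0.5 * np.sum(np.abs(p - q))

# Grid and standard Gaussian prior (normalized as a probability vector).
grid = np.linspace(-5.0, 5.0, 1001)
prior = np.exp(-0.5 * grid ** 2)
prior /= prior.sum()

# Posteriors for an observation and a small perturbation of it.
p = posterior(1.00, grid, prior, gamma=0.5)
q = posterior(1.01, grid, prior, gamma=0.5)

dh = hellinger(p, q)
dtv = total_variation(p, q)

# Small data perturbation -> small perturbation of the posterior,
# and d_TV <= sqrt(2) * d_H, one of the bounds relating the two distances.
print(dh, dtv)
assert dtv <= np.sqrt(2) * dh
```

Here well-posedness shows up as continuity of the map from data to posterior: shrinking the perturbation of `y` shrinks both distances toward zero. The specific grid, noise level `gamma`, and observation values are illustrative choices, not quantities from the text.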
- Inverse Problems and Data Assimilation, pp. 91–98. Publisher: Cambridge University Press. Print publication year: 2023.