Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- 5 Linear least squares estimation: method of normal equations
- 6 A geometric view: projection and invariance
- 7 Nonlinear least squares estimation
- 8 Recursive least squares estimation
- PART III COMPUTATIONAL TECHNIQUES
- PART IV STATISTICAL ESTIMATION
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
8 - Recursive least squares estimation
from PART II - DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
Published online by Cambridge University Press: 18 December 2009
Summary
So far, in Chapters 5 through 7, it was assumed that the number m of observations is fixed and known in advance. This treatment has come to be known as the fixed sample or off-line version of the least squares problem. In this chapter, we introduce the rudiments of the dual problem wherein the observations are not known in advance and arrive sequentially in time. The challenge is to keep updating the optimal estimate as new observations arrive. A naive way would be to repeatedly solve a sequence of least squares problems after the arrival of every new observation using the methods described in Chapters 5 through 7. A little reflection will, however, reveal that this is inefficient and computationally very expensive. The real question is: knowing the optimal estimate x(m) based on the m samples, can we compute x(m + 1), the optimal estimate for (m + 1) samples, recursively by computing an increment or correction to x(m) that reflects the information contained in the (m + 1)th observation? The answer is indeed “yes”, and leads to the sequential or recursive method for least squares estimation which is the subject of this chapter.
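The update-by-correction idea described above can be sketched in code. The following is a minimal illustration (not the book's derivation or notation): each new scalar observation z = hᵀx is folded into the running estimate via a gain vector, with the matrix P playing the role of (HᵀH)⁻¹ maintained by a rank-one update; the function name `rls_update` and the diffuse initialization of P are assumptions for the demo.

```python
import numpy as np

def rls_update(x, P, h, z):
    """One recursive least squares step: fold a new scalar observation
    z into the running estimate x without re-solving the full problem.
    x : current estimate, shape (n,)
    P : running approximation to (H^T H)^{-1}, shape (n, n)
    h : new observation row, shape (n,)
    z : new scalar observation
    """
    Ph = P @ h
    gain = Ph / (1.0 + h @ Ph)        # gain vector weighting the correction
    x_new = x + gain * (z - h @ x)    # correct estimate by the innovation
    P_new = P - np.outer(gain, Ph)    # rank-one update (Sherman-Morrison)
    return x_new, P_new

# Demo: recover a 2-parameter state from observations arriving one at a time.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
x_est = np.zeros(2)
P = 1e6 * np.eye(2)                   # large P: diffuse initial information
for _ in range(200):
    h = rng.normal(size=2)            # observation operator row
    z = h @ x_true                    # noiseless sequential observation
    x_est, P = rls_update(x_est, P, h, z)
print(np.round(x_est, 4))
```

Each step costs O(n²), versus O(mn² + n³) for re-solving the normal equations from scratch at every arrival, which is the efficiency gain the chapter develops.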
Section 8.1 provides an introduction to deterministic recursive linear least squares estimation.
A recursive framework
Let x ∈ ℝⁿ denote the state of the system under observation, where n is fixed.
Dynamic Data Assimilation: A Least Squares Approach, pp. 141–146. Publisher: Cambridge University Press. Print publication year: 2006.