Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- PART III COMPUTATIONAL TECHNIQUES
- 9 Matrix methods
- 10 Optimization: steepest descent method
- 11 Conjugate direction/gradient methods
- 12 Newton and quasi-Newton methods
- PART IV STATISTICAL ESTIMATION
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
9 - Matrix methods
from PART III - COMPUTATIONAL TECHNIQUES
Published online by Cambridge University Press: 18 December 2009
Summary
Recall from Chapters 5 and 6 that the optimal linear estimate $x$ is given by the solution of the normal equation
$$(H^{T}H)\,x = H^{T}z \quad \text{when } m > n$$
and
$$x = H^{T}(HH^{T})^{-1}z \quad \text{when } m < n,$$
where $H \in \mathbb{R}^{m \times n}$ and is of full rank. In either case the Grammian ($H^{T}H \in \mathbb{R}^{n \times n}$ in the first case, $HH^{T} \in \mathbb{R}^{m \times m}$ in the second) is a symmetric and positive definite matrix. In the opening Section 9.1, we describe the classical Cholesky decomposition algorithm for solving linear systems with symmetric and positive definite matrices. This algorithm is essentially an adaptation of the method of LU decomposition for general matrices. Solving the normal equations using the Cholesky decomposition is computationally very efficient, but it may exhibit instability resulting from finite precision arithmetic. To alleviate this problem, during the 1960s a new class of methods based directly on an orthogonal decomposition of the (rectangular) measurement matrix $H$ was developed. In this chapter we describe two such methods: the first, based on the QR-decomposition, is given in Section 9.2, and the second, based on the singular value decomposition (SVD), is given in Section 9.3. Section 9.4 compares the amount of work, measured in the number of floating point operations (FLOPs), required to solve the linear least squares problem by these methods.
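To make the comparison concrete, the following sketch (not from the text; a minimal NumPy illustration using a randomly generated full-rank $H$ and observation vector $z$ as stand-ins) solves the same overdetermined least squares problem three ways: by the normal equations with a Cholesky factor, by QR, and by SVD.

```python
import numpy as np

# Hypothetical overdetermined example: m = 6 observations, n = 3 unknowns.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 3))   # measurement matrix (full column rank almost surely)
z = rng.standard_normal(6)        # observation vector

# (a) Normal equations via Cholesky: form the Grammian G = H^T H (symmetric
#     positive definite), factor G = L L^T, then solve two triangular systems.
G = H.T @ H
L = np.linalg.cholesky(G)              # lower triangular factor L
y = np.linalg.solve(L, H.T @ z)        # L y = H^T z
x_chol = np.linalg.solve(L.T, y)       # L^T x = y

# (b) QR decomposition: H = Q R with Q^T Q = I, then solve R x = Q^T z.
Q, R = np.linalg.qr(H)
x_qr = np.linalg.solve(R, Q.T @ z)

# (c) Singular value decomposition: H = U S V^T, so x = V S^{-1} U^T z.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
x_svd = Vt.T @ ((U.T @ z) / s)

# The three estimates agree up to rounding error.
print(np.allclose(x_chol, x_qr), np.allclose(x_chol, x_svd))
```

The three routes differ in cost and in sensitivity to rounding, which is precisely the trade-off the FLOP comparison in Section 9.4 quantifies.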
Cholesky decomposition
We begin by describing the classical LU-decomposition.
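Before the formal development, a minimal sketch of the factorization this section builds toward may help fix ideas. The Python function below is illustrative only (not the book's pseudocode): it computes the lower triangular factor $L$ with $A = LL^{T}$ column by column, assuming $A$ is symmetric and positive definite, with no pivoting or breakdown checks.

```python
import numpy as np

def cholesky_factor(A):
    """Column-by-column Cholesky factorization: return lower triangular L
    with A = L L^T, assuming A is symmetric and positive definite."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        # Diagonal entry: l_jj = sqrt(a_jj - sum_{k<j} l_jk^2)
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        # Subdiagonal entries of column j
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Quick check on a Grammian-style matrix H^T H.
H = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
G = H.T @ H
L = cholesky_factor(G)
print(np.allclose(L @ L.T, G))   # True
```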
- Type: Chapter
- Information: Dynamic Data Assimilation: A Least Squares Approach, pp. 149-168
- Publisher: Cambridge University Press
- Print publication year: 2006