Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- PART III COMPUTATIONAL TECHNIQUES
- PART IV STATISTICAL ESTIMATION
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- 22 Dynamic data assimilation: the straight line problem
- 23 First-order adjoint method: linear dynamics
- 24 First-order adjoint method: nonlinear dynamics
- 25 Second-order adjoint method
- 26 The 4DVAR problem: a statistical and a recursive view
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
25 - Second-order adjoint method
from PART VI - DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
Published online by Cambridge University Press: 18 December 2009
Summary
In the variational approach, the dynamic data assimilation problem is recast as the minimization of a least squares performance criterion subject to the dynamic constraints. The first-order adjoint methods described in Chapters 22–24 enable us to compute the gradient of this objective function. Since the convergence of gradient-only algorithms can be slow, especially in the nonlinear problems of interest in geophysical applications, the gradient obtained from the first-order adjoint method is often used in conjunction with quasi-Newton methods (Chapter 12) to obtain faster convergence. The strength of the quasi-Newton methods lies in their ability to build an approximation to the Hessian of the objective function, which in turn is used in a Newton-like algorithm. It is well known that minimization algorithms that exploit Hessian information perform better. Thus it behooves us to ponder the following question: in addition to the gradient, can we directly compute Hessian-related information, namely the Hessian-vector product? If this information can be obtained, we can use it in conjunction with the conjugate gradient algorithm to obtain faster convergence. A framework for using the Hessian-vector product within the conjugate gradient algorithm is described in Section 12.3.
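To make the idea above concrete, the following is a minimal, matrix-free sketch of a conjugate gradient solve for the Newton step H p = -g that uses only a Hessian-vector product routine hvp(v), never forming the Hessian explicitly. It is a generic illustration of the approach referenced above, not the book's Section 12.3 algorithm verbatim; the function names and interface are hypothetical.

```python
import numpy as np

def newton_step_cg(grad, hvp, tol=1e-8, max_iter=100):
    """Solve H p = -grad by conjugate gradient, given hvp(v) = H @ v."""
    p = np.zeros_like(grad)
    r = -grad - hvp(p)          # residual of the system H p = -grad
    d = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Hd = hvp(d)             # the only access to the Hessian
        alpha = rs_old / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs_old) * d
        rs_old = rs_new
    return p
```

The key design point is that the minimizer never needs the full Hessian matrix: each conjugate gradient iteration consumes one Hessian-vector product, which is exactly the quantity the second-order adjoint method delivers.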
In this chapter we derive the so-called second-order adjoint method for computing the gradient and the Hessian-vector product simultaneously. The derivation for the scalar case is given in Section 25.1, and its extension to the vector case is given in Section 25.2. Section 25.3 describes an application of the second-order adjoint method to computing the sensitivity of a response function; the corresponding first-order adjoint sensitivity computations are given in Section 24.5.
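As a rough illustration of the two adjoint sweeps involved, the sketch below treats linear dynamics x_{k+1} = M x_k with the quadratic observation cost J(x0) = (1/2) Σ_k (H x_k - z_k)ᵀ R⁻¹ (H x_k - z_k), where the control variable is the initial condition x0. The names M, H, Rinv, zs are illustrative placeholders, not the book's notation, and this is an assumption-laden outline rather than the chapter's derivation.

```python
import numpy as np

def forward(M, x0, N):
    """Integrate the model and return the trajectory [x_0, ..., x_N]."""
    xs = [x0]
    for _ in range(N):
        xs.append(M @ xs[-1])
    return xs

def gradient_first_order_adjoint(M, H, Rinv, xs, zs):
    """Backward (first-order) adjoint sweep; returns dJ/dx0."""
    lam = np.zeros_like(xs[0])
    for k in reversed(range(len(xs))):
        lam = M.T @ lam + H.T @ Rinv @ (H @ xs[k] - zs[k])
    return lam

def hess_vec_second_order_adjoint(M, H, Rinv, v, N):
    """Forward tangent-linear sweep, then a backward second-order adjoint
    sweep; returns (d^2 J / dx0^2) @ v. For this linear/quadratic problem
    the tangent-linear model coincides with M itself."""
    dxs = forward(M, v, N)
    mu = np.zeros_like(v)
    for k in reversed(range(N + 1)):
        mu = M.T @ mu + H.T @ Rinv @ (H @ dxs[k])
    return mu
```

In the nonlinear setting treated in the chapter, the forward sweep would use the full model, the tangent-linear and adjoint operators would be linearized about the trajectory, and the second-order sweep would pick up additional terms involving second derivatives of the model; the structure of one forward and one backward pass per Hessian-vector product is the same.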
- Type: Chapter
- Information: Dynamic Data Assimilation: A Least Squares Approach, pp. 422–444
- Publisher: Cambridge University Press
- Print publication year: 2006