Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- PART III COMPUTATIONAL TECHNIQUES
- 9 Matrix methods
- 10 Optimization: steepest descent method
- 11 Conjugate direction/gradient methods
- 12 Newton and quasi-Newton methods
- PART IV STATISTICAL ESTIMATION
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
12 - Newton and quasi-Newton methods
from PART III - COMPUTATIONAL TECHNIQUES
Published online by Cambridge University Press: 18 December 2009
Summary
It was around 1660 that Newton discovered the method for solving nonlinear equations that bears his name. Shortly thereafter – around 1665 – he also developed the secant method for solving nonlinear equations. Since then these methods have become a part of the folklore in numerical analysis (see Exercises 12.1 and 12.2). In addition to solving nonlinear equations, these methods can also be applied to the problem of minimizing a nonlinear function. In this chapter we provide an overview of the classical Newton's method and many of its modern relatives, called quasi-Newton methods, for unconstrained minimization. The major advantage of Newton's method is its quadratic convergence (Exercise 12.3), but finding the next descent direction requires the solution of a linear system, which is often a bottleneck. Quasi-Newton methods are designed to preserve the good convergence properties of Newton's method while providing considerable relief from this computational bottleneck. Quasi-Newton methods are extensions of the secant method. Davidon was the first to revive modern interest in quasi-Newton methods in 1959, but his work remained unpublished until 1991. However, Fletcher and Powell in 1963 published Davidon's ideas and helped to revive this line of approach to designing efficient minimization algorithms.
The philosophy and practice that underlie the design of quasi-Newton methods underscore the importance of the trade-off between the rate of convergence and the computational cost and storage.
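To illustrate the contrast described above, here is a minimal sketch (not taken from the book) of the two ideas on the standard Rosenbrock test function: a Newton iteration that solves the linear system H p = -g at every step, and a BFGS quasi-Newton iteration that instead maintains an approximate inverse Hessian through a rank-two secant update, using gradients only. The function, tolerances, and line-search constants are illustrative choices, not the book's.

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                      200.0 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0],                           200.0]])

def newton(x, tol=1e-10, max_iter=50):
    # Each iteration solves H p = -g: quadratically convergent near the
    # minimizer, but the linear solve is the computational bottleneck.
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)
        x = x + p
    return x, k

def bfgs(x, tol=1e-8, max_iter=200):
    # BFGS replaces the Hessian solve with a rank-two secant update of an
    # approximate inverse Hessian B; only gradient evaluations are needed.
    n = x.size
    B = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -B @ g
        # crude backtracking line search (Armijo sufficient decrease only)
        alpha, f0 = 1.0, rosenbrock(x)
        while rosenbrock(x + alpha * p) > f0 + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                 # skip update if curvature condition fails
            rho = 1.0 / sy
            I = np.eye(n)
            B = (I - rho * np.outer(s, y)) @ B @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x, k

x0 = np.array([-1.2, 1.0])
print(newton(x0))   # few iterations, but needs the Hessian and a linear solve
print(bfgs(x0))     # more iterations, but only gradients and matrix products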
- Type: Chapter
- Information: Dynamic Data Assimilation: A Least Squares Approach, pp. 209-224. Publisher: Cambridge University Press. Print publication year: 2006