Book contents
- Frontmatter
- Contents
- Preface
- 1 Numerical error
- 2 Direct solution of linear systems
- 3 Eigenvalues and eigenvectors
- 4 Iterative approaches for linear systems
- 5 Interpolation
- 6 Iterative methods and the roots of polynomials
- 7 Optimization
- 8 Data fitting
- 9 Integration
- 10 Ordinary differential equations
- 11 Introduction to stochastic ODEs
- 12 A big integrative example
- Appendix A Mathematical background
- Appendix B Sample codes
- Solutions
- References
- Index
7 - Optimization
Published online by Cambridge University Press: 05 June 2014
Summary
The goal of optimization is to find a vector solution x that minimizes some scalar function f(x). We will assume that the function is at least C² (i.e., continuous, with continuous first and second derivatives). The solution space may be subject to constraints, e.g., the positivity of some elements of x.
We have already seen one method, conjugate gradients (Section 4.1), which assumes a specific quadratic function f whose minimum solves the linear equation Ax = b. We are now interested in more complicated functions, and for many such functions the solution is notoriously difficult. For example, the function may possess any number of local minima. In the neighborhood of a local minimum, quasi-Newton-type methods will converge to that local minimum, which may be distant from the desired global minimum. None of the approaches described here can cope with that problem – the success of the methods relies on the smoothness of the function f(x) and the quality of the starting guess x(0).
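As a rough illustration of this sensitivity to the starting guess (the function and starting points below are made up for illustration, not taken from the text), a quasi-Newton method such as BFGS converges to whichever local minimum lies in the basin of x(0):

```python
import numpy as np
from scipy.optimize import minimize

# A smooth function with two local minima (illustrative choice):
# f(x) = x^4 - 3x^2 + x has its global minimum near x ≈ -1.30
# and a second, local minimum near x ≈ 1.13.
def f(x):
    return x[0]**4 - 3.0*x[0]**2 + x[0]

# BFGS (a quasi-Newton method) converges to whichever local minimum
# lies downhill from the starting guess x0.
for x0 in (-2.0, 2.0):
    res = minimize(f, np.array([x0]), method="BFGS")
    print(f"x0 = {x0:+.1f}  ->  x* = {res.x[0]:+.4f},  f(x*) = {res.fun:+.4f}")
```

Only the start at x0 = -2.0 reaches the global minimum; the other start settles in the shallower basin.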
We will examine separately the case of nonlinear functions f(x) without constraints (variable metric methods) and that of linear functions subject to linear constraints (linear programming). Methods for constrained nonlinear problems (nonlinear programming) build on these simpler methods.
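For orientation, a linear program minimizes a linear objective over a region bounded by linear inequalities. The small example below (made-up numbers, solved here with scipy's linprog rather than any method from the book) shows the standard shape of such a problem:

```python
from scipy.optimize import linprog

# Illustrative linear program (hypothetical data):
#   minimize   -x0 - 2*x1          (i.e., maximize x0 + 2*x1)
#   subject to  x0 +   x1 <= 4
#               x0 + 3*x1 <= 6
#               x0, x1 >= 0
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum at the corner (3, 1) of the feasible polygon
```

The optimum sits at a vertex of the feasible polygon, which is the geometric fact that linear programming algorithms exploit.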
The variable metric methods for n-dimensional problems rely on accurate and stable methods for simpler one-dimensional (1D) problems. The problem of finding a local minimum in 1D is broken down into two parts. Problem I is called bracketing.
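A minimal sketch of what bracketing looks like (a simplified illustration under my own naming, not the book's routine): starting from two points, the interval is expanded downhill until a triple (a, b, c) is found with f(b) below both f(a) and f(c), which guarantees that a local minimum lies inside (a, c).

```python
def bracket_minimum(f, a, b, grow=1.618, max_steps=50):
    """Expand downhill from (a, b) until f(a) > f(b) < f(c).

    Returns a triple (a, b, c) bracketing a local minimum of f.
    A simplified sketch; robust versions add parabolic extrapolation
    and safeguards against unbounded functions.
    """
    fa, fb = f(a), f(b)
    if fb > fa:                      # make sure we are walking downhill
        a, b, fa, fb = b, a, fb, fa
    c = b + grow * (b - a)           # first trial point beyond b
    fc = f(c)
    for _ in range(max_steps):
        if fb < fc:                  # f(a) > f(b) < f(c): minimum bracketed
            return (a, b, c)
        a, b, c = b, c, c + grow * (c - b)   # keep stepping downhill
        fa, fb, fc = fb, fc, f(c)
    raise RuntimeError("no bracket found; f may be unbounded below")

# Example: bracket the minimum of a shifted parabola starting near the origin.
print(bracket_minimum(lambda x: (x - 3.0)**2, 0.0, 0.5))
```

Once a bracket is in hand, the second part of the 1D problem is to shrink it down to the minimum itself.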
- Type: Chapter
- Information: Numerical Analysis for Engineers and Scientists, pp. 182-212
- Publisher: Cambridge University Press
- Print publication year: 2014