Book contents
- Frontmatter
- Contents
- Preface
- 1 Numerical error
- 2 Direct solution of linear systems
- 3 Eigenvalues and eigenvectors
- 4 Iterative approaches for linear systems
- 5 Interpolation
- 6 Iterative methods and the roots of polynomials
- 7 Optimization
- 8 Data fitting
- 9 Integration
- 10 Ordinary differential equations
- 11 Introduction to stochastic ODEs
- 12 A big integrative example
- Appendix A Mathematical background
- Appendix B Sample codes
- Solutions
- References
- Index
8 - Data fitting
Published online by Cambridge University Press: 05 June 2014
Summary
Data fitting can be viewed as a generalization of polynomial interpolation to the case where we have more data than is needed to construct a polynomial of specified degree.
C. F. Gauss claimed to have been the first to develop solutions to the least squares problem, and both Gaussian elimination and the Gauss-Seidel iterative method were developed to solve these problems [52, 79]. In fact, interest in least squares by Galileo predates Gauss by over 200 years; a comprehensive history and analysis is given by Harter [97]. In addition to Gauss's contributions, the Jacobi iterative method [118] and the Cholesky decomposition method [13] were developed to solve least squares problems. Clearly, the least squares problem was (and continues to be) a problem of considerable importance. All of these methods were applied to the normal equations, which recast an overdetermined system as a square system with a symmetric positive definite coefficient matrix. Despite the astounding historical importance of the normal equations, the argument will be made that you should never use them. Extensions of least squares to nonlinear problems, and to linear problems with normal error, are also described.
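As a rough illustration of why the normal equations are discouraged, the sketch below fits an overdetermined Vandermonde system both by forming and solving the normal equations and by an orthogonal-factorization solver. It is written in Python with NumPy as an assumption (the book's own sample codes are not reproduced here), and the cubic model, noise level, and variable names are purely hypothetical. The printed condition numbers show that forming the product of the design matrix with its transpose squares the conditioning of the problem.

```python
import numpy as np

# Hypothetical example: fit a cubic to 20 noisy samples (more data than
# unknowns, so the Vandermonde system is overdetermined).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
y = 1.0 - 2.0 * t + 0.5 * t**3 + 0.01 * rng.standard_normal(t.size)

A = np.vander(t, 4)            # 20 x 4 design matrix

# Normal equations: square and symmetric positive definite, but the
# condition number of A^T A is the square of that of A, which amplifies
# rounding error in the solve.
x_normal = np.linalg.solve(A.T @ A, A.T @ y)

# Orthogonal-factorization route (lstsq uses a QR/SVD-based solver) works
# with A directly and avoids squaring the condition number.
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

print("cond(A)     =", np.linalg.cond(A))
print("cond(A^T A) =", np.linalg.cond(A.T @ A))
print("normal eqns :", x_normal)
print("lstsq       :", x_lstsq)
```

For this well-scaled toy problem the two answers agree closely, but the gap in the condition numbers indicates how much more accuracy the normal equations can lose on ill-conditioned data.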
Least squares refers to a best fit in the L2 norm, which is by far the most commonly used norm. However, other norms are important for certain applications. Covariance weighting leads to minimization in the Mahalanobis norm. L1 is commonly used in financial modeling, and L∞ may be most suitable when the underlying error distribution is uniform rather than normal.
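The effect of the norm choice can be sketched numerically. The fragment below (again Python with NumPy/SciPy as assumptions; the straight-line model, uniform noise, and solver choice are illustrative, not from the book) computes the ordinary L2 fit in closed form and then minimizes the L1 and L∞ residual norms with a derivative-free optimizer, since those objectives are not smooth.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical straight-line fit y ~ c0 + c1*t under three different norms.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 30)
y = 2.0 + 3.0 * t + rng.uniform(-0.2, 0.2, t.size)   # uniform noise

def residual(c):
    return y - (c[0] + c[1] * t)

# L2 (least squares): the usual choice, solvable directly.
c_l2, *_ = np.linalg.lstsq(np.column_stack([np.ones_like(t), t]), y, rcond=None)

# L1 and L-infinity: non-smooth objectives, so use a derivative-free solver
# started from the L2 solution.
c_l1  = minimize(lambda c: np.sum(np.abs(residual(c))), c_l2, method="Nelder-Mead").x
c_inf = minimize(lambda c: np.max(np.abs(residual(c))), c_l2, method="Nelder-Mead").x

print("L2   fit:", c_l2)
print("L1   fit:", c_l1)
print("Linf fit:", c_inf)
```

With uniform noise, the L∞ fit tends to balance the largest residuals, while the L1 fit is less sensitive to any isolated outliers; the L2 fit sits between the two.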
- Type: Chapter
- Book: Numerical Analysis for Engineers and Scientists, pp. 213–242
- Publisher: Cambridge University Press
- Print publication year: 2014