Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Direct Solution Methods
- 2 Theory of Matrix Eigenvalues
- 3 Positive Definite Matrices, Schur Complements, and Generalized Eigenvalue Problems
- 4 Reducible and Irreducible Matrices and the Perron-Frobenius Theory for Nonnegative Matrices
- 5 Basic Iterative Methods and Their Rates of Convergence
- 6 M-Matrices, Convergent Splittings, and the SOR Method
- 7 Incomplete Factorization Preconditioning Methods
- 8 Approximate Matrix Inverses and Corresponding Preconditioning Methods
- 9 Block Diagonal and Schur Complement Preconditionings
- 10 Estimates of Eigenvalues and Condition Numbers for Preconditioned Matrices
- 11 Conjugate Gradient and Lanczos-Type Methods
- 12 Generalized Conjugate Gradient Methods
- 13 The Rate of Convergence of the Conjugate Gradient Method
- Appendices
- Index
13 - The Rate of Convergence of the Conjugate Gradient Method
Published online by Cambridge University Press: 05 August 2012
Summary
In the two previous chapters, it was shown that conjugate gradient-type methods have many advantages, one being simplicity of implementation: no method parameters need be estimated, and the method consists mainly of matrix-vector multiplications and vector operations. In the present chapter, we shall analyze the rate of convergence of these methods, not only for the case of a symmetric, positive definite matrix but also for general matrices. First, it will be shown that if the matrix is normal, then the norm of the residual can be estimated by the error in a best polynomial approximation problem. It will then be shown that the rate of convergence depends on the distribution of the eigenvalues and, to some extent, also on the initial residual. If the matrix is not normal, the estimate also involves the condition number of the matrix that transforms the given matrix to Jordan canonical form. In general, we are unable to give exact estimates of the rate of convergence, but we shall derive various upper bounds using different methods of estimation.
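The dependence on the eigenvalue distribution can be observed numerically. The following sketch (not from the book; the helper `conjugate_gradient` and the test spectra are illustrative choices) runs plain conjugate gradients on two symmetric positive definite matrices with the same condition number but different spectra: one with uniformly spread eigenvalues, one with a tight cluster plus a few outliers.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=600):
    """Plain CG for SPD A; returns the iteration count needed to reach
    a relative residual norm below tol (or max_iter if not reached)."""
    x = np.zeros_like(b)
    r = b.copy()              # residual for x = 0
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) / b_norm < tol:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter

rng = np.random.default_rng(0)
n = 200
# Two diagonal SPD matrices with the same condition number (about 1e4)
# but very different eigenvalue distributions.
uniform = np.linspace(1.0, 1e4, n)                  # spread uniformly
clustered = np.concatenate([
    1.0 + rng.uniform(0.0, 1e-3, n - 5),            # tight cluster near 1
    np.linspace(9e3, 1e4, 5),                       # five large outliers
])
b = rng.standard_normal(n)
it_uniform = conjugate_gradient(np.diag(uniform), b)
it_clustered = conjugate_gradient(np.diag(clustered), b)
print(it_uniform, it_clustered)
```

With a clustered spectrum, CG behaves almost as if the cluster were a single eigenvalue and typically converges in far fewer iterations than for the uniformly spread spectrum, even though both matrices have the same condition number.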
The rate of convergence can be measured in various norms. When we use a relative measure, say, the ratio of the norms of the current and initial residuals, it will be shown that the condition number of the matrix can frequently be used to give a sufficiently accurate estimate.
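A condition-number-based estimate of this kind is the classical CG bound $\|x_k - x\|_A \le 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k \|x_0 - x\|_A$, where $\kappa$ is the spectral condition number. The sketch below (an illustrative check, not the book's code) verifies numerically that the bound holds at every step for a diagonal SPD test matrix with known $\kappa$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, kappa = 100, 100.0
eigs = np.linspace(1.0, kappa, n)
A = np.diag(eigs)                 # SPD test matrix with condition number kappa
b = rng.standard_normal(n)
x_exact = b / eigs                # A is diagonal, so the exact solution is direct

def a_norm(v):
    """Energy norm ||v||_A = sqrt(v^T A v)."""
    return np.sqrt(v @ (A @ v))

# Plain CG iteration, tracking the A-norm of the error against the bound.
x = np.zeros(n)
r = b.copy()
p = r.copy()
rs = r @ r
e0 = a_norm(x_exact - x)
rho = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
ok = True
for k in range(1, 40):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    p = r + (rs_new / rs) * p
    rs = rs_new
    # 1% slack to absorb floating-point rounding in the norms
    if a_norm(x_exact - x) > 2.0 * rho**k * e0 * 1.01:
        ok = False
print(ok)  # True if the condition-number bound holds at every step
```

The bound is often pessimistic (the actual error can decrease much faster, as the clustered-spectrum example suggests), but it shows how the condition number alone yields a usable a priori estimate of the iteration count.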
Iterative Solution Methods, pp. 558-594. Publisher: Cambridge University Press. Print publication year: 1994.