Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Direct Solution Methods
- 2 Theory of Matrix Eigenvalues
- 3 Positive Definite Matrices, Schur Complements, and Generalized Eigenvalue Problems
- 4 Reducible and Irreducible Matrices and the Perron-Frobenius Theory for Nonnegative Matrices
- 5 Basic Iterative Methods and Their Rates of Convergence
- 6 M-Matrices, Convergent Splittings, and the SOR Method
- 7 Incomplete Factorization Preconditioning Methods
- 8 Approximate Matrix Inverses and Corresponding Preconditioning Methods
- 9 Block Diagonal and Schur Complement Preconditionings
- 10 Estimates of Eigenvalues and Condition Numbers for Preconditioned Matrices
- 11 Conjugate Gradient and Lanczos-Type Methods
- 12 Generalized Conjugate Gradient Methods
- 13 The Rate of Convergence of the Conjugate Gradient Method
- Appendices
- Index
2 - Theory of Matrix Eigenvalues
Published online by Cambridge University Press: 05 August 2012
Summary
Let us first consider an n × n matrix A as defining a linear mapping in ℂn or ℝn, with respect to a fixed coordinate system. A number λ ∈ ℂ for which Ax = λx with x ≠ 0 is said to be an eigenvalue of A, and x is said to be an eigenvector corresponding to λ; hence x is a vector that is mapped by A onto its own direction. We show that there is at least one such vector for every square matrix. First, some fundamental concepts and properties in the theory of eigenvalues are presented. We prove that the eigenvalues of A are the zeros of φ(λ) = det(A – λI), a polynomial in λ called the characteristic polynomial of A. We prove that φ(A) = 0, and we consider the polynomial m(λ), the polynomial of minimal degree for which m(A) = 0.
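These relationships can be checked numerically. The following sketch, using a hypothetical 3 × 3 example matrix not taken from the text, verifies that the eigenvalues of A coincide with the roots of the characteristic polynomial φ(λ) = det(A − λI), and that φ(A) = 0 (the Cayley–Hamilton theorem).

```python
import numpy as np

# Hypothetical example matrix (not from the text).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.poly returns the coefficients (descending powers) of the monic
# characteristic polynomial det(lambda*I - A), which has the same
# roots as phi(lambda) = det(A - lambda*I).
coeffs = np.poly(A)

# The roots of the characteristic polynomial are the eigenvalues of A.
eigvals = np.sort(np.linalg.eigvals(A).real)
roots = np.sort(np.roots(coeffs).real)
assert np.allclose(eigvals, roots)

# Cayley-Hamilton: evaluating the polynomial at the matrix A gives 0.
phi_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
            for k, c in enumerate(coeffs))
assert np.allclose(phi_A, np.zeros_like(A))
```

Note that for matrices with repeated eigenvalues, the minimal polynomial m(λ) may have lower degree than φ(λ); the identity matrix, for instance, has φ(λ) = (1 − λ)ⁿ but m(λ) = λ − 1.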
Selfadjoint and unitary matrices play an important role in applications, and we derive properties of the eigensolutions of such matrices. If the matrix B of order n defines the same mapping as the matrix A, but with respect to another basis in ℂn or ℝn, we can write B as B = C−1AC, where C is a nonsingular matrix. We prove that B and A have the same eigenvalues (i.e., the eigenvalues are independent of the particular basis) and consider matrices A for which there exists a matrix C such that B is a triangular or even a diagonal matrix.
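The invariance of the spectrum under a change of basis can also be illustrated numerically. The sketch below, using hypothetical random example matrices, checks that B = C⁻¹AC has the same eigenvalues as A, and that for a real symmetric (selfadjoint) matrix the basis change C can be chosen orthogonal so that B is diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a random 4x4 matrix A and a nonsingular C.
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))
assert abs(np.linalg.det(C)) > 1e-8  # C must be nonsingular

# B represents the same mapping as A, with respect to the basis
# given by the columns of C.
B = np.linalg.inv(C) @ A @ C

# A and B share the same spectrum (compare sorted eigenvalues).
ev_A = np.sort_complex(np.linalg.eigvals(A))
ev_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(ev_A, ev_B)

# For a real symmetric (selfadjoint) matrix, C can be chosen
# orthogonal, so that C^T A C is diagonal.
S = A + A.T
w, Q = np.linalg.eigh(S)   # Q orthogonal, w the real eigenvalues
D = Q.T @ S @ Q
assert np.allclose(D, np.diag(w))
```

The orthogonal diagonalization in the last step is the spectral theorem for selfadjoint matrices; general matrices can only be guaranteed a triangular form (the Schur decomposition).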
- Type: Chapter
- Information: Iterative Solution Methods, pp. 46–83
- Publisher: Cambridge University Press
- Print publication year: 1994