Book contents
- Frontmatter
- Contents
- Preface
- Nomenclature
- 1 Introduction
- 2 Direct methods
- 3 Iterative methods
- 4 Matrix splitting preconditioners [T1]: direct approximation of A_{n×n}
- 5 Approximate inverse preconditioners [T2]: direct approximation of A_{n×n}^{−1}
- 6 Multilevel methods and preconditioners [T3]: coarse grid approximation
- 7 Multilevel recursive Schur complements preconditioners [T4]
- 8 Sparse wavelet preconditioners [T5]: approximation of Ã_{n×n} and Ã_{n×n}^{−1}
- 9 Wavelet Schur preconditioners [T6]
- 10 Implicit wavelet preconditioners [T7]
- 11 Application I: acoustic scattering modelling
- 12 Application II: coupled matrix problems
- 13 Application III: image restoration and inverse problems
- 14 Application IV: voltage stability in electrical power systems
- 15 Parallel computing by examples
- Appendix A a brief guide to linear algebra
- Appendix B the Harwell–Boeing (HB) data format
- Appendix C a brief guide to MATLAB®
- Appendix D list of supplied M-files and programs
- Appendix E list of selected scientific resources on Internet
- References
- Author Index
- Subject Index
- Plate section
2 - Direct methods
Published online by Cambridge University Press: 06 January 2010
Summary
How much of the matrix must be zero for it to be considered sparse depends on the computation to be performed, the pattern of the nonzeros, and even the architecture of the computer. Generally, we say that a matrix is sparse if there is an advantage in exploiting its zeros.
Iain Duff et al., Direct Methods for Sparse Matrices, Clarendon Press (1986)

To be fair, the traditional classification of solution methods as being either direct or iterative is an oversimplification and is not a satisfactory description of the present state of affairs.

Michele Benzi, Journal of Computational Physics, Vol. 182 (2002)

A direct method for the linear system Ax = b is any method that seeks the solution x in a finite number of steps by reducing the general matrix A to some special, easily solvable form (1.3), e.g. a diagonal or triangular form. In the absence of computer roundoff, x would be the exact answer x*; however, unless symbolic computing is used, roundoff is present, and hence the conditioning of A affects the quality of x. Often "direct method" is synonymous with the Gaussian elimination method, which essentially reduces A to triangular form or, equivalently, decomposes the matrix A into a product of triangular matrices. However, one may also choose closely related variants such as the Gauss–Jordan, Gauss–Huard or Purcell methods, especially when parallel methods are sought; refer to [143].
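The reduction to triangular form described above can be sketched in a few lines. The following is a minimal pure-Python illustration (not the book's MATLAB code) of Gaussian elimination with partial pivoting followed by back substitution; a practical sparse solver would, as the Duff quotation notes, also exploit the zero pattern of A.

```python
def solve_gauss(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    A is a list of row lists, b a list; both are left unmodified.
    """
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    x = b[:]
    # Forward elimination: reduce A to upper triangular form.
    for k in range(n - 1):
        # Partial pivoting: swap the largest pivot into row k for stability.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]   # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # Back substitution on the resulting triangular system.
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (x[i] - s) / A[i][i]
    return x

A = [[ 2.0,  1.0, 1.0],
     [ 4.0, -6.0, 0.0],
     [-2.0,  7.0, 2.0]]
b = [5.0, -2.0, 9.0]
print(solve_gauss(A, b))  # solution of Ax = b, here (1, 1, 2) up to roundoff
```

The multipliers m computed during elimination are exactly the entries of the unit lower triangular factor L, which is why this process is equivalent to decomposing (a row-permuted) A into a product of triangular matrices.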
- Type: Chapter
- Information: Matrix Preconditioning Techniques and Applications, pp. 66–109. Publisher: Cambridge University Press. Print publication year: 2005.