Published online by Cambridge University Press: 03 July 2015
This paper presents a parallel algorithm for finding the smallest eigenvalue of a family of ill-conditioned Hankel matrices. Such matrices arise in random matrix theory and require the use of extremely high-precision arithmetic. Surprisingly, we find that a group of commonly used approaches that are designed for high efficiency are actually less efficient than a direct approach for this class of matrices. We then develop a parallel implementation of the algorithm that takes into account the unusually high cost of individual arithmetic operations. Our approach combines message passing and shared memory, achieving near-perfect scalability and high tolerance for network latency. We are thus able to find solutions for much larger matrices than previously possible, with the potential for extending this work to systems with greater levels of parallelism. The contributions of this work are in three areas: the determination that a direct algorithm based on the secant method is more effective, when extreme fixed-point precision is required, than the algorithms more typically used in parallel floating-point computations; the particular mix of optimizations required for extreme-precision operations on large matrices on a modern multi-core cluster; and the numerical results themselves.
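To make the abstract's central claim concrete, the following is a minimal illustrative sketch, not the paper's implementation: it applies secant iteration to the characteristic determinant det(H - λI) of an ill-conditioned Hankel matrix, using Python's mpmath for the extreme-precision arithmetic. The Hilbert matrix used as the test case, the precision setting, the tolerances, and all function names are assumptions made for illustration; the paper's matrices come from random matrix theory and its code runs in parallel on a multi-core cluster.

```python
# Illustrative sketch only -- not the paper's implementation.
# Secant iteration on the characteristic determinant det(H - lam*I) of an
# ill-conditioned Hankel matrix, carried out in arbitrary-precision arithmetic
# with mpmath. The Hilbert matrix, the precision setting, the tolerances and
# all names below are illustrative assumptions.
from mpmath import mp, matrix, det, mpf, nstr

mp.dps = 100  # working precision in decimal digits (illustrative choice)

def hilbert_hankel(n):
    """Hilbert matrix H[i][j] = 1/(i+j+1): a classic ill-conditioned Hankel matrix."""
    return matrix([[mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])

def char_det(H, lam):
    """Evaluate det(H - lam*I) at full working precision."""
    A = H.copy()
    for i in range(H.rows):
        A[i, i] -= lam
    return det(A)

def smallest_eigenvalue(H, x0, x1, rel_tol=mpf('1e-40'), max_iter=200):
    """Secant iteration on lam -> det(H - lam*I), started from two guesses near 0.

    For these matrices the smallest eigenvalue is tiny and positive, so starting
    just above zero converges to the first root of the characteristic polynomial.
    """
    f0, f1 = char_det(H, x0), char_det(H, x1)
    for _ in range(max_iter):
        if f1 == f0:  # stalled: working precision exhausted
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= rel_tol * abs(x2):
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, char_det(H, x2)
    return x1

if __name__ == "__main__":
    H = hilbert_hankel(20)
    lam_min = smallest_eigenvalue(H, mpf(0), mpf('1e-30'))
    print(nstr(lam_min, 20))
```

In this sketch every determinant evaluation costs a full high-precision factorization, which mirrors the abstract's point that individual arithmetic operations are unusually expensive at extreme precision; the paper's contribution is in parallelizing exactly this kind of work across message passing and shared memory.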