ABSTRACT
Uncertainty quantification has become a key application in risk analysis. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic through the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turn to stochastic estimation of the diagonal, which allows us to cast the problem as a linear system with a relatively small number of right-hand sides. Second, for this linear system we develop a novel mixed precision iterative refinement scheme that uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but also offers excellent opportunities for scaling in massively parallel environments. We base our implementation on BLAS 3 kernels that ensure very high processor performance, achieving a peak of 730 TFlops on 72 BG/P racks and sustaining 73% of the theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications.
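The stochastic diagonal estimation mentioned above can be illustrated with a minimal NumPy sketch. It follows the general form of a Hutchinson-style estimator for diag(A^{-1}): probe the matrix with random +/-1 vectors v_j, solve A x_j = v_j, and divide the accumulated elementwise products. The dense `np.linalg.solve` call below is only a stand-in for the paper's iterative solver with mixed precision refinement; the matrix construction, sample count, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_inv_diagonal(A, num_samples=200, seed=None):
    """Estimate diag(A^{-1}) stochastically:
    diag(A^{-1}) ~= (sum_j v_j * x_j) / (sum_j v_j * v_j),
    where A x_j = v_j and v_j has independent +/-1 (Rademacher) entries.
    Each sample is one linear solve, so the cost is a modest number of
    right-hand sides rather than a full matrix inversion."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)  # random probe vector
        x = np.linalg.solve(A, v)            # stand-in for an iterative solve
        num += v * x                         # elementwise products accumulate
        den += v * v
    return num / den

# Small SPD test problem (a synthetic covariance-like matrix).
rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # well-conditioned SPD matrix

est = estimate_inv_diagonal(A, num_samples=2000, seed=1)
exact = np.diag(np.linalg.inv(A))    # reference answer for validation only
```

The estimator converges slowly in the number of samples (Monte Carlo rate), which is why the abstract emphasizes solving the resulting block of right-hand sides efficiently: the per-sample solve, not the sampling, dominates the cost.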
Index Terms
- Low cost high performance uncertainty quantification