DOI: 10.1145/1062261.1062299
Article

Sparse matrix storage revisited

Published: 4 May 2005

ABSTRACT

In this paper, we consider alternative ways of storing a sparse matrix and their effect on computational speed. These schemes keep both the indices and the non-zero elements of the sparse matrix in a single data structure, which helps reduce the memory-system misses incurred by the usual indexing-based storage schemes and gives promising performance improvements.
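The abstract does not spell out the layout, so the sketch below is only an assumption in C: it contrasts the usual separate-array (compressed sparse row) storage with a combined scheme in which each nonzero's row and column indices sit next to its value in one array of structures. The type and field names (csr_matrix, coo_entry, coo_matrix) are illustrative and not taken from the paper.

    /* Conventional compressed sparse row (CSR): three separate arrays, so a
       traversal touches three distinct memory regions. */
    typedef struct {
        int     nrows;
        int     nnz;
        int    *row_ptr;   /* nrows + 1 entries */
        int    *col_idx;   /* nnz entries       */
        double *val;       /* nnz entries       */
    } csr_matrix;

    /* Combined storage: the indices and the value of each nonzero sit side
       by side, so fetching an element also brings its coordinates into the
       same cache line. */
    typedef struct {
        int    row;
        int    col;
        double val;
    } coo_entry;

    typedef struct {
        int        nrows;
        int        nnz;
        coo_entry *entries;   /* one contiguous array of nnz entries */
    } coo_matrix;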



    Reviews

    Timothy R. Hopkins

    Increasing effort is being invested in attempting to squeeze every ounce of performance from the currently available processors; much of this investment is going into minimizing the effect of very slow memory access times on overall computational speed. This requires that each piece of data brought into a cache should be used as much as possible (temporal locality), and that every piece of data brought into a cache should be utilized (spatial locality).

    This paper looks at the practical advantages of using two simple methods of increasing temporal and spatial locality in the implementation of a sparse matrix/dense vector multiply. Performance comparisons are made against other commonly used schemes using a variety of different sparsity patterns. The storage method showing the most advantage for a general unstructured sparse matrix consists of an array of structures, where each structure consists of a pair of integers giving the row and column indexes and a float providing the value of the nonzero element.

    The work reported is obviously very much a work in progress and is restricted to a very simple computational use (sparse matrices where there is no requirement to access the elements in any particular order or to select sets of elements, for example, all elements in a particular row). The paper would be a good, gentle introduction for a student wishing to start working in this area.
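    As a rough illustration of the kernel described above, the C sketch below computes y = A*x over such an array of structures. The coo_entry type, its field names, and the function signature are assumptions for illustration and need not match the authors' implementation.

        #include <stddef.h>

        /* Each nonzero carries its own coordinates (the layout sketched
           after the abstract above). */
        typedef struct {
            int    row;   /* row index of the nonzero    */
            int    col;   /* column index of the nonzero */
            double val;   /* value of the nonzero        */
        } coo_entry;

        /* y = A * x; y must be zero-initialised by the caller.  The entries
           may appear in any order, since this kernel never needs a whole
           row or column at once. */
        static void spmv_coo(const coo_entry *a, size_t nnz,
                             const double *x, double *y)
        {
            for (size_t k = 0; k < nnz; ++k)
                y[a[k].row] += a[k].val * x[a[k].col];
        }

    Because the indices are stored next to the value, each iteration reads one contiguous entry instead of gathering from separate index and value arrays, which is the spatial-locality benefit the review highlights.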

    • Published in

      CF '05: Proceedings of the 2nd Conference on Computing Frontiers
      May 2005
      467 pages
      ISBN: 1595930191
      DOI: 10.1145/1062261

      Copyright © 2005 ACM

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Acceptance Rates

      Overall acceptance rate: 216 of 614 submissions, 35%
