DOI: 10.1145/1250662.1250713

Analysis of redundancy and application balance in the SPEC CPU2006 benchmark suite

Published: 09 June 2007

ABSTRACT

The recently released SPEC CPU2006 benchmark suite is expected to be used by computer designers and computer architecture researchers for pre-silicon early design analysis. Researchers are likely to use only part of the suite, due to simulation time constraints, compiler difficulties, or library and system call issues; however, an arbitrarily chosen subset can lead to misleading results. This paper analyzes the SPEC CPU2006 benchmarks using performance-counter-based experimentation on several state-of-the-art systems, and applies statistical techniques such as principal component analysis and clustering to draw inferences about the similarity of the benchmarks and the redundancy in the suite, and to arrive at meaningful subsets.

The SPEC CPU2006 benchmark suite contains several programs from areas such as artificial intelligence, but includes none from the electronic design automation (EDA) application area, raising a concern about the application balance of the suite. An analysis from the perspective of fundamental program characteristics shows that the included programs cover a characteristics space broader than that of the EDA programs. A subset of 6 integer programs and 8 floating-point programs can yield most of the information in the entire suite.
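The subsetting methodology described above, projecting measured program characteristics onto principal components and clustering benchmarks in the reduced space to pick representatives, can be sketched as follows. This is a minimal illustration rather than the authors' code: the specific characteristics, the use of scikit-learn, k-means as the clustering algorithm, and the placeholder data are all assumptions made for this sketch.

    # Minimal sketch of PCA + clustering based benchmark subsetting.
    # Placeholder data and k-means are assumptions; the paper's exact
    # characteristics and clustering method are not given in the abstract.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # One row per benchmark, one column per measured characteristic
    # (e.g., IPC, branch misprediction rate, cache miss rates).
    benchmarks = ["400.perlbench", "401.bzip2", "403.gcc", "429.mcf"]
    rng = np.random.default_rng(0)
    X = rng.random((len(benchmarks), 10))  # stand-in for measured data

    # Normalize so no single characteristic dominates, then keep the
    # principal components that explain ~90% of the variance.
    X_std = StandardScaler().fit_transform(X)
    X_pca = PCA(n_components=0.9).fit_transform(X_std)

    # Cluster benchmarks in PCA space; k is the desired subset size.
    k = 2
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_pca)

    # Choose the benchmark closest to each cluster centroid as the
    # cluster's representative; the chosen set is the subset.
    subset = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X_pca[members] - km.cluster_centers_[c], axis=1)
        subset.append(benchmarks[members[np.argmin(dists)]])
    print("representative subset:", subset)

Running this on real performance-counter data would replace the random matrix with measured characteristics for all CPU2006 programs and set k to the desired subset size (e.g., 6 for the integer programs).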


Published in

ISCA '07: Proceedings of the 34th Annual International Symposium on Computer Architecture, June 2007, 542 pages. ISBN: 9781595937063. DOI: 10.1145/1250662. General Chair: Dean Tullsen. Program Chair: Brad Calder.

ACM SIGARCH Computer Architecture News, Volume 35, Issue 2, May 2007, 527 pages. ISSN: 0163-5964. DOI: 10.1145/1273440.
