Abstract
A supercomputer evokes images of “big iron” and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.
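The alternative metrics the article advocates, such as performance per watt, per square foot, and per dollar, can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate how a machine that loses on raw speed can win on every efficiency axis; they are not measurements of any real system.

```python
# Sketch: ranking systems by efficiency metrics rather than raw speed.
# All figures are hypothetical, for illustration only.

def efficiency_metrics(gflops, watts, sq_ft, cost_usd):
    """Return the alternative metrics as a dict."""
    return {
        "gflops_per_watt": gflops / watts,
        "gflops_per_sq_ft": gflops / sq_ft,
        "gflops_per_dollar": gflops / cost_usd,
    }

# Two hypothetical machines: a raw-speed leader vs. a low-power cluster.
big_iron = efficiency_metrics(gflops=10_000, watts=2_000_000,
                              sq_ft=10_000, cost_usd=100_000_000)
low_power = efficiency_metrics(gflops=100, watts=5_000,
                               sq_ft=50, cost_usd=300_000)

# The machine that is 100x slower still wins on performance per watt.
print(big_iron["gflops_per_watt"])   # 0.005
print(low_power["gflops_per_watt"])  # 0.02
```

Note that price here would count acquisition cost only; a fuller treatment would fold in the cost of operation (power, cooling, floor space) over the machine's lifetime.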
Making a Case for Efficient Supercomputing: It is time for the computing community to use alternative metrics for evaluating performance.