DOI: 10.1145/3093338.3093341

We Have an HPC System: Now What?

Published: 9 July 2017

ABSTRACT

If you build it, will they come? Not necessarily. A critical need exists at mid-level and smaller research institutions for knowledge in managing and properly utilizing supercomputing resources. Simply having HPC hardware and some software is not enough. This paper relates the administrative experience of a mid-level doctoral university during the first several months of providing a new enterprise XSEDE [15] Compatible Basic Cluster (XCBC) [3, 4, 5] high-performance computing cluster to faculty and other researchers, including first-day urgencies, initial problems in the first few weeks, and the establishment of an ongoing management system.

References

1. Greg Bruno, Mason J. Katz, Frederico D. Sacerdoti, and Philip M. Papadopoulos. Rolls: Modifying a Standard System Installer to Support User-Customizable Cluster Frontend Appliances. In Proceedings of the 2004 IEEE International Conference on Cluster Computing, Washington, DC, September 2004.
2. Chui-hui Chiu, Nathan Lewis, Dipak Kumar Singh, Arghya Kusum Das, Mohammad M. Jalazai, Richard Platania, Sayan Goswami, Kisung Lee, and Seung-Jong Park. BIC-LSU: Big Data Research Integration with Cyberinfrastructure for LSU. XSEDE16, July 2016, Miami, FL.
3. Eric Coulter, Jeremy Fischer, Barbara Hallock, Richard Knepper, and Craig A. Stewart. Implementation of Simple XSEDE-Like Clusters: Science Enabled and Lessons Learned. XSEDE16, July 17-21, 2016, Miami, FL.
4. Jeremy Fischer, Richard Knepper, Matthew Standish, Craig A. Stewart, Barbara Hallock, Resa Alvord, Victor Hazlewood, and David Lifka. Methods for Creating XSEDE Compatible Clusters. XSEDE14, July 2014, Atlanta, GA.
5. Jeremy Fischer, Richard Knepper, Eric Coulter, Charles Peck, and Craig A. Stewart. XCBC and XNIT: Tools for Cluster Implementation and Management in Research and Training. In Proceedings of the 2015 IEEE International Conference on Cluster Computing, September 8-11, 2015, pp. 857-864.
6. Ian Foster, Rajkumar Kettimuthu, Stuart Martin, Steve Tuecke, Thomas Hauser, Daniel Milroy, Brock Palen, and Jazcek Braden. Campus Bridging Made Easy via Globus Services. XSEDE12, July 2012, Chicago, IL.
7. M. J. Katz, P. M. Papadopoulos, and G. Bruno. Leveraging Standard Core Technologies to Programmatically Build Linux Cluster Appliances. In Proceedings of the 2002 IEEE International Conference on Cluster Computing, Chicago, IL, October 2002.
8. Rajkumar Kettimuthu, Lukasz Lacinski, Mike Link, Karl Pickett, Steve Tuecke, and Ian Foster. Instant GridFTP. 9th Workshop on High Performance Grid and Cloud Computing, 2012.
9. Tom Madden. The BLAST Sequence Analysis Tool. 2013.
10. Matt Massie, Bernard Li, Brad Nicholes, Vladimir Vuksan, Robert Alexander, Jeff Buchbinder, Frederiko Costa, Alex Dean, Dave Josephsen, Peter Phaal, and Daniel Pocock. Monitoring with Ganglia. O'Reilly Media, Inc., 2012.
11. P. M. Papadopoulos, M. J. Katz, and G. Bruno. NPACI Rocks: Tools and Techniques for Easily Deploying Manageable Linux Clusters. In Proceedings of the 2001 IEEE International Conference on Cluster Computing, Newport, CA, October 2001.
12. Philip M. Papadopoulos, Mason J. Katz, and Greg Bruno. NPACI Rocks: Tools and Techniques for Easily Deploying Manageable Linux Clusters. Concurrency and Computation: Practice and Experience, 15(7-8):707-725, 2003.
13. Semir Sarajlic, Neranjan Edirisinghe, Yuriy Lukinov, Michael Walters, Brock Davis, and Gregori Faroux. Orion: Discovery Environment for HPC Research and Bridging XSEDE Resources. XSEDE16, July 2016, Miami, FL.
14. Craig A. Stewart, Richard Knepper, James Ferguson, Felix Bachmann, Victor Hazlewood, Ian Foster, Andrew Grimshaw, and David Lifka. What is Campus Bridging and What is XSEDE Doing About It? XSEDE12, July 2012, Chicago, IL.
15. John Towns, Timothy Cockerill, Maytal Dahan, Ian Foster, Kelly Gaither, Andrew Grimshaw, Victor Hazlewood, Scott Lathrop, Dave Lifka, Gregory D. Peterson, Ralph Roskies, J. Ray Scott, and Nancy Wilkins-Diehr. XSEDE: Accelerating Scientific Discovery. Computing in Science & Engineering, 16(5):62-74, Sept.-Oct. 2014.

Published in

PEARC '17: Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact
July 2017, 451 pages
ISBN: 9781450352727
DOI: 10.1145/3093338
General Chair: David Hart

      Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 9 July 2017


      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

PEARC '17 paper acceptance rate: 54 of 79 submissions (68%). Overall acceptance rate: 133 of 202 submissions (66%).
