
Bridges: a uniquely flexible HPC resource for new communities and data analytics

Published: 26 July 2015
DOI: 10.1145/2792745.2792775

ABSTRACT

In this paper, we describe Bridges, a new HPC resource that will integrate advanced memory technologies with a uniquely flexible, user-focused, data-centric environment to empower new research communities, bring desktop convenience to HPC, connect to campuses, and drive complex workflows. Bridges will differ from traditional HPC systems and support new communities through extensive interactivity, gateways (convenient web interfaces that hide complex functionality and ease access to HPC resources) and tools for gateway building, persistent databases and web servers, high-productivity programming languages, and virtualization. Bridges will feature three tiers of processing nodes having 128GB, 3TB, and 12TB of hardware-enabled coherent shared memory per node to support memory-intensive applications and ease of use, together with persistent database and web nodes and nodes for logins, data transfer, and system management. State-of-the-art Intel® Xeon® CPUs and NVIDIA Tesla GPUs will power Bridges' compute nodes. Multiple filesystems will provide optimal handling for different data needs: a high-performance, parallel, shared filesystem, node-local filesystems, and memory filesystems. Bridges' nodes and parallel filesystem will be interconnected by the Intel Omni-Path Fabric, configured in a topology developed by PSC to be optimal for the anticipated data-centric workload. Bridges will be a resource on XSEDE, the NSF Extreme Science and Engineering Discovery Environment, and will interoperate with other advanced cyberinfrastructure resources. Through a pilot project with Temple University, Bridges will develop infrastructure and processes for campus bridging, consisting of offloading jobs at periods of unusually high load to the other site and facilitating cross-site data management. Education, training, and outreach activities will raise awareness of Bridges and data-intensive science across K-12 and university communities, industry, and the general public.
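The three memory tiers described above suggest a natural placement rule: run each job on the smallest node class whose coherent shared memory can hold its working set. The Python sketch below illustrates that idea only; the 128 GB, 3 TB, and 12 TB capacities come from the abstract, but the selection function and its tier labels are illustrative assumptions, not part of Bridges' actual scheduler.

    def select_node_tier(working_set_gb):
        """Pick the smallest node class whose shared memory can hold a job's
        working set. Capacities are taken from the abstract; the routing
        logic and tier labels are illustrative assumptions only."""
        tiers = [
            ("128 GB node", 128),        # standard shared-memory tier
            ("3 TB node",   3 * 1024),   # large shared-memory tier
            ("12 TB node",  12 * 1024),  # extreme shared-memory tier
        ]
        for label, capacity_gb in tiers:
            if working_set_gb <= capacity_gb:
                return label
        raise ValueError("Working set exceeds the largest (12 TB) node class")

    # Example: a 2.5 TB in-memory analysis would be routed to a 3 TB node.
    print(select_node_tier(2560))   # -> "3 TB node"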


        • Published in

          XSEDE '15: Proceedings of the 2015 XSEDE Conference: Scientific Advancements Enabled by Enhanced Cyberinfrastructure
July 2015, 296 pages
ISBN: 9781450337205
DOI: 10.1145/2792745

          Copyright © 2015 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States



          Qualifiers

          • research-article

          Acceptance Rates

XSEDE '15 paper acceptance rate: 49 of 70 submissions (70%). Overall acceptance rate: 129 of 190 submissions (68%).
