Wolff's World: Sustaining America's High-Performance Computing Infrastructure – Part 1
Background on the National Research Council's Computer Science and Telecommunications Board Interim Report
The National Science Foundation (NSF) launched the Supercomputing Centers program in 1985, and NSFNET — initially a component of that program — provided a nationwide network interconnecting the first five centers1 as a way to broaden the accessibility of the computers to researchers across the country. Since that time, both networks and computers have become more capable, in step with Moore's Law. But while the nation's R&E networks – Internet2 and the family of state and regional networks – have flourished as a collaborative enterprise of the higher education community, the funding and the foci of America's open high-performance computing infrastructure have pitted U.S. universities against one another in recurrent competitions2, recounted in detail in an NSF Special Report on Cyberinfrastructure.
Despite its manifest advantages, the competitive funding model is not without drawbacks, as discussed below. In 2014, this and other factors relating to NSF's role as a provider of advanced computing resources led the then Office of Cyberinfrastructure at NSF to commission the Computer Science and Telecommunications Board (CSTB) of the National Research Council (NRC), the operating arm of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine, to perform a broad study of priorities and trade-offs in advanced computing in support of NSF programs.
The CSTB formed a study committee, which met, reviewed written materials, and received testimony and comments from community members. In 2014 the committee produced Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020: Interim Report4, published by the National Academy Press, and it has requested public comment on the Interim Report to inform its deliberations for a final report.
Report Findings on Funding for Advanced Computing
Although the Interim Report addresses a spectrum of challenges and possible NSF responses, here I will speak only to issues surrounding the funding model for advanced computing and its implications for the Internet2 community. Negative effects of NSF’s variable funding model over the past 30 years are succinctly summarized by these quotes from the Interim Report:
"NSF has long supported leading-edge cyberinfrastructure via a series of solicitations and open competitions. Although this has stimulated intellectual competition and increased NSF's financial leverage, it has also made deep and sustainable collaboration difficult among frequent competitors. Individual awardees, quite rationally, often focus more on maximizing their long-term probability of continued funding, rather than adapting and responding to community needs."5
"Frequent competitions can also make it more difficult for NSF-funded service providers to recruit and retain talented staff when the horizon for funding is only 2-5 years. This is especially true when the competition for information technology and computational science expertise with industry is so great."6
Potential Funding Model Alternatives
There is ample NSF precedent for long-term commitments of funding; the Interim Report suggests the vehicle known as "Major Research Equipment and Facilities Construction" (MREFC), which has been used to fund such major instruments as the IceCube neutrino detector at the South Pole, the Atacama Large Millimeter Array (ALMA) on the Chilean Altiplano, and the Ocean Observatories Initiative.
Another model, not cited in the Interim Report, is exemplified by the NSF Industry/University Cooperative Research Centers (I/UCRCs), which are collaborative affiliations of universities and industrial firms formed to address a specific research area. I/UCRCs are funded for an initial five-year period, with up to two five-year renewals based on performance and impact on the scientific field addressed.
What about Networks?
Although the charge to the study committee contains the wording:
"Advanced computing capabilities are used to tackle a rapidly growing range of challenging science and engineering problems, many of which are compute-, communications-, and data- intensive as well…,"7
and the committee was charged to consider, inter alia:
"3. Complementarities and trade-offs that arise among investments in supporting advanced computing ecosystems; software, data, communications,"8
the word "communications" is never again used in the Interim Report. The "complementarities and trade-offs" involving communications and networks are addressed in full by the statement that
"Analysis that uses data distributed across multiple locations requires costly, high-capacity network links, …"9
In part 2 of this blog series, I shall further address the funding models suggested in the Interim Report, discuss some of the community feedback requested for the Final Report, and conclude with possible implications and outcomes for Internet2 – both the organization and the community.
COMMITTEE ON FUTURE DIRECTIONS FOR NSF ADVANCED COMPUTING INFRASTRUCTURE TO SUPPORT U.S. SCIENCE IN 2017-2020
WILLIAM D. GROPP, University of Illinois, Urbana-Champaign, Co-Chair
ROBERT HARRISON, Stony Brook University, Co-Chair
MARK R. ABBOTT, Oregon State University
DAVID ARNETT, University of Arizona
ROBERT L. GROSSMAN, University of Chicago
PETER M. KOGGE, University of Notre Dame
PADMA RAGHAVAN, Pennsylvania State University
DANIEL A. REED, University of Iowa
VALERIE TAYLOR, Texas A&M University
KATHERINE A. YELICK, University of California, Berkeley
JON EISENBERG, Director, CSTB, and Study Director
SHENAE BRADLEY, Senior Program Assistant
REVIEWERS OF THE INTERIM REPORT
Amy W. Apon, Clemson University
Daniel E. Atkins III, University of Michigan
Thom H. Dunning, Northwest Institute for Advanced Computing
Susan L. Graham, University of California, Berkeley
Laura Haas, IBM Research
Tony Hey, Microsoft Research / University of Washington
Michael L. Klein, Temple University
Linda Petzold, University of California, Santa Barbara
The review was overseen by Elsa M. Garmire, Professor of Engineering, Dartmouth College.
1San Diego Supercomputer Center (SDSC), the National Center for Supercomputing Applications (NCSA), the Pittsburgh Supercomputing Center (PSC), the Cornell Theory Center (CTC), and the John von Neumann Center (JVNC). Though not a part of the Centers program, the supercomputer at the National Center for Atmospheric Research (NCAR) was also connected to NSFNET.
2Report of the Panel on Large Scale Computing in Science and Engineering, Peter D. Lax, Chairman, December 1982
- From Desktop to Teraflop, NSF Blue Ribbon Panel on High Performance Computing, August 1993
- Report of the Task Force on the Future of the NSF Supercomputer Centers Program, Edward F. Hayes, Chairman, September 1995
- Information Technology Research: Investing in Our Future, President's Information Technology Advisory Committee Report to the President, February 1999
- Revolutionizing Science and Engineering through Cyberinfrastructure: Report of the National Science Foundation Blue-Ribbon Advisory Panel on Cyberinfrastructure, January 2003
4Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020: Interim Report (2014), Committee on Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science in 2017-2020, National Academy Press, ISBN 978-0-309-31379-7
5National Academy Press, op. cit., p. 21
6National Academy Press, op. cit., p. 21
7National Academy Press, op. cit., p. viii
8National Academy Press, op. cit., p. viii
9National Academy Press, op. cit., p. 14
Steve Wolff is principal scientist at Internet2. Wolff brings more than 40 years of leadership in the development, management, and operations of network technologies and is regarded as one of the original visionaries and architects behind the Internet of today, including leading the development of NSFNET, a key precursor to the commodity Internet.