
title:

Building a Science-based Case for Large-scale Simulation

byline:

David Keyes (david.keyes@columbia.edu)


text: (~1200 words)
What would you do with one hundred times greater computer power and storage than you have available today? One thousand times? And what algorithmic and software technologies would be needed, in addition, to exploit such resources?
These questions were posed to more than 300 of the nation’s leading computational scientists by the Office of Science of the U.S. Department of Energy (DOE) at a two-day June workshop in Arlington, Virginia. Their answers, captured by a team of about fifty contributing authors, are now appearing in a two-volume report that is expected to strengthen the commitment of the DOE to large-scale simulation research. Volume 1 of A Science-based Case for Large-scale Simulation (informally dubbed “SCaLeS”) was delivered to Raymond L. Orbach, Director of the Office of Science, on July 30, 2003, and is available for public download. Orbach, who delivered the charge for the report at the June workshop, has established a reputation as an advocate of large-scale simulation as an important complement to theory and experiment in the scientific mission of the DOE.
Eight recommendations appear in the SCaLeS report, six of which echo themes familiar from earlier federal reports on supercomputing dating at least as far back as the so-called “Lax Report” of 1982. (The inter-agency Lax report is credited with spurring the launch of the supercomputer centers of the National Science Foundation.) The SCaLeS report recommends:

  • Extensive investment in new computational facilities, striking a balance between capability computing for those “heroic simulations” that cannot be performed any other way and capacity computing for “production” simulations that contribute to the steady stream of progress;

  • Sustained collateral investment in software infrastructure, which together with the hardware constitutes the “engines of scientific discovery” across a broad portfolio of scientific applications;

  • Algorithm research and theoretical development, since improvements in basic theory and algorithms have contributed as much to gains in simulation capability over the first six decades of scientific computing as have improvements in hardware and software;

  • Proactive recruitment of computational scientists as early as possible in the educational process, so that the number of trained computational science professionals is sufficient to meet present and future demands;

  • Investments in network infrastructure for access and resource sharing, as well as in the software needed to support collaborations among distributed teams of scientists;

  • A federal complement to commercial research and development of innovative, high-risk computer architectures suited to the special requirements of scientific and engineering simulations.

To justify these investments, dozens of scientific goals spanning DOE’s mission are tied in the report to enhanced simulation capability – either as the only means of achieving the goals, or as a way of reducing the expense and shortening the lead time of research campaigns in which simulation is combined with theory and experiment. The authors noted that leadership in computational science is easily lost to other countries, since the know-how is in the public domain and the cost of simulation is continually dropping.


The remaining two recommendations could not be anticipated as clearly from earlier reports, and appear to mark the beginning of a new era of computational science. The scientists argued that a “phase transition” is occurring in which multidisciplinary research teams are forming to systematically exploit a natural fusion of advances in scientific models, mathematical algorithms, computer architecture, and scientific software engineering. Just as research in many branches of experimental physics evolved from individual investigators or small teams to large groups centered around billion-dollar facilities, such as accelerators, lasers, telescopes, and tokamaks, engaging not only physicists, but also statisticians, engineers, support technicians, etc., so computational science is spawning multidisciplinary teams of scientists and engineers, mathematicians, computer scientists, and support personnel centered around large computers offering teraflop/s of processing power, petabytes of storage, visualization facilities, and high bandwidth networking. The United States, with its enviable collection of multi-program research laboratories and its tradition of university-laboratory collaboration, is well positioned to induce such “phase transitions” throughout the sciences and engineering, from plasma physics to biotechnology.
DOE’s current initiative Scientific Discovery through Advanced Computing (SciDAC) was viewed by the scientists as paradigmatic of the multidisciplinary future of large-scale computational science research. A simple factor of a hundred or a thousand in raw simulation capability, without a concurrent improvement in algorithms, does not go very far for three-dimensional time-dependent problems. Simply doubling the resolution of such a problem uniformly in each of its four dimensions (three in space, one in time) eats up a factor of sixteen in computational complexity, and scientists need many such doublings. Therefore, better models and better adaptive strategies will be required along with bigger computers. Under SciDAC sponsorship, scientists, mathematicians, and computer scientists are already joining forces to investigate and demonstrate such gains. Computational astrophysicist Tony Mezzacappa of Oak Ridge National Laboratory, who is directing a multidisciplinary group simulating supernova collapse, has stated that he “would never go back” to working without mathematicians and computer scientists on his team. While some groups under SciDAC, like Mezzacappa’s, are developing the next generation of community codes for users, other groups are building tools for the developers themselves, so that the latest algorithmic technology migrates not just into one application but becomes available, through a common interface, to many applications.
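To make the arithmetic explicit (a back-of-envelope illustration consistent with the factor of sixteen cited above, not a calculation taken from the report): a time-dependent three-dimensional simulation resolved with N points in each of its four dimensions requires work that grows like

\[ \text{cost} \;\propto\; N_x\,N_y\,N_z\,N_t \;=\; N^4, \qquad \frac{(2N)^4}{N^4} \;=\; 16, \]

so each uniform doubling of resolution multiplies the cost by sixteen, and k doublings multiply it by \(16^k\). Even a thousandfold increase in raw capability therefore buys only about \( \log_{16} 1000 \approx 2.5 \) resolution doublings if the algorithms stand still.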
Volume 2 of the SCaLeS report contains 27 technical chapters in various areas of science, mathematics, and computer science central to the Department of Energy’s scientific mission. Scientific areas covered include: accelerator design, astrophysics, biology, chemistry, climate, combustion, environment, materials, nanoscience, plasma physics, and quantum chromodynamics (elementary particle physics). In each of these areas, experts from DOE laboratories, universities, industry, and other federal agencies projected what scientific questions could be addressed when the next two or three orders of magnitude of computational power and storage become available.
Mathematical methods common to simulation in many or all of these areas of computational science were also studied for their projected impact, including multiphysics modeling, multiscale modeling, uncertainty quantification, computational fluid dynamics, transport and kinetic methods, meshing methods, solvers and “fast” algorithms, and discrete mathematics and algorithms. Computer science research deemed critical to progress in computational science at the scale envisioned by the DOE includes: visual data exploration, data management and analysis, programming models and component technology, software engineering and management, computer performance engineering, network access and resource sharing, systems software, and advanced architecture.
In addition to the plenary address from Orbach, Peter Lax of the Courant Institute gave a plenary retrospective on the report that his panel had created two decades earlier. John Grosh of the Department of Defense, who is co-directing the federal High End Computing Revitalization Task Force (HECRTF), also addressed the SCaLeS workshop and urged the scientists to concentrate on the implications of high-end computing for science, while his group (which had conducted its own workshop the previous week and is also scheduled to report this summer) concentrated more on how to deliver the cycles that scientists and other users require.
In the aftermath of the success of the Japanese Earth Simulator, which has begun to attract U.S. scientists as users, the national supercomputing community finds itself in a period of introspection. Several panels and workshops met in the spring of 2003 to help define the future of various aspects of federally sponsored high-performance computing. The National Academy of Sciences convened a panel on the “Future of Supercomputing,” and the JASONs met during the same week as the SCaLeS workshop to evaluate the use of supercomputers in the Advanced Simulation and Computing (ASCI) initiative of the National Nuclear Security Administration (NNSA) wing of the DOE.
An expanded version of the SCaLeS report, with more room for coverage outside of DOE’s immediate mission areas and for bibliographic information, will appear as a SIAM book, the first in a new series on Computational Science & Engineering.

Note: The 72-page Volume 1 of the SCaLeS report is available at http://www.pnl.gov/scales.


Note: David Keyes is Professor of Applied Mathematics at Columbia University and the Acting Director of the Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory. Together with Phil Colella of Lawrence Berkeley National Laboratory, Thom Dunning, Jr. of the University of Tennessee and Oak Ridge National Laboratory, and William Gropp of Argonne National Laboratory, he co-edited the SCaLeS report.