Simulation-based engineering and science




CONCLUSIONS

At the time of the WTEC panel visit, the RIKEN team had finalized the detailed hardware design of the 10 Petaflops NGSC in Japan, but no details were given to us regarding the systems software, which is apparently the responsibility of the participating companies (Fujitsu, Hitachi, and NEC). The National Institute of Informatics, a partner in the NGSC project, has proposed a grid infrastructure that will connect the new supercomputer to other existing supercomputers. Unlike the Earth Simulator, the new computer will be open to all, with 10% of the total time allocated to universities. The new supercomputer center at RIKEN will also serve as an HPC education center and will offer a series of lectures on computer-science-related issues as well as on applications. The center's staff will work closely with universities in trying to shape computational science curricula toward petaflop-scale algorithms, tools, and applications.

The budget of the NGSC project includes funding for applications in the life sciences at RIKEN and the nanosciences at IMS. There are no plans for validation at this point, although input from experimentalists will be sought for future validations. The PIs of the project appreciate the importance and the difficulty of validating simulations of complex systems such as the living cell. The emphasis is on applications software, which will be open source and available to all—“There are no borders in science,” Dr. Himeno told us. However, no systematic efforts will be undertaken at this point for middleware development, because the Japanese government does not have strong motivation to develop software. The RIKEN PIs see the development of compilers, for example, as a next step after the demonstration of petaflop performance in the seven key applications selected.

Finally, MEXT is committed to the continuous development of supercomputers, both to advance computational science for competitiveness in R&D and to develop novel low-power CPU components that will eventually find their way into consumer information technology.

Site: Shanghai Supercomputer Center

585 Guoshoujing Road

Shanghai Zhangjiang Hi-Tech Park

Shanghai 201203 P.R. China

http://www.ssc.net.cn/en/index.asp
Date Visited: December 6, 2007.
WTEC Attendees: M. Head-Gordon (report author) and S. Glotzer
Hosts: Jun Yuan, Vice Director
Tel.: +86 21 50270953; Fax: +86 21 50801265
Email: jyuan@ssc.net.cn

Bo Liu, Scientific Computing Specialist

Dr. Tao Wang, Vice-Manager, Scientific Computing Department

Jiancheng Wu, Vice-Manager, Engineering Computing Department

Ether Zhang, Marketing and International Collaboration Manager

Background

The Shanghai Supercomputer Center (SSC) is China’s first supercomputer center to accept applications from the general public. It opened in December 2000 as the result of an initiative of the Shanghai Municipal Government. Faced with a request to acquire a supercomputer for weather-forecasting purposes, local leaders reasoned that such a major piece of infrastructure should benefit as broad a range of society as possible. As a result, the first general-purpose supercomputer center in China was established. Today it serves some 270 groups and roughly 2,000 users from many areas of scientific and engineering research, in both universities and industry. Seventy staff members currently manage the center, which is housed in a dedicated building in the Zhangjiang Hi-Tech Park.

The visiting panel met with five members of the SSC for approximately two hours, during which Vice Director Yuan gave a presentation, followed by extensive discussion in which the three SSC technical representatives actively participated. The main points covered in the presentation and discussion are summarized below.

R&D/Services

Computing Hardware

The main supercomputer, installed in 2004, is a 532-node Dawning 4000A system with a peak performance of 10 TFlops, containing 4 processors (2.4-GHz AMD Opteron 850) per node. Aggregate memory is 4 TB, with aggregate disk storage of 95 TB controlled by 16 storage nodes, each with 4 processors. The system interconnect is Myrinet. This machine was designed and assembled in China and represented an investment of approximately 100 million RMB. Upon installation, it was ranked 10th on the Top 500 list (June 2004). Today it is ranked 3rd within China among the machines for which information is available (the top machine is used for meteorology, and the 2nd for oil-industry modeling). CPU usage shortly after installation was in the 40% range, but it was over 80% for all of 2006 and 2007.
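As a plausibility check on the quoted peak figure, the arithmetic below multiplies out the configuration described above. The assumption that the Opteron 850 core retires two double-precision floating-point operations per cycle is ours, not the report's; with it, the node count reproduces the quoted 10 TFlops.

```python
# Sketch: peak-FLOPS arithmetic for the Dawning 4000A configuration above.
nodes, cpus_per_node = 532, 4
clock_hz = 2.4e9
flops_per_cycle = 2  # assumed for the Opteron 850 (K8) core

peak = nodes * cpus_per_node * clock_hz * flops_per_cycle
print(f"{peak / 1e12:.1f} TFlops")  # ~10.2 TFlops, consistent with the text
```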

Future hardware plans call for the installation of a new Chinese-sourced machine by the end of 2008, which will yield a peak performance of over 200 TFlops. This machine will be 50% financed by the national government through a large grant from the “863 program.” In the longer run, perhaps in 2011 or 2012, a larger machine is anticipated under the same program.

User Community

The roughly 2,000 users are diverse in background and interests. Approximately 53% are from natural-science research institutes and universities and consume about 80% of the computer time; approximately 47% are from industry and consume about 20% of the computer time. Most users employ the center for capacity computing (many relatively small jobs), while only a few run very large massively parallel jobs. This is believed to be due more to limitations on the available resources than to intrinsic user demand. Thus, approximately 75% of recent jobs employ 1–4 CPUs, consuming about 20% of the computer time, with roughly similar proportions of CPU time going to jobs using 5–8, 9–16, and 16–32 CPUs. Most users are not expert simulators and are driven by the need to solve particular applications problems.

Highlights of the results produced at the SSC include


  1. Modeling the aerodynamics of the first commercial regional jet produced in China (about 10 million cells)

  2. 3D nonlinear analysis of the behavior of a tunnel in an earthquake

  3. Flood prediction

  4. Specific drug design accomplishments (related to the work discussed separately at Dalian University)

  5. First-principles atomistic modeling for a wide range of problems in materials science and condensed-matter physics.

Funding Model

The long-term vision is to become a profitable high-performance computing (HPC) applications service provider (HPC-ASP). This is far from a reality at present, both because the Shanghai Supercomputer Center is a trailblazer in China and must first help create the market for such services, and because the SSC currently provides most of its computing resources to nonprofit natural-science research communities. At the moment, the SSC provides users with computing resources and services for fees that cover approximately 30% of its operational costs.



Organization and Staff Activities

The technical staff members are organized into five technical departments: (1) scientific computing, (2) engineering computing, (3) research and development, (4) technical support, and (5) IT (responsible for networking, PC and server management, and internal information management). Their activities are wide-ranging, including maintaining the computing infrastructure, directly supporting users, porting and tuning software, training small groups of users, and some software development. The latter can be either independent or in partnership with universities, government, or industry. Center management finds it challenging to recruit good people with appropriate experience, partly because of competitive pressure from corporations. Other possible reasons include the relatively short history of HPC development and application in China and the fact that not every university has been prepared to provide HPC education to its graduates. Therefore, the people the SSC hires are often experts in application areas who must learn about high-performance computing on the job.



U.S. Export Restrictions

In the view of the WTEC team’s SSC hosts, HPC is an international activity. However, employees of the SSC are subject to U.S. entry restrictions, which prevent them from visiting unclassified U.S. supercomputing centers, participating in supercomputing conferences, or visiting IT companies and universities in the United States. There are no such restrictions on visiting Japan or Europe. In light of this situation, the WTEC visiting team was grateful for the access and information that our hosts at the Shanghai Supercomputer Center provided.

Site: Shanghai University

Shanghai, 200072 P.R. China

http://www.shu.edu.cn/en/indexEn.htm
Date of Visit: December 6, 2007.
WTEC Attendees: S. Kim (report author), J. Warren, P. Westmoreland, and G. Hane
Hosts: Prof. Qijie Zhai, Assistant President, Shanghai University
and Director, Center for Advanced Solidification Technology
Email: qjzhai@mail.shu.edu.cn

Prof. Wu Zhang, School of Computer Science and Engineering


Email: wzhang@mail.shu.edu.cn

Prof. Li Lin, Institute of Material Science & Engineering,


Head, Metal Materials Section

Changjiang Chair Prof. YueHong Qian,


Institute of Applied Mathematics & Mechanics
Email: qian@shu.edu.cn

Prof. Jinwu Qian, Dean, Sino-European School of Technology



Background

Shanghai University (SHU) arrived at its present form in 1994 through the merger of four universities in the city region: Shanghai University of Technology, Shanghai University of Science & Technology, Shanghai Institute of Science & Technology, and the former Shanghai University. The Shanghai University of Science & Technology was formed by the East China Branch of the Chinese Academy of Sciences (CAS) in 1958 and had close historical ties to the CAS; the president and deans of that university were the head and institute directors of the Shanghai branch of the CAS. Today, Shanghai University has a faculty of over 5,400, including 380 at the rank of Professor and 780 Associate Professors, among them 7 Academicians of the CAS and the Chinese Academy of Engineering (CAE). Student enrollment is just above 30,000, including 4,400 PhD and Master’s degree students. The university research budget is almost RMB 300 million.



SBES Research

An overview of SBES research activities was delivered through brief presentations by selected faculty leaders in the engineering and computer science departments (our itinerary allowed only two hours at this site). Most notable and helpful for our report was the tour de force presentation by Prof. Wu Zhang of the School of CSE: a 68-slide PowerPoint presentation delivered in under 30 minutes. The size and scale of HPC at this site suggest that our brief visit barely touched the highlights of the research activities on the SHU campus. The talks also brought out the high quality of international collaborations in applied research, most notably with industrial firms in Europe.



Overview of HPC Research and Education at SHU

Prof. Wu Zhang (Computer Science and Engineering) gave a comprehensive overview of HPC research and education at SHU; after an introduction to the history of HPC at Shanghai University, he covered three topics: infrastructure, algorithms, and applications.



  • History and Background: SHU has a long history in HPC, continuing to the present under the leadership of President and CAS Academician Weichang Qian. The role of simulation and modeling is appreciated not only in science and technology but also beyond, including the social sciences, fine arts, economics, and management. Accordingly, there are five university-level centers involved in HPC: (1) Center for Advanced Computing and Applications; (2) Center of CIMS; (3) Center of Multimedia (Visualization); (4) E-Institute of Grid Technology; and (5) High-Performance Computing Center. Significant HPC activities are also found in the School of Computer Science and Engineering, the College of Science, and the College of Engineering; in the talk these were grouped into three areas: numerical methods (lattice-Boltzmann method, wavelet and spectral methods, mesh-free methods); computational physics and chemistry (transport-diffusion-reaction modeling, geophysics, meteorology, earthquake modeling); and life sciences and materials sciences (bioinformatics, control theory, materials simulations with grid and parallel computing). SHU is also active in hosting international conferences in HPC.

  • HPC Infrastructure, Past, Present, and Future: The 450-GFlops ZQ-2000, built in 2000, was the first cluster at SHU. It has 218 processors (109 nodes of 800-MHz Pentium III) and 26 GB of total memory, connected by Myrinet switches. In 2004, this system was eclipsed by the 2.15-TFlops ZQ-3000, with 392 GB of total memory and 352 processors (3.06-GHz Xeon) connected by InfiniBand; the ZQ-3000’s LINPACK benchmark is 1.51 TFlops. For grid computing, SHU/CSE has embraced OGSA (Open Grid Services Architecture), an evolving standard managed by the Global Grid Forum, along with the WSRF (Web Services Resource Framework) protocol and the middleware of the Globus Toolkit (GT4). In terms of future infrastructure, there is planned activity and research on optical computing.

  • Algorithms: There is significant algorithm activity at SHU. Given the time constraints, the talk focused on key projects in numerical linear algebra and their impact on the parallelization of CFD codes. The researchers’ hybrid combination of the highly scalable PPD (Parallel Partitioning Diagonal) method with K. V. Fernando’s BABE (Burn At Both Ends, NAG 1996) factorization forms the basis of their tridiagonal solvers (a minimal sketch of the two-sided elimination idea appears after this list). Achievements were illustrated with several benchmark problems in computational fluid dynamics.

  • Applications: Dr. Zhang matched applications to the list of HPC activities in the departments and schools. A notable example highlighted in his talk was CFD for aircraft design; many others ranged from chip-cooling design to wind flow around power plants. Examples were summarized quickly due to time constraints.
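As promised above, the following is a minimal serial sketch of the “burn at both ends” idea behind Fernando’s BABE factorization: elimination proceeds from both ends of the tridiagonal system toward the middle, where a small 2×2 system couples the two halves. This is an illustrative toy with names of our own choosing, not SHU’s PPD/BABE hybrid; in the parallel setting, each processor would apply such two-sided elimination to its own partition of the system.

```python
import numpy as np

def babe_tridiag_solve(a, b, c, d):
    """Solve T x = d, T tridiagonal with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused). Assumes n >= 3 and
    safe pivots (e.g., diagonal dominance). Illustrative only."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    m = n // 2  # meeting point of the two sweeps

    # Top-down sweep: eliminate sub-diagonal entries in rows 1..m.
    for i in range(1, m + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]

    # Bottom-up sweep: eliminate super-diagonal entries in rows n-2..m+1.
    for i in range(n - 2, m, -1):
        w = c[i] / b[i + 1]
        b[i] -= w * a[i + 1]
        d[i] -= w * d[i + 1]

    # The sweeps meet in a 2x2 system for x[m], x[m+1]; solve by Cramer's rule.
    x = np.zeros(n)
    det = b[m] * b[m + 1] - c[m] * a[m + 1]
    x[m] = (d[m] * b[m + 1] - c[m] * d[m + 1]) / det
    x[m + 1] = (b[m] * d[m + 1] - a[m + 1] * d[m]) / det

    # Back-substitute outward in both directions from the middle.
    for i in range(m - 1, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    for i in range(m + 2, n):
        x[i] = (d[i] - a[i] * x[i - 1]) / b[i]
    return x

# Quick check against a dense solve on a diagonally dominant system.
rng = np.random.default_rng(0)
n = 9
a, c, d = rng.random(n), rng.random(n), rng.random(n)
b = 4.0 + rng.random(n)  # diagonal dominance keeps pivots safe
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(babe_tridiag_solve(a, b, c, d), np.linalg.solve(T, d))
```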

Other HPC Activities

  • Prof. Li Lin presented his CALPHAD application to metallic systems. This work is in close collaboration with top practitioners and automotive industrial firms in Europe and the United States.

  • Dr. Z. M. Lu (whose own research interests are in turbulence models and particle transport in the air passages of the lung) shared a document giving an overview of the SBES research in computational mechanics at the Shanghai Institute of Applied Mathematics and Mechanics, featuring the research projects of Profs. Peifeng Weng, Peng Zhang, Yuehong Qian, Wei Feng, and Yuming Chen, whose foci are summarized below:

  • Prof. Weifeng Chen: Numerical simulation of low-Reynolds-number flow with applications to micro-craft flow control; the B-B turbulence model and mixed LES/RANS methods for separated flows; aerodynamics of low-aspect-ratio vehicles; numerical simulations of unsteady viscous flow around rotating helicopter wings

  • Prof. Peng Zhang: CFD (theory of characteristics, the Riemann problem, simulation of hyperbolic conservation laws) and simulation of traffic flow by high-order model equations

  • Prof. Yuehong Qian: Lattice-Boltzmann method (LBM) and applications; hypersonic flow simulation; particle flow simulation; nanofluid flow simulation; LES (large-eddy simulation) with recent variants of the Smagorinsky model; LES-LBM implemented with the 19-velocity (D3Q19) lattice model (a minimal LBM sketch follows this list)

  • Prof. Wei Feng and Prof. Yuming Chen: Finite element method (hybrid elements); boundary element method, mathematical theory of meshless methods; nanomechanics (molecular dynamics and the effect of vacancy defects and impurities on elastic moduli); applications to elasticity, elastodynamics, and fracture
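To make the lattice-Boltzmann work mentioned above concrete, here is a minimal single-relaxation-time (LBGK) sketch. To stay short, it uses the 2D D2Q9 lattice with periodic boundaries and no turbulence model, rather than the D3Q19 LES-LBM combination the group actually employs; the example decays a shear wave, an assumption chosen for illustration.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and the standard LBGK weights.
V = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order LBGK equilibrium distribution, shape (9, nx, ny)."""
    cu = 3.0 * (V[:, 0, None, None] * ux + V[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return W[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau):
    """One collide-and-stream step on a fully periodic domain."""
    rho = f.sum(axis=0)
    ux = (f * V[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * V[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau       # BGK collision
    for k in range(9):                               # streaming
        f[k] = np.roll(np.roll(f[k], V[k, 0], axis=0), V[k, 1], axis=1)
    return f

# Example: decay of a shear wave; lattice viscosity is nu = (tau - 0.5) / 3.
nx = ny = 64
ux0 = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] * np.ones((nx, 1))
f = equilibrium(np.ones((nx, ny)), ux0, np.zeros((nx, ny)))
for _ in range(200):
    f = lbm_step(f, tau=0.8)
```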

Computing Facilities

The facility details are as described in Prof. Wu Zhang’s discussion of HPC infrastructure.



Discussion

A good part of our discussion focused on SBES education:



  • A course has been developed to teach parallel programming to graduate students in computer science, but it is considered too advanced for undergraduate students, e.g., non-CS undergraduate science and engineering majors. The course text is a translation of one from the University of Minnesota.

  • A short course has been made available, as needed, for non-CS students and researchers interested in HPC resources and HPC usage.

  • There is a biweekly seminar series on high-tech algorithms.

  • There is frustration over the disparity of funding for software vs. hardware, and most software packages were viewed as too expensive.

Conclusions

SHU has comprehensive capabilities that span the entire spectrum of HPC, from machines to algorithmic research.



References

[1] W. Zhang, Z. Chen, R. Glowinski, and W. Tong, eds. 2005. Current Trends in High Performance Computing and Its Applications (proceedings), 185–196. Berlin: Springer-Verlag.

[2] W. Cai and W. Zhang. 1998. An adaptive SW-ADI method for 2-D reaction diffusion equations. J. Comput. Phys. 139(1):92–126.

[3] D. Zhou, W. Cai, and W. Zhang. 1999. An adaptive wavelet method for nonlinear circuit simulation. IEEE Trans. Circuits and Systems I: Fundamental Theory and Applications 46(8):931–939.

[4] X.-H. Sun and W. Zhang. 2004. A parallel two-level hybrid method for tridiagonal systems and its application to fast Poisson solvers. IEEE Trans. Parallel and Distributed Systems 15(2):97–106.

Comments from Dr. Qi-Jie Zhai, Professor of Materials Science and Assistant President, SHU

The report Simulation-Based Engineering Science, released by NSF in 2006, was very impressive. However, it focuses largely on computer simulation rather than experimental simulation, which may reflect the current American, and even global, academic trend in simulation. Indeed, a deep understanding of the internal mechanism of the simulated problem is fundamental to numerical simulation, while computing contributes to it only technically. We undoubtedly have a long way to go, especially in the fields of non-equilibrium, dissipative, and multiscale complex phenomena, before our knowledge of many problems can adequately satisfy the conditions for simulation, though success has already been achieved in some special cases.

With regard to grand challenge problems, simulation would be the most efficient approach. I would say that numerical (computer) simulation should work together with experimental simulation. Unfortunately, there is an obvious divergence between academia and industry; the latter, in fact, pays little attention to numerical simulation in China. For instance, we have experimentally simulated the solidification of tons of steel using only hundreds of grams of steel in an experimental simulation device. While enterprises have benefited from our equipment through cost savings and reduced time, they seem to ignore the fact that successful experimental simulation relies in part on numerical simulation.

Of course, the limitations of effective experimental simulation appear to be related to its diversification and customization compared with numerical simulation. For this reason, experimental simulation is comparatively difficult to develop and deserves close attention from researchers.

Consequently, to advance simulation, experimental simulation should be encouraged alongside numerical simulation. Like the two wheels of one bicycle, the two sides of simulation need to inform each other and develop in coordination.
Site: The Systems Biology Institute (SBI)

Department of Systems Biology, Cancer Institute

Japan Foundation for Cancer Research

Room 208, 3-10-6, Ariake, Koto-Ku

Tokyo 135-8550, Japan

http://www.sbi.jp/index.htm
Date Visited: December 3, 2007
WTEC Attendees: L. Petzold (report author), P. Cummings, G. Karniadakis, T. Arsenlis, C. Cooper, D. Nelson
Hosts: Dr. Hiroaki Kitano, Director, SBI

Email: kitano@symbio.jst.go.jp

Dr. Yuki Yoshida, Researcher, SBI

Email: yoshhida@symbio.jst.go.jp

Kazunari Kaizu, Researcher, SBI

Email: kaizu@symbio.jst.go.jp

Koji Makanae, SBI

Email: makanae@symbio.jst.go.jp

Yukiko Matsuoka, SBI

Email: myukiko@symbio.jst.go.jp

Hisao Moriya, Researcher, Japan Science and Technology Agency


Department of Systems Biology, Cancer Institute

Email: hisao.moriya@jfcr.or.jp



BACKGROUND

The Systems Biology Institute, headed by Dr. Hiroaki Kitano, is a nonprofit institute funded mainly by the Japanese government, with its headquarters office in Harajuku in uptown Tokyo and experimental laboratories at the Cancer Institute of the Japan Foundation for Cancer Research (http://www.jfcr.or.jp/english/) and the RIKEN Genome Science Center. Dr. Kitano started working on systems biology in 1993, with models of embryogenesis based on ordinary differential equations (ODEs) and partial differential equations (PDEs). He is widely recognized as one of the early pioneers of the emerging field of systems biology.

The budget for the institute is roughly $2 million per year, including indirect costs. The bulk of the funding expires in September 2008. The institute funds roughly 10 people, including 7 researchers. Approximately half of the budget goes to research; the other half goes to infrastructure, including rent, power, etc. Most computation is done on a 50-CPU cluster, and researchers also have access to clusters elsewhere. Dr. Kitano is a member of the research priority board of RIKEN, which is responsible for the forthcoming RIKEN supercomputer project; institute researchers are likely to have access to that computer. There is one software professional on the team, and some software development is outsourced competitively to contractors. The Systems Biology Institute owns the CellDesigner software; the source code is made available to some collaborators.

The current research plan focuses on the development of experimental data and software infrastructure. The software infrastructure includes Systems Biology Markup Language (SBML), Systems Biology Graphical Notation (SBGN), CellDesigner, and Web 2.0 Biology, designed for the systematic accumulation of biological knowledge. Biological systems under investigation include cancer robustness, type 2 diabetes, immunology, infectious diseases, metabolic oscillation, cell cycle robustness, and signaling network analysis. The experimental infrastructure under development includes the gTOW assay described below, microfluidics, and tracking microscopy.
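SBML is an XML exchange format for biochemical models, so third-party tools can read what CellDesigner and similar editors write. As a concrete illustration, the sketch below uses the open-source python-libsbml bindings to load a model and list its species and reactions; the file name "model.xml" is a placeholder, and this is a generic example rather than SBI's own tooling.

```python
import libsbml  # pip install python-libsbml

# "model.xml" is a placeholder for any SBML file, e.g. one saved by CellDesigner.
doc = libsbml.readSBMLFromFile("model.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()

model = doc.getModel()
print(f"{model.getNumSpecies()} species, {model.getNumReactions()} reactions")

for species in model.getListOfSpecies():
    print(species.getId(), species.getInitialConcentration())

for reaction in model.getListOfReactions():
    reactants = [r.getSpecies() for r in reaction.getListOfReactants()]
    products = [p.getSpecies() for p in reaction.getListOfProducts()]
    print(reaction.getId(), reactants, "->", products)
```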



R&D ACTIVITIES

The PAYAO Web 2.0 Community Tagging System is being developed for tagging SBML models. CellDesigner is the gateway, and simulation technology is the back end; this group is focusing on the gateway. Interaction with the system is via a control panel, the SBW menu, or by saving to another format, for example MATLAB or Mathematica, to run the simulation. The system can register models, restrict access to all or part of a model, search tags, tag models, and link to PubMed IDs. It provides a unified gateway to the biological information network. Dr. Kitano is talking to publishers about sending personalized collections of tags. The alpha release of this system is currently underway.

Why the investment in software infrastructure? Dr. Kitano believes that software is critical to the development of systems biology as a field. Recognizing that it is difficult to publish software, the merit system in his lab values software contributions as well as publications. Funding for the software comes from a Japan Science and Technology Agency (JST) and New Energy and Industrial Technology Development Organization (NEDO) grant for the formation of international software standards, which is also funding efforts at Caltech and the EBI. There is also funding from various projects of the Ministry of Education, Culture, Sports, Science and Technology (MEXT).

Computational cellular dynamics efforts are also focusing on techniques for obtaining the data needed to develop and validate the computational models. In particular, microfluidics is used as a way to better control conditions to grow yeast.

Key issues in cellular architecture being investigated include the implications of robustness vs. fragility for cellular architecture. Some systems have to be unstable to be robust (for example, cancer). Points of fragility in a system are important to determining drug effectiveness. The robustness profile is used to reveal principles of cellular robustness, to refine computer models, and to find therapeutic targets. Yeast (budding yeast and fission yeast) is used as the model organism, and the cell cycle is used as the model system. The models are currently ODE models. The limits of parameters are used as indicators of robustness: for example, how much can you increase or decrease a parameter without disrupting the cell cycle? (A toy sketch of this parameter-limit idea follows.) This is a different type of uncertainty analysis than most of the current work in that area in the United States or elsewhere.
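The sketch below illustrates the parameter-limit flavor of robustness analysis just described. Everything here is an assumption for illustration: the Brusselator oscillator stands in for SBI's actual cell-cycle ODEs, and the oscillation test (late-time peak-to-peak amplitude) is a crude stand-in for a proper cycle-viability criterion. One parameter is scaled up and down, and the range of scalings that preserves oscillation is reported as the robustness measure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for a cell-cycle model: the Brusselator oscillates when b > 1 + a**2.
def brusselator(t, xy, a, b):
    x, y = xy
    return [a - (b + 1) * x + x**2 * y, b * x - x**2 * y]

def oscillates(a, b, tol=1e-3):
    """Crude oscillation test: late-time peak-to-peak amplitude of x."""
    sol = solve_ivp(brusselator, (0, 200), [1.0, 1.0], args=(a, b),
                    t_eval=np.linspace(100, 200, 2000), rtol=1e-8)
    x = sol.y[0]
    return x.max() - x.min() > tol

def parameter_limits(a=1.0, b=3.0, factors=np.geomspace(0.25, 4.0, 41)):
    """Scale b by each factor and report the range preserving oscillation,
    in the spirit of the parameter-limit robustness measure described above."""
    ok = [s for s in factors if oscillates(a, s * b)]
    return min(ok), max(ok)

lo, hi = parameter_limits()
print(f"oscillation survives scaling b by factors in roughly [{lo:.2f}, {hi:.2f}]")
```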

Experimental methods are required that can comprehensively and quantitatively measure parameter limits. SBI researchers have developed Genetic Tug of War (gTOW), an experimental method to measure cellular robustness. The gTOW method introduces a specially designed plasmid carrying a gene of interest that amplifies itself during growth, to see how much this changes the cell cycle. This technology has implications for drug development.

The WTEC team’s hosts at SBI observed that the problems that will need the RIKEN supercomputer or equivalent computers are likely to be systems biology for drug discovery and docking simulations; redundancy in biological systems, which makes drug targeting difficult; and combinatorial targeting, which will be very computationally intensive.

The WTEC team’s hosts also addressed the barriers in systems biology. They noted that researchers in this field still lack a good methodology for inferring a network from data. Experimentalists need to understand the needs and limits of computation, and computationalists need to understand what experimentalists can do. The significant questions include: How can we mathematically characterize what kinds of experiments can actually be done? Can experiments be designed with computational needs and capabilities in mind? Important milestones will include 3D models of cells, heterogeneous multiscale models, the effects of confined space, and parametric uncertainty with thermodynamic constraints.


