Simulation-based engineering and science




Simulation Software

The amount of in-house development here is probably typical: packages are usually modified and supplemented rather than written from scratch, though a substantial amount of in-house development does take place.

Issues to address in the coming years include black-box PDE solvers into which probability densities are fed to obtain a risk range for the solution. These black boxes need to be re-engineered for today’s problems; solvers built for single deterministic inputs cannot simply be reused. Multilevel discretizations and adaptivity of the black boxes are needed, current approaches will become insufficient as dimensionality grows, and the transparency of the black boxes must be increased.
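The black-box idea can be made concrete with a minimal forward uncertainty-propagation sketch in Python. Everything below is hypothetical and illustrative: the 1D diffusion solver stands in for an arbitrary deterministic black box, the log-normal input density is assumed, and a production approach would use multilevel sampling and adaptive resolution rather than a single fixed grid.

    # Hypothetical sketch: propagate an input probability density through a
    # deterministic "black-box" PDE solver by Monte Carlo sampling, and report
    # a risk range for the quantity of interest (QoI).
    import numpy as np

    def black_box_solver(diffusivity, n_cells=64):
        """Solve -k u'' = 1 on [0, 1] with u(0) = u(1) = 0 and return the
        midpoint value of u as the quantity of interest."""
        h = 1.0 / n_cells
        n = n_cells - 1                      # interior nodes
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * diffusivity / h**2
        b = np.ones(n)
        u = np.linalg.solve(A, b)
        return u[n // 2]

    rng = np.random.default_rng(0)
    samples = rng.lognormal(mean=0.0, sigma=0.3, size=2000)   # assumed input density
    qoi = np.array([black_box_solver(k) for k in samples])

    lo, hi = np.percentile(qoi, [5, 95])     # central 90% "risk range" of the solution
    print(f"QoI mean {qoi.mean():.4f}, 90% range [{lo:.4f}, {hi:.4f}]")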

At the science level there is not enough expertise in computer science embedded within science disciplines, and this is hurting our ability to develop new simulation software.

Algorithms and software have provided new tools for many disciplines and problems, such as strongly correlated materials; an example is the density of plutonium. Such problems could not be solved by Monte Carlo (MC) methods before, but they can now be handled on a laptop using the continuous-time MC solvers of Troyer, which can be applied to many materials that were previously out of reach. Plutonium is the most strongly correlated simple metal around, so there are huge opportunities and progress in faster MC methods for quantum mechanics. Quantum Monte Carlo has no problem dealing with bosons, but there are big problems with fermions: the underlying (sign) problem is NP-hard, so classical computers cannot solve fermion problems in general. The new methods do not scale well on vector and other machines, but they are already millions of times faster than previous methods due to algorithmic advances.

We have problems with code longevity when students who developed the code leave.



Big Data/Visualization

So much of science is now data-driven. We have so much data that we are not looking at it all; less than 1% of all the data gathered is ever examined. Throughout the field there are not enough PhD students, and those we do have are too weak in computational training. This will require a major effort in data-driven simulations.

Big data is very important to high-energy physics. There is an extremely good visualization group at CSCS, Manno. Jean Favre is first rate and has a highly competent team.

Engineering Design

The ABB collaboration and Fichtner's organization are much better placed to answer these questions.



Next-Generation and HPC

The WTEC team’s hosts pointed out that we (the field) don’t always exploit the full power of our computers. Algorithms and hardware are equally important: “Of 12 orders of magnitude in speedup, 6 came from algorithms.” This is a notion emphasized by Norm Schryer at least as far back as 1980. It’s the people that matter.

When asked what they would do with increased computing power, our hosts commented that error increases as well, and this must be considered. For example, will 64-bit precision be enough for grand challenge problems? For some problems, yes, for many, no.
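As a small, generic illustration of the point (not an example from the hosts), rounding error in single precision grows with problem size, so a precision that is adequate at modest scale may not be adequate at grand-challenge scale:

    # Illustrative only: naive single-precision accumulation loses accuracy as the
    # number of terms grows, while accumulating in double precision does not.
    import numpy as np

    n = 10_000_000
    x = np.full(n, 0.1, dtype=np.float32)
    exact = n * 0.1

    sum32 = float(np.cumsum(x)[-1])               # sequential sum kept in float32
    sum64 = float(np.sum(x, dtype=np.float64))    # same data, 64-bit accumulator

    print(f"float32 accumulation: {sum32:.1f} (relative error {abs(sum32 - exact) / exact:.1e})")
    print(f"float64 accumulation: {sum64:.1f} (relative error {abs(sum64 - exact) / exact:.1e})")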

Problems that could be attacked with petascale computing include blood perfusion, cancer, socioeconomic systems (e.g., modeling entire countries down to the level of individual customers), traffic flow (e.g., in Switzerland), problems in neurobiology (e.g., reengineering the structure of the brain), treating molecules embedded in solvents, and problems in seismology (including mantle dynamics).

Giardini mentioned the CGI project in the United States. The EU has a roadmap workshop on e-infrastructure for seismology. A 40k-processor machine is used at SDSU for visualization. Seismology is a data-driven discipline; many simulations are behind the design of the Yucca Mountain facility. The next challenge is real-time data streams: determining, for example, the location and time of an earthquake before the shaking arrives. Now we want to run scenarios using data-driven dynamic simulations. The goal is a 5-second warning of an event (e.g., an earthquake), and this goal is driving all seismology simulations. Seismology is now half modeling and half experiment, which is a big change from the past.
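Where a few seconds of warning can come from is easy to sketch (a back-of-the-envelope illustration with assumed wave speeds and processing delay, not a description of any operational system): the faster P wave is detected and processed before the damaging S wave arrives.

    # Back-of-the-envelope earthquake early warning (illustrative numbers only).
    VP, VS = 6.0, 3.5        # assumed P- and S-wave speeds in km/s
    PROCESSING = 2.0         # assumed seconds to detect, locate, and issue an alert

    def warning_time(distance_km):
        """Approximate seconds of warning at a site this far from the hypocenter."""
        return distance_km / VS - distance_km / VP - PROCESSING

    for d in (30, 60, 100):
        print(f"{d:4d} km: about {warning_time(d):4.1f} s of warning")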

The bottleneck to advances in SBE&S remains programming. Software still lags behind hardware, and as long as the emphasis is on buying expensive machines rather than on people, this problem will persist.



Education and Training

To fully leverage investment in large computers, we need a matching effort in human resource development to train students and researchers to use the machines and to develop algorithms and software. We need to provide more breadth, at the expense of depth; this is always a source of tension.

Would joint postdocs between science/engineering and computing sciences be a solution? ETH had a program that did this; it was quite successful and produced a high percentage of future faculty.

ETH has several programs for SBE&S education and training. For example, RW is a good program but needs more personnel resources. In the view of one faculty member, the ETH students who seem best prepared for programming tasks come from Physics; this faculty member gives Informatik low marks for preparing students.

Incoming graduate students are poorly prepared, but no worse than at any other comparable institution. In the Mathematics Department there remains a high degree of “chauvinism”: analysis is highly regarded, whereas programmers remain only programmers. Alas, we need to do both extremely well. Interdisciplinary work is considered desirable until hiring or respect is at stake; then we revert to bad old habits. It has been so at other places I've been: Bell Labs and ETH are similar in this regard. My students from Physics have a much more balanced perspective, it seems to me. My Chemistry students seem to be weak programmers. Both Physics and Chemistry students are of very high quality, but in programming, Physics does better.

It's become evident that in recent years, the number of foreign students has increased dramatically. This is also true in Swiss Mittelschule and Gymnasium. The quality of the students, however, remains very high. We now get many Asian students, some of whom are extremely good. The Russian students here are first rate in mathematics skills. Furthermore, many of the Russians have good practical experience.

Where do the CSE students go when they graduate? Banks, insurance, software, pharmaceutical, and oil companies, as well as universities. They have absolutely no problem finding a job. The problem is losing students to industry at the master’s level.

Students should NOT be comfortable using codes that exist. They should be capable and critical.

What is the publication standard in SBE&S? To what extent can the computational experiments be replicated? It is important to train the students to be skeptical and question the results of the computation and their relevance to the scientific or engineering discipline.

We should put more weight on proper software engineering in the CSE curriculum. Open source is an inducement to producing good software that students and other researchers can add to. Maintenance of code is a big problem in the academic environment. There is a bigger role for Computer Science in CSE in the future.

Evaluation of scientific performance is now completely geared towards disciplinary indicators. We should give more value to integration, just as we have always done with depth. How do you assign credit in an interdisciplinary research project?


Site: Fraunhofer Institute for the Mechanics of Materials (IWM)



Wöhlerstrasse 11

79108 Freiburg, Germany

http://www.iwm.fraunhofer.de/englisch/e_index.html
Date Visited: February 29, 2008.
WTEC Attendees: S. Glotzer (report author), L. Petzold, C. Cooper, J. Warren
Hosts: Professor Dr. Peter Gumbsch, Director of the IWM
Email: peter.gumbsch@iwm.fraunhofer.de

Prof. Dr. Hermann Riedel, Leader, Materials-Based Process and Components Simulation; Email: hermann.riedel@iwm.fraunhofer.de



Background

Professor Gumbsch is head of the Fraunhofer Institute for Mechanics of Materials (Institut Werkstoffmechanik, IWM), with locations in Freiburg and Halle/Saale, Germany. He is a full professor in Mechanics of Materials and head of the Institute for Reliability of Systems and Devices (IZBS) at the University of Karlsruhe (TH). Before that he was head of the Research Group for Modelling and Simulation of Thin Film Phenomena at the Max-Planck-Institut für Metallforschung in Stuttgart, Germany. His research interests include materials modeling, mechanics and physics of materials, defects in solids, and failure of materials.

The Fraunhofer Institutes comprise the largest applied research organization in Europe, with a research budget of €1.3 billion and 12,000 employees in 56 institutes. The institutes perform their research through “alliances” in Microelectronics, Production, Information and Communication Technology, Materials and Components, Life Sciences, Surface Technology and Photonics, and Defense and Security. Financing of contract research is by three main mechanisms: institutional funding, public project financing (federal, German Länder, EU, and some others), and contract financing (industry).

The Freiburg-based Fraunhofer Institute for the Mechanics of Materials, hereafter referred to as IWM, has 148 employees (with another 75 based at its other campus in Halle), and a €15.5 million budget (€10.7 million for Freiburg, €4.7 million for Halle). A remarkable 44% of the budget comes from industry. Another 25–30% base funding derives from the government. It is crucial to the funding model of the IWM that base funding is a fixed percentage of the industrial funding. The IWM has seen significant growth in recent years (10% per year), a figure that is currently constrained by the buildings and available personnel. At the IWM fully 50% of the funding supports modeling and simulation (a number that has grown from 30% five years ago).

The IWM has 7 business units: (1) High Performance Materials and Tribological Systems, (2) Safety and Assessment of Components, (3) Components in Microelectronics, Microsystems and Photovoltaics, (4) Materials-Based Process and Components Simulation, (5) Components with Functional Surfaces, (6) Polymer Applications, and (7) Microstructure-Based Behavior of Components.

SBES Research

The IWM has a world-class effort in applied materials modeling. The WTEC team’s hosts presented a canonical example of how automobile crash simulations could be improved: the IWM has developed microscopic, physics-based models of materials performance and inserted these as subroutines into FEM codes such as LS-DYNA, ABAQUS, and PAM-CRASH. The IWM enjoys strong collaborations with German automobile companies, which support this research.
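The flavor of such a coupling can be sketched generically. The routine below is purely illustrative: a scalar-damage stress update of the kind a user material subroutine provides, with made-up parameters and a simplified interface; it does not represent the IWM's models or the actual subroutine interfaces of LS-DYNA, ABAQUS, or PAM-CRASH.

    # Generic sketch of a user-material-style stress update with scalar damage.
    # Constitutive law, parameters, and interface are illustrative only.
    import numpy as np

    E, NU = 210e9, 0.3            # elastic constants (steel-like, assumed)
    EPS_0, EPS_F = 0.002, 0.02    # damage onset / failure strains (assumed)

    def elastic_stiffness(E, nu):
        """Isotropic elasticity matrix in Voigt notation (3D)."""
        lam = E * nu / ((1 + nu) * (1 - 2 * nu))
        mu = E / (2 * (1 + nu))
        C = np.zeros((6, 6))
        C[:3, :3] = lam
        C[np.arange(3), np.arange(3)] += 2 * mu
        C[np.arange(3, 6), np.arange(3, 6)] = mu
        return C

    C0 = elastic_stiffness(E, NU)

    def update_stress(strain, damage):
        """Return (stress, damage) for a strain state (Voigt 6-vector).

        A linear strain-softening damage law degrades the elastic stiffness;
        the host FEM code would call this at every integration point, every step."""
        eps_eq = np.sqrt(2.0 / 3.0 * strain @ strain)   # simple equivalent strain
        if eps_eq > EPS_0:
            d_new = min(1.0, (eps_eq - EPS_0) / (EPS_F - EPS_0))
            damage = max(damage, d_new)                 # damage never heals
        stress = (1.0 - damage) * (C0 @ strain)
        return stress, damage

    # Example call, as the host FEM code might make for one integration point:
    strain = np.array([0.004, -0.001, -0.001, 0.0, 0.0, 0.0])
    stress, dmg = update_stress(strain, damage=0.0)
    print(dmg, stress[:3])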

Research areas that the WTEC team’s hosts discussed with us included modeling of materials (micromechanical models for deformation and failure, damage analysis), simulation of manufacturing processes (pressing, sintering, forging, rolling, reshaping, welding, cutting), and simulation of components (prediction of behavior, upper limits, lifetime, virtual testing).

The IWM has a strong program in metrology at small scales, developing models of failure and fracture and coupling data from nano-indentation with engineering- and physics-based models of the same phenomena. Its models of dislocation-structure evolution seem competitive with state-of-the-art efforts at other institutions. The IWM has a substantial simulation effort in powder processing and sintering, and demonstrated the full process history from filling to densification and the subsequent prediction of materials properties. Not surprisingly, given its strong affiliation with the automobile industry, the IWM has a sophisticated program in modeling springback in stamped components, rolling, forming, and friction.

A detailed, so-called “concurrent,” multiscale model of diamond-like-carbon thin-film deposition was presented. This model is truly multiscale and presents an alternative to some of the other integrated approaches in the literature. The model integrates models from Schrödinger’s equation up to classical models of stress-strain (FEM) and captures such properties as surface topography, internal structure, and adhesion.

Computing Facilities

The IWM employs mostly externally developed codes for its simulations (with some customized in-house codes that couple to the externally developed codes). All the codes run on parallel machines. It was noted that privately and/or industry-funded projects cannot use the national supercomputers. Thus, industry-funded projects are done with small clusters in-house, and the researchers also have access to two clusters (with 256 and 480 nodes) shared among five Freiburg-based institutes. At the largest (non-national) scale, the Fraunhofer institutes have a shared 2000-node cluster, of which the IWM uses approximately 25%. Because of its disproportionate demand for computational power, the IWM plans to purchase its own 300-node machine soon.



Education/Staffing

In our conversations it was observed that it is challenging to find materials scientists with a theoretical background strong enough to make them able simulators. The typical training is in Physics, and materials knowledge must then be acquired through on-site training. Atomistic simulations are uniformly performed by physicists and theoretical chemists. Currently it is difficult to obtain staff skilled in microstructure-level simulation, as they are in high demand by industry.

A particularly novel method for developing students into potential hires was discussed. A one-week recruitment workshop was held in the Black Forest, where twenty €50,000 projects were awarded competitively to fresh PhDs to carry out at the Fraunhofer institute of their choosing. It is expected that once these researchers become established in their new environments, they will craft new research proposals in-house.

Site: IBM Zurich Laboratory, Deep Computing



Säumerstrasse 4

CH-8803 Rüschlikon, Switzerland

http://www.zurich.ibm.com/

http://www.zurich.ibm.com/deepcomputing/
Date Visited: February 28, 2008
WTEC Attendees: L. Petzold (report author), S. Glotzer, J. Warren, C. Cooper, V. Benokraitis
Hosts: Prof. Dr. Wanda Andreoni, Program Manager, Deep Computing Applications
Email: and@zurich.ibm.com

Dr. Alessandro Curioni, Manager Computational Science

Dr. Costas Bekas

Background

Deep Computing (DC) aims at solving particularly complex technological problems faced by IBM, its customers, and its partners by applying (large-scale) advanced computational methods to large data sets. In 1999, an organization called the Deep Computing Institute was created within IBM Research. Its task was to promote and coordinate DC activities, which require advances in hardware, software, and innovative algorithms, as well as the synergy of all three components.

The group that the WTEC visiting team visited at IBM Zurich is Deep Computing Applications. Its head, Dr. Wanda Andreoni, is also a member of the IBM Deep Computing Council, which coordinates all Deep Computing activities at IBM.

R&D Activities

The work of this DC research group is problem-driven and is aimed at patents for new materials or processes. Its researchers do a considerable amount of work with other IBM organizations and with outside companies, and they take a pragmatic view of these collaborations. Where do the topics and collaborations come from? Many address internal needs where the group can make a difference. Industry comes to them for their unique expertise in electronic structure calculation and molecular dynamics; sometimes they seek out industrial contacts themselves. There is now an interest in computational biology, and there are collaborations with Novartis and Merck. Contacts usually arise scientist to scientist, but sometimes they come from higher-level management. Many industries have realized that computational science can help them, and they approach IBM for help. There is also an Industry Solution Lab (ISL) to put customers in contact with the group, although most collaborations come from direct interaction between scientists. The information generated by the ISL has been helpful in identifying some of the computational trends and needs of the future. Members of the group remarked that to really put a method to the test, big problems from industry are invaluable. Applications are selected on the basis that they must be of real research interest to the group and also relevant to the business of IBM. The money comes from the companies or from IBM on a case-by-case basis. The group has 60% core funding from IBM.

There is a big success story here. This group went from a situation where the IBM technology group was coming to it with a supercomputer and asking the team to exploit it, to a situation where the computer architecture is now driven by the needs of important applications. The Blue Gene project is an example. How was this transition accomplished? The key was to first build trust and credibility. They now work directly with the computer architects. They have direct communication with the architects regarding parameters across the architecture. There has been an enormous change in the culture at IBM regarding Computational Science and Engineering (CSE). In the beginning, this DC group was a “luxury group.” Now it is well-integrated. The group believes that the key to their success in the long term is their flexibility and careful choice of projects. They are now resting on a foundation of credibility built in previous years. The group currently consists of 4 permanent researchers and 2-6 students and postdocs. Substantial growth is planned in the near future. The biggest bottleneck to growth is in finding appropriately educated people who will meet their needs.

The group regularly works with students, coming from the areas of physics, chemistry and bioinformatics. Geographically, they come mainly from Switzerland and Germany.

They have found that many of this generation of students do not have a good foundation in programming, programming for performance, or massively parallel computing. Finding very good people for permanent positions is not easy at all; sometimes good people come, and then they get big-money offers from the financial industry and leave. Potential employees need to be able to work well in a team and to love science.

How does this group interact with their counterparts at IBM in the United States? In the US, the major DC research efforts are in systems biology, protein folding, bioinformatics, and machine learning. The computational biology center in Yorktown (around 10 permanent staff plus postdocs) is a virtual center in which this group is a partner. There is also a group in dislocation dynamics in Yorktown.

With regard to future trends in CSE, the group remarked that massively parallel is here to stay, unless a completely new technology comes along. The software dilemma is: where do you focus your attention – general programming methodologies for parallel computers, or getting all the speed you can for a particular application? Algorithmic research is needed in many areas; most current algorithms are not built to scale well to 1000+ processors. One of the problems mentioned was graph partitioning on irregular grids, for massively parallel processors.
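As a toy illustration of that last problem (a sketch only, not the group's methods), recursive coordinate bisection splits the nodes of an irregular grid into balanced parts by cutting along the axis of largest spread; production codes would instead use graph partitioners such as METIS/ParMETIS, which also minimize the edges cut between parts.

    # Toy sketch: recursive coordinate bisection of irregular mesh-node coordinates
    # into balanced parts, a simple stand-in for real graph partitioners.
    import numpy as np

    def bisect(points, ids, n_parts):
        """Recursively split the node indices `ids` into `n_parts` balanced groups,
        cutting along the coordinate axis with the largest extent."""
        if n_parts == 1:
            return [ids]
        axis = np.argmax(points[ids].max(axis=0) - points[ids].min(axis=0))
        order = ids[np.argsort(points[ids, axis])]
        half = len(order) * (n_parts // 2) // n_parts
        return (bisect(points, order[:half], n_parts // 2)
                + bisect(points, order[half:], n_parts - n_parts // 2))

    rng = np.random.default_rng(1)
    nodes = rng.random((10_000, 3))                   # irregular 3D node coordinates
    parts = bisect(nodes, np.arange(len(nodes)), 8)
    print([len(p) for p in parts])                    # roughly equal part sizes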

With regard to CSE education, the group believes that the educational focus in the US is sometimes too broad, while traditionally the European focus has been too narrow; something in the middle seems to be required. Students need to know more than just how to use a code. Many have no basis for judging the quality of the results a code produces, do not understand the methods they are using, do not know the limitations of the codes, and are ill-equipped to write their own. The application scientist needs to understand the simulation methods and the theory and be capable of modifying or extending a code. At the same time, many codes are poorly written and not well documented. Students need to know the science, the algorithms and theory, and programming and software development techniques.

During this visit we also learned about an expanded mission of CECAM++ (see the related CECAM trip report) from Andreoni, who represents Switzerland in the CECAM organization. Our IBM hosts argued that coordination is needed to solve the big problems facing science and engineering, and that the focus of the community should not be simply to produce publications. CECAM has for several decades provided important education and training opportunities in modeling and simulation. A new incarnation of CECAM will return to its initial mission as a coordinating body within the European community. Funds are currently provided by 10 countries, at roughly one million euros per country per year. The dominant community of CECAM has been molecular dynamics, but it will broaden to include computational engineering and applied mathematics. An analogy with KITP was mentioned.

Conclusions

The big story here is the influence of important applications on computer architecture, and the direct collaboration between these two groups. This group has also been very successful in collaborating both internally and with outside industries, and it derives a significant fraction of its funding from projects with those industries. Its success rests on a foundation of credibility and a well-developed area of expertise. The group is slated for considerable growth in the near future; the bottleneck to that growth is the availability of properly educated people who meet its needs.



Site: Imperial College London and Thomas Young Centre
(London Centre for Theory & Simulation of Materials)


South Kensington Campus, London SW7 2AZ, UK

http://www3.imperial.ac.uk/materials

http://www.thomasyoungcentre.org/index.html

http://www3.imperial.ac.uk/materials/research/centres/tyc
Date Visited: February 29, 2008.
WTEC Attendees: M. Head-Gordon (report author), P. Cummings, S. Kim, K. Chong
Hosts: Prof. Mike Finnis, Dept. of Materials, Dept. of Physics
Email: m.finnis@imperial.ac.uk

Prof. Adrian Sutton, Dept. of Materials, Dept. of Physics


Email: a.sutton@imperial.ac.uk

Dr. Peter Haynes, Dept. of Materials, Dept. of Physics


Email: p.haynes@imperial.ac.uk

Dr. Nicholas Harrison, Dept. of Chemistry


Email: nicholas.harrison@imperial.ac.uk

Dr. Patricia Hunt, Dept. of Chemistry


Email: p.hunt@imperial.ac.uk

Dr. Andrew Horsfield, Dept. of Materials


Email: a.horsfield@imperial.ac.uk

Dr. Arash Mostofi, Dept. of Materials, Dept. of Physics


Email: a.mostafi@imperial.ac.uk

Dr. Paul Tangney, Dept. of Materials, Dept. of Physics


Email: p.tangney@imperial.ac.uk

Prof. Robin Grimes, Dept. of Materials


Email: r.grimes@imperial.ac.uk

Background

Imperial College has a very strong tradition in materials science research, through the Department of Materials (in Engineering), the Department of Physics, and to a lesser extent, the Department of Chemistry. The Chair of Materials Theory and Simulation is held by Professor Mike Finnis, who has joint appointments in Materials and Physics. A recent significant development for the materials modeling and simulation community at Imperial, and indeed in London generally, is the establishment of the Thomas Young Centre (TYC), an umbrella organization for London-based activities in this area.


