OSG-doc-932

January 15, 2010

www.opensciencegrid.org



Report to the US Department of Energy

December 2009

Miron Livny, University of Wisconsin, PI and Technical Director

Ruth Pordes, Fermilab, Co-PI and Executive Director

Kent Blackburn, Caltech, Co-PI and Council Co-Chair

Paul Avery, University of Florida, Co-PI and Council Co-Chair



Table of Contents

1. Executive Summary
   1.1 What is Open Science Grid?
   1.2 Science enabled by Open Science Grid
   1.3 OSG cyberinfrastructure research
   1.4 Technical achievements in 2009
   1.5 Challenges facing OSG
2. Contributions to Science
   2.1 ATLAS
   2.2 CMS
   2.3 LIGO
   2.4 ALICE
   2.5 D0 at Tevatron
   2.6 CDF at Tevatron
   2.7 Nuclear physics
   2.8 MINOS
   2.9 Astrophysics
   2.10 Structural Biology
   2.11 Multi-Disciplinary Sciences
   2.12 Computer Science Research
3. Development of the OSG Distributed Infrastructure
   3.1 Usage of the OSG Facility
   3.2 Middleware/Software
   3.3 Operations
   3.4 Integration and Site Coordination
   3.5 Virtual Organizations Group
   3.6 Engagement
   3.7 Campus Grids
   3.8 Security
   3.9 Metrics and Measurements
   3.10 Extending Science Applications
   3.11 Scalability, Reliability, and Usability
   3.12 Workload Management System
   3.13 Internet2 Joint Activities
   3.14 ESNET Joint Activities
4. Training, Outreach and Dissemination
   4.1 Training and Content Management
   4.2 Outreach Activities
   4.3 Internet dissemination
5. Participants
   5.1 Organizations
   5.2 Partnerships and Collaborations
6. Cooperative Agreement Performance
7. Publications Using OSG Infrastructure


Sections of this report were provided by the scientific members of the OSG Council, the OSG PIs and Co-PIs, and OSG staff and partners. Paul Avery and Chander Sehgal acted as the editors.

1. Executive Summary

1.1 What is Open Science Grid?


Open Science Grid (OSG) aims to transform processing- and data-intensive science by operating and evolving a cross-domain, self-managed, nationally distributed cyberinfrastructure (see the map of OSG sites below). OSG's distributed facility, composed of laboratory, campus, and community resources, is designed to meet the current and future needs of scientific Virtual Organizations (VOs) at all scales. It provides a broad range of common services and support, a software platform, and a set of operational principles that organize and support users and resources in Virtual Organizations. OSG is jointly funded, until 2011, by the Department of Energy and the National Science Foundation.

Figure: Sites in the OSG Facility

OSG does not own any computing or storage resources. Rather, these are contributed by the members of the OSG Consortium and used both by the owning VO and other VOs. OSG resources are summarized in the table below.

Table: OSG computing resources

Grid-interfaced processing resources on the production infrastructure: 91
Grid-interfaced data storage resources on the production infrastructure: 58
Campus infrastructures interfaced to the OSG: 8 (Clemson, FermiGrid, Purdue, Wisconsin, Buffalo, Nebraska, Oklahoma, SBGrid)
National grids interoperating with the OSG: 3 (EGEE, NDGF, TeraGrid)
Processing resources on the integration infrastructure: 21
Grid-interfaced data storage resources on the integration infrastructure: 9
Cores accessible to the OSG infrastructure: ~54,000
Tape storage accessible to the OSG infrastructure: ~23 petabytes (at the LHC Tier-1 sites)
Disk storage accessible to the OSG infrastructure: ~15 petabytes
CPU wall clock usage of the OSG infrastructure: average of 32,000 CPU-days per day during October 2009

The overall usage of OSG continues to grow, though utilization by each stakeholder varies depending on its needs during any particular interval. Overall use of the facility in 2009 was approximately 240M hours, a 70% increase over the previous year. (Detailed usage plots can be found in the attached document.) During stable normal operations, OSG provides approximately 700,000 CPU wall clock hours per day (~30,000 CPU-days per day), with peaks occasionally exceeding 900,000 CPU wall clock hours per day; approximately 100,000 to 200,000 opportunistic wall clock hours are available daily for resource sharing. The efficiency of use depends on the ratio of CPU to I/O in the applications as well as on current demand. Based on our transfer accounting (which is still in its early days and undergoing in-depth validation), we see over 300 TB of data movement (both intra- and inter-site) on a daily basis, with peaks of 600 TB/day; of this, we estimate 25% is GridFTP-based transfers and the rest is via LAN-optimized protocols.
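As a quick cross-check of the figures quoted above, the following minimal Python sketch converts the quoted daily wall clock hours into CPU-days and backs out the implied previous-year total from the 70% growth figure. The numbers are taken directly from this paragraph; everything else (variable names, the use of Python) is illustrative only.

```python
# Illustrative arithmetic behind the usage figures quoted in the text above.

HOURS_PER_DAY = 24

# Typical and peak daily delivery (CPU wall clock hours per day), as quoted.
typical_daily_hours = 700_000
peak_daily_hours = 900_000

# ~700,000 wall clock hours/day is roughly 29,000-30,000 CPU-days/day.
typical_cpu_days = typical_daily_hours / HOURS_PER_DAY
peak_cpu_days = peak_daily_hours / HOURS_PER_DAY

# A 2009 total of ~240M hours, quoted as a 70% increase over the previous
# year, implies a prior-year total of roughly 240M / 1.7, i.e. ~141M hours.
total_2009_hours = 240e6
growth = 0.70
estimated_prior_year_hours = total_2009_hours / (1 + growth)

print(f"~{typical_cpu_days:,.0f} CPU-days/day typical, ~{peak_cpu_days:,.0f} at peak")
print(f"Implied previous-year usage: ~{estimated_prior_year_hours / 1e6:.0f}M hours")
```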



