1. Executive Summary 3
1.1 What is Open Science Grid? 3
1.2 Science enabled by Open Science Grid 4
1.3 OSG cyberinfrastructure research 5
1.4 Technical achievements in 2009 6
1.5 Challenges facing OSG 7
2. Contributions to Science 9
2.1 ATLAS 9
2.2 CMS 13
2.3 LIGO 17
2.4 ALICE 19
2.5 D0 at Tevatron 19
2.6 CDF at Tevatron 21
2.7 Nuclear physics 27
2.8 MINOS 32
2.9 Astrophysics 33
2.10 Structural Biology 33
2.11 Multi-Disciplinary Sciences 36
2.12 Computer Science Research 36
3. Development of the OSG Distributed Infrastructure 37
3.1 Usage of the OSG Facility 37
3.2 Middleware/Software 39
3.3 Operations 41
3.4 Integration and Site Coordination 42
3.5 Virtual Organizations Group 43
3.6 Engagement 45
3.7 Campus Grids 47
3.8 Security 48
3.9 Metrics and Measurements 50
3.10 Extending Science Applications 51
3.11 Scalability, Reliability, and Usability 52
3.12 Workload Management System 55
3.13 Internet2 Joint Activities 56
3.14 ESNET Joint Activities 57
4. Training, Outreach and Dissemination 59
4.1 Training and Content Management 59
4.2 Outreach Activities 60
4.3 Internet dissemination 62
5. Participants 64
5.1 Organizations 64
5.2 Partnerships and Collaborations 64
6. Cooperative Agreement Performance 67
7. Publications Using OSG Infrastructure 68
OSG does not own any computing or storage resources. Rather, these are contributed by the members of the OSG Consortium and are used both by the owning VO and by other VOs. OSG resources are summarized in the table below.
Number of Grid-interfaced processing resources on the production infrastructure: 91
Number of Grid-interfaced data storage resources on the production infrastructure: 58
Number of Campus Infrastructures interfaced to the OSG: 8 (Clemson, FermiGrid, Purdue, Wisconsin, Buffalo, Nebraska, Oklahoma, SBGrid)
Number of National Grids interoperating with the OSG: 3 (EGEE, NDGF, TeraGrid)
Number of processing resources on the integration infrastructure: 21
Number of Grid-interfaced data storage resources on the integration infrastructure: 9
Number of cores accessible to the OSG infrastructure: ~54,000
Size of tape storage accessible to the OSG infrastructure: ~23 petabytes (at the LHC Tier-1s)
Size of disk storage accessible to the OSG infrastructure: ~15 petabytes
CPU wall clock usage of the OSG infrastructure: average of 32,000 CPU days/day during October 2009
The overall usage of OSG continues to grow, though utilization by each stakeholder varies with its needs during any particular interval. Overall use of the facility in 2009 was approximately 240 million hours, a 70% increase over the previous year. (Detailed usage plots can be found in the attached document.) During stable normal operations, OSG provides approximately 700,000 CPU wall clock hours per day (roughly 30,000 CPU days per day), with peaks occasionally exceeding 900,000 CPU wall clock hours per day; approximately 100,000 to 200,000 opportunistic wall clock hours are available daily for resource sharing. The efficiency of use depends on the ratio of CPU to I/O in the applications as well as on existing demand. Based on our transfer accounting (which is still in its early stages and undergoing in-depth validation), we see over 300 TB of data movement (both intra- and inter-site) daily, with peaks of 600 TB/day; of this, we estimate 25% is GridFTP-based transfers and the rest is via LAN-optimized protocols.
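The capacity figures above are related by straightforward unit arithmetic. The short Python sketch below makes the conversions explicit; the input values are taken from this paragraph, while the derived quantities (CPU days per day, the implied prior-year total, and the GridFTP share of daily transfers) are our own back-of-the-envelope interpretation of how those figures relate, not additional measurements.

# Back-of-the-envelope arithmetic behind the usage figures quoted above.
# Input values come from this report; the derived quantities are illustrative estimates only.

daily_wallclock_hours = 700_000     # typical CPU wall clock hours delivered per day
peak_wallclock_hours = 900_000      # occasional daily peak
annual_hours_2009 = 240e6           # approximate total facility use in 2009
daily_transfer_tb = 300             # typical daily data movement, intra- plus inter-site (TB)

cpu_days_per_day = daily_wallclock_hours / 24       # ~29,000, i.e. roughly 30,000 CPU days/day
peak_cpu_days_per_day = peak_wallclock_hours / 24   # ~37,500 CPU days/day at peak
implied_prior_year = annual_hours_2009 / 1.70       # ~141M hours implied by the 70% growth figure
gridftp_tb_per_day = 0.25 * daily_transfer_tb       # ~75 TB/day estimated to move via GridFTP

print(f"Sustained: {cpu_days_per_day:,.0f} CPU days/day; peak: {peak_cpu_days_per_day:,.0f} CPU days/day")
print(f"Implied prior-year usage: {implied_prior_year/1e6:.0f}M hours; GridFTP share: {gridftp_tb_per_day:.0f} TB/day")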