TeraGrid and Physics Research

                Ralph Roskies,
               Scientific Director
      Pittsburgh Supercomputing Center
               roskies@psc.edu
                March 20, 2009


High Performance Computing is
    Transforming Physics Research

• TeraGrid ties together the high-end computational
  resources (supercomputing, storage, visualization,
  data collections, science gateways) provided by
  NSF for the nation's researchers.
• Supported by computing and technology experts,
  many of whom have science PhDs and speak the users'
  language.
• World-class facilities, on a much larger scale than
  ever before, present major new opportunities for
  physics researchers to carry out computations that
  would have been infeasible just a few years ago.


TeraGrid Map




Hardware must be heterogeneous
• Different capabilities
• Different vendors

• Potential for a great burden on people trying to use
  more than one system.




Integrated View for Users

• Single sign-on
• Single application form for access (it's free; more on this later)
• Single ticket system (especially useful for problems
  that span systems)
• Coordinated user support (find experts at any site)
• Simplified data movement (e.g., compute in one place,
  analyze in another), as sketched below
• Easy data sharing
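To make the simplified data movement concrete, here is a minimal
sketch (hypothetical host names and paths) of staging output from a
compute site to an analysis site with globus-url-copy, the GridFTP
command-line client TeraGrid deployed for wide-area transfers:

import subprocess

# Hypothetical endpoints: compute at one site, analyze at another.
src = "gsiftp://gridftp.compute-site.example.org/work/user/run42/output.h5"
dst = "gsiftp://gridftp.analysis-site.example.org/scratch/user/output.h5"

# -p 4 requests four parallel TCP streams, which helps throughput
# on wide-area links between sites.
subprocess.run(["globus-url-copy", "-p", "4", src, dst], check=True)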




Diversity of Resources (not exhaustive)
• Very Powerful Tightly Coupled Distributed Memory
  – Trk2a, Texas (TACC): Ranger (62,976 cores, 579 teraflops, 123 TB memory)
  – Trk2b, Tennessee (NICS): Kraken (Cray XT5, 66,048 cores, 608 teraflops,
    over 1 petaflop later in 2009)
• Shared Memory
  – NCSA: Cobalt (Altix, 8 teraflops, 3 TB shared memory)
  – PSC: Pople (Altix, 5 teraflops, 1.5 TB shared memory)
• Clusters with InfiniBand
  – NCSA: Abe (90 teraflops)
  – TACC: Lonestar (61 teraflops)
  – LONI: Queen Bee (51 teraflops)
• Condor Pool (Loosely Coupled)
  – Purdue: up to 22,000 CPUs
• Visualization Resources
  – Purdue: TeraDRE (48 nodes of NVIDIA GPUs)
  – TACC: Spur (32 NVIDIA GPUs)
• Various Storage Resources
Resources to come

• Recognize that science is becoming increasingly data-driven
  (LHC, LSST, …)
• PSC: large shared memory system
• Track2D being competed:
  – A data-intensive HPC system
  – An experimental HPC system
  – A pool of loosely coupled grid computing resources
  – An experimental, high-performance grid test-bed
• Track1 system at NCSA: 10 petaflops peak, 1 petaflop
  sustained on serious applications, in 2011




Some Example Impacts on Physics

(not overlapping with the presentations to follow)




Lattice QCD - MILC Collaboration

• Improved precision on the “standard model”,
  required to uncover new physics
• Need larger lattices, lighter quarks (see the
  cost sketch below)
• Frequent algorithmic improvements
• Use TeraGrid resources at NICS, PSC,
  NCSA, TACC; DOE resources at Argonne,
  NERSC, a specialized QCD machine at
  Brookhaven, and a cluster at Fermilab


Store results with the International Lattice Data Grid (ILDG), an
international organization that provides standards, services, methods,
and tools to facilitate the sharing and interchange of lattice QCD
gauge configurations among scientific collaborations (US, UK, Japan,
Germany, Italy, France, and Australia). http://www.usqcd.org/ildg/
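
Why "larger lattices" strains even these machines: a back-of-the-envelope
sketch (illustrative only, not MILC's code or actual run parameters) of
the memory needed just to hold the gauge field on an L^4 lattice:

# 4 link matrices per site, each a 3x3 complex SU(3) matrix
# = 18 real numbers at 8 bytes each (double precision).
def gauge_field_bytes(L):
    return L**4 * 4 * 18 * 8

for L in (32, 64, 96):
    print(f"L = {L:2d}: {gauge_field_bytes(L) / 1e9:6.1f} GB for gauge links alone")

# Doubling L multiplies the memory (and the work per sweep) by
# 2^4 = 16, and lighter quarks make the Dirac-operator solves
# converge more slowly on top of that.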

Astrophysics - Mike Norman et al., UCSD

• Small (1 part in 10^5) spatial
  inhomogeneities 380,000 years after the
  Big Bang, as revealed by WMAP
  satellite data, get transformed by
  gravitation into the pattern of severe
  inhomogeneities (galaxies, stars, voids,
  etc.) that we see today.
• Uniform meshes won't do; must zoom in
  on dense regions to capture the key
  physical processes: gravitation
  (including dark matter), shock heating,
  and radiative cooling of gas. So they need
  an adaptive mesh refinement scheme (they
  use 7 levels of mesh refinement; a toy
  sketch follows below).
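
To make the idea concrete, here is a minimal 1-D toy of adaptive mesh
refinement (not ENZO's actual scheme; the threshold and density field
are made up): cells whose density exceeds a threshold are recursively
subdivided, so resolution concentrates where the physics is.

# Toy 1-D adaptive mesh refinement: refine any cell whose density
# exceeds a threshold, up to a maximum depth.
MAX_LEVELS = 7      # matches the 7 refinement levels cited above
THRESHOLD = 4.0     # made-up overdensity trigger

def density(x):
    # Stand-in for the simulation's density field: one sharp peak.
    return 1.0 + 10.0 / (1.0 + 1000.0 * (x - 0.3) ** 2)

def refine(x0, x1, level=0):
    """Return a list of (x0, x1, level) leaf cells."""
    mid = 0.5 * (x0 + x1)
    if level < MAX_LEVELS and density(mid) > THRESHOLD:
        return refine(x0, mid, level + 1) + refine(mid, x1, level + 1)
    return [(x0, x1, level)]

cells = []
for i in range(8):                  # coarse base grid of 8 cells
    cells += refine(i / 8, (i + 1) / 8)
print(len(cells), "leaf cells; finest level used:",
      max(level for _, _, level in cells))

Even this toy shows the load-balancing difficulty mentioned on the
next slide: the leaf cells cluster around the density peak, so spreading
them evenly across distributed-memory nodes is awkward.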




The filamentary structure in this simulation, in a
cube 1.5 billion light years on a side, is also
seen in real observations such as the
Sloan Digital Sky Survey.


Astrophysics (cont’d)

• Need large shared memory capabilities for generating initial
  conditions (adaptive mesh refinement is very hard to load-
  balance on distributed memory machines); then the largest
  distributed memory machines (Ranger & Kraken) for the
  simulation; shared memory again for data analysis and
  visualization; and long-term archival storage for
  configurations – so lots of data movement between sites.
• TeraGrid helped make major improvements in the scaling
  and efficiency of the code (ENZO), and in the visualization
  tools, which are being stressed at these data volumes.




Nanoscale Electronic Structure
             (nanoHUB, Klimeck, Purdue)

• Challenge of designing microprocessors and other
  devices with nanoscale components. Need quantum
  mechanics for quantum dots, resonant tunneling diodes,
  and nanowires (see the sketch after this list).
• Largest codes operate at the petascale (NEMO-3D,
  OMEN), using 32,768 cores of Ranger, and generally use
  resources at NCSA, PSC, IU, ORNL, and Purdue.
• Developing modeling and simulation tools and a simple
  user interface (Gateways) for non-expert users.
  nanoHUB.org hosts more than 90 tools; in 2008 it had
  >6,200 users, ran >300,000 simulations, and supported
  44 classes.
• Will benefit from improved metascheduling capabilities to
  be implemented this year in TeraGrid, because users want
  interactive response for the simple calculations.
• Communities develop the Gateways; TeraGrid helps
  interface them to TeraGrid resources.
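
Why quantum mechanics at these scales? A minimal sketch (idealized
particle-in-a-box model, not NEMO-3D or OMEN): confining an electron
to a nanometer-scale box produces discrete energy levels whose spacing
grows as the box shrinks.

# Particle-in-a-box estimate for a cubic "quantum dot" of side L:
# E(nx,ny,nz) = h^2 (nx^2 + ny^2 + nz^2) / (8 m L^2)
H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # electron mass, kg
EV = 1.602e-19        # joules per electron-volt

def level_eV(L_nm, nx=1, ny=1, nz=1):
    L = L_nm * 1e-9
    return H**2 * (nx**2 + ny**2 + nz**2) / (8 * M_E * L**2) / EV

for L_nm in (20, 10, 5):
    e111 = level_eV(L_nm)
    gap = level_eV(L_nm, 2, 1, 1) - e111
    print(f"L = {L_nm:2d} nm: ground state {e111*1000:6.1f} meV, "
          f"first gap {gap*1000:6.1f} meV")

# At L = 5 nm the level spacing rivals thermal energy at room
# temperature (~25 meV), so classical device models break down.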
Aquaporins - Schulten group, UIUC

• Aquaporins are proteins that conduct large
  volumes of water through cell membranes while
  filtering out charged particles like hydrogen
  ions.
• Start with the known crystal structure; simulate
  12 nanoseconds of molecular dynamics of
  over 100,000 atoms, using NAMD (a toy
  integration step is sketched below).
• Water moves through aquaporin channels in
  single file. Oxygen leads the way in. At the
  most constricted point of the channel, the water
  molecule flips. Protons can't do this.
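
For readers unfamiliar with molecular dynamics: each of those 12
nanoseconds is millions of tiny time steps. Below is a toy velocity-
Verlet step in reduced units (illustrative only, not NAMD's engine,
which handles >100,000 atoms with full force fields in parallel):

import numpy as np

# Real MD uses ~1 fs steps, so 12 ns is ~12 million steps.
# This toy uses reduced (dimensionless) units.
DT = 0.01

def velocity_verlet(pos, vel, mass, forces_fn):
    """Advance positions and velocities one time step (arrays (N, 3))."""
    f = forces_fn(pos)
    vel_half = vel + 0.5 * DT * f / mass[:, None]
    pos_new = pos + DT * vel_half
    vel_new = vel_half + 0.5 * DT * forces_fn(pos_new) / mass[:, None]
    return pos_new, vel_new

def toy_forces(pos, k=1.0):
    # Stand-in force field: harmonic pull toward the origin. Real force
    # fields (e.g., CHARMM, used by NAMD) sum bonded, van der Waals,
    # and electrostatic terms.
    return -k * pos

rng = np.random.default_rng(0)
pos = rng.normal(size=(100, 3))    # 100 toy atoms, not 100,000
vel = np.zeros((100, 3))
mass = np.ones(100)
for _ in range(1000):
    pos, vel = velocity_verlet(pos, vel, mass, toy_forces)
print("mean distance from origin after 1000 steps:",
      float(np.linalg.norm(pos, axis=1).mean()))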
Aquaporin Mechanism

Animation referenced by the 2003 Nobel
Prize in Chemistry announcement for the
structure of aquaporins (Peter Agre).

The simulation helped explain how
the structure leads to the function.
Users and Usage




2008 TeraGrid Usage By Discipline




If you're not yet a TeraGrid user and are
constraining your research to fit within
       your local capabilities…

• Consider TeraGrid. Getting time is easy.
• It's free.
• We'll even help you with coding and optimization.
• See www.teragrid.org/userinfo/getting_started.php
• Don’t be constrained by what appears possible today.
  Think about your problem and talk to us.




Training (also free)

March 12 - 13, 2009: Parallel Optimization and Scientific
  Visualization for Ranger
March 19 - 20, 2009: OSG Grid Site Administrators Workshop
March 23 - 26, 2009: PSC/Intel Multi-core Programming and
  Performance Tuning Workshop
March 24, 2009: C Programming Basics for HPC (TACC)
April 13 - 16, 2009: Cray XT5 Quad-core Workshop (NICS)
April 21, 2009: Fortran 90/95 Programming for HPC (TACC)
June 22 - 26, 2009: TeraGrid '09

For the full schedule see: http://www.teragrid.org/eot/workshops.php



Campus Champions Program

•   Campus advocate for TeraGrid and CI
•   TeraGrid ombudsman for local users
•   Training program for campus representatives
•   Quick start-up accounts for campuses
•   TeraGrid contacts for problem resolution
•   Over 31 campuses signed on, more in discussions
•   We’re looking for interested campuses!
     – See Laura McGinnis
