January 13, 2006
Invited Talk
Department of Computer Science
Donald Bren School of Information and Computer Sciences
Title: Metacomputer Architecture of the Global LambdaGrid
Irvine, CA
1. “Metacomputer Architecture of the Global LambdaGrid.” Invited Talk, Department of Computer Science, Donald Bren School of Information and Computer Sciences, University of California, Irvine, January 13, 2006. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
2. Abstract: I will describe my research in metacomputer architecture, a term I coined in 1988, in which one builds virtual ensembles of computers, storage, networks, and visualization devices into an integrated system. Working with a set of colleagues, I have driven development in this field through national and international workshops and conferences, including SIGGRAPH, Supercomputing, and iGrid. Although the vision has remained constant over nearly two decades, it is only the recent availability of dedicated optical paths, or lambdas, that has enabled the vision to be realized. These lambdas enable the Grid program to be completed, in that they add the network elements to the compute and storage elements, all of which can be discovered, reserved, and integrated by Grid middleware to form global LambdaGrids. I will describe my current research in the four grants on which I am PI or co-PI: OptIPuter, Quartzite, LOOKING, and CAMERA, which both develop the computer science of LambdaGrids and couple intimately to application drivers in biomedical imaging, ocean observatories, and marine microbial metagenomics.
5. The First Metacomputer: NSFnet and the Six NSF Supercomputers. [Map: the 56 Kb/s NSFNET backbone (1986-88) linking NCSA, PSC, NCAR, CTC, JVNC, and SDSC.]
7. From Metacomputer to TeraGrid and OptIPuter: 15 Years of Development. [Timeline: “Metacomputer” coined by Smarr in 1988; TeraGrid PI; OptIPuter PI.]
8. Long-Term Goal: Dedicated Fiber Optic Infrastructure. Using Analog Communications to Prototype the Digital Future (Illinois to Boston, SIGGRAPH 1989). “What we really have to do is eliminate distance between individuals who want to interact with other people and with other computers.” ― Larry Smarr, Director, NCSA. “We’re using satellite technology… to demo what it might be like to have high-speed fiber-optic links between advanced computers in two different geographic locations.” ― Al Gore, Senator, Chair, US Senate Subcommittee on Science, Technology and Space
9. NCSA Web Server Traffic Increase Led to NCSA Creating the First Parallel Web Server. [Graph, 1993-1995: peak was 4 million hits per week!] Data source: Software Development Group, NCSA; graph: Larry Smarr
12. The NCSA Alliance Research Agenda: Create a National-Scale Metacomputer. “The Alliance will strive to make computing routinely parallel, distributed, collaborative, and immersive.” ― Larry Smarr, CACM Guest Editor. Source: Special Issue of Comm. ACM, 1997
16. Challenge: Average Throughput of NASA Data Products to End User is < 50 Mbps (tested October 2005; http://ensight.eos.nasa.gov/Missions/icesat/index.shtml). The Internet2 backbone is 10,000 Mbps, so throughput to the end user is < 0.5% of backbone capacity!
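The 0.5% figure on this slide follows directly from its two numbers; a quick sketch of the arithmetic (values taken from the slide):

```python
# Illustrative check of the slide's claim: end-user throughput of NASA data
# products vs. Internet2 backbone capacity.
end_user_mbps = 50        # measured average delivered to end users (< 50 Mbps)
backbone_mbps = 10_000    # Internet2 backbone capacity (10 Gbps)

fraction = end_user_mbps / backbone_mbps
print(f"End users see at most {fraction:.1%} of backbone capacity")
# -> End users see at most 0.5% of backbone capacity
```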
17. Each Optical Fiber Can Now Carry Many Parallel Light Paths, or “Lambdas” (WDM). Source: Steve Wallach, Chiaro Networks
18. States are Acquiring Their Own Dark Fiber Networks: Illinois’s I-WIRE and Indiana’s I-LIGHT (1999). Today there are two dozen state and regional optical networks. Source: Larry Smarr, Rick Stevens, Tom DeFanti, Charlie Catlett
19. From “Supercomputer-Centric” to “Supernetwork-Centric” Cyberinfrastructure. [Graph: bandwidth of NYSERNet research network backbones (megabit/s to gigabit/s to terabit/s) vs. computing speed (GFLOPS). Optical WAN research bandwidth has grown much faster than supercomputer speed: the network went from T1 to 32x10Gb “lambdas” while computers went from the 1 GFLOP Cray2 to the 60 TFLOP Altix.] Data source: Timothy Lance, President, NYSERNet
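The growth factors implied by the slide's endpoints can be put side by side. The Cray2, Altix, and 32x10 Gb figures are from the slide; the 1.544 Mbps rate for a T1 line is a standard figure assumed here, not stated on the slide:

```python
# Compare compute growth vs. research-network bandwidth growth using the
# slide's endpoints. T1 = 1.544 Mbps is an assumed standard value.
compute_growth = 60e12 / 1e9            # 1 GFLOP Cray2 -> 60 TFLOP Altix
network_growth = (32 * 10e9) / 1.544e6  # T1 -> 32 x 10 Gb/s lambdas

print(f"compute grew ~{compute_growth:,.0f}x")   # ~60,000x
print(f"network grew ~{network_growth:,.0f}x")   # ~207,000x, over 3x faster
```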
22. End User Device: Tiled Wall Driven by OptIPuter Graphics Cluster. Source: Mark Ellisman, OptIPuter co-PI
23. Campuses Must Provide Fiber Infrastructure to End-User Laboratories & Large Rotating Data Stores. [Diagram: UCSD Campus LambdaStore Architecture, with two 10 Gbps campus lambda raceways linking the SIO Ocean Supercomputer, an IBM storage cluster, and a streaming microscope to the Global LambdaGrid.] Source: Phil Papadopoulos, SDSC, Calit2
24. OptIPuter@UCI is Up and Working. [Network diagram, created 09-27-2005 by Garrett Hildebrand, modified 11-03-2005 by Jessica Yu: a 10 GE DWDM network line runs from the Tustin CENIC CalREN POP to the ONS 15540 WDM at the UCI campus MPOE (CPL), then via 10 GE to a Catalyst 3750 in CSI. In the Engineering Gateway Building, a Catalyst 3750 in the 3rd-floor IDF connects to the MDF Catalyst 6500 (with firewall, 1st-floor closet) and to Catalyst 6500s on floors 2-4; a Catalyst 3750 in the NACS machine room (OptIPuter) and the ESMF serve the Viz Lab, with the Calit2 Building HIPerWall reached via UCInet. Wave-1 (1 GE): UCSD address space 137.110.247.242-246, NACS-reserved for testing. Wave-2 (1 GE): layer-2 GE, UCSD address space 137.110.247.210-222/28. A 1 GE DWDM network line to Los Angeles connects to the UCSD OptIPuter network. Kim: jitter measurements this week!]
25. OptIPuter Software Architecture: a Service-Oriented Architecture Integrating Lambdas Into the Grid. [Layer diagram, top to bottom: Distributed Applications/Web Services (Telescience, Vol-a-Tile, SAGE, JuxtaView); Visualization and Data Services (LambdaRAM); the Distributed Virtual Computer (DVC) API, DVC Runtime Library, and DVC Configuration; DVC Services (Job Scheduling, Communication, Resource Identify/Acquire, Namespace Management, Security Management, High-Speed Communication, Storage Services) over DVC Core Services; Globus XIO, GRAM, and GSI; transport protocols GTP, XCP, UDT, LambdaStream, CEP, and RBUDP; and, at the bottom, IP over lambdas with discovery and control (PIN/PDC) and RobuStore.]
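The DVC layer's role, as the slide presents it, is to let an application discover, reserve, and bind compute, storage, and lambda resources as one virtual machine. A minimal sketch of that flow, with every class and method name invented for illustration (the slide names only the layers, not an API):

```python
# Hypothetical sketch of a DVC-style discover/reserve flow. Names such as
# Resource and DistributedVirtualComputer are illustrative, not the real API.
from dataclasses import dataclass, field

@dataclass
class Resource:
    kind: str        # "compute", "storage", or "lambda"
    name: str
    reserved: bool = False

@dataclass
class DistributedVirtualComputer:
    resources: list = field(default_factory=list)

    def discover(self, inventory, kind):
        # Grid middleware step: find unreserved resources of the given kind.
        return [r for r in inventory if r.kind == kind and not r.reserved]

    def reserve(self, resource):
        # Bind the resource into this virtual ensemble.
        resource.reserved = True
        self.resources.append(resource)

# Usage: assemble a DVC from a cluster, a storage farm, and a dedicated lambda
# (resource names below are made up for the example).
inventory = [
    Resource("compute", "ucsd-cluster"),
    Resource("storage", "lambdastore"),
    Resource("lambda", "ucsd-uci-10GE"),
]
dvc = DistributedVirtualComputer()
for kind in ("compute", "storage", "lambda"):
    dvc.reserve(dvc.discover(inventory, kind)[0])
print([r.name for r in dvc.resources])
# -> ['ucsd-cluster', 'lambdastore', 'ucsd-uci-10GE']
```

The point of the sketch is the abstract's claim in miniature: the lambda is reserved through exactly the same discover/reserve path as compute and storage, which is what makes it a first-class Grid element.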
27. NSF is Launching a New Cyberinfrastructure Initiative (www.ctwatch.org). “Research is being stalled by ‘information overload,’ Mr. Bement said, because data from digital instruments are piling up far faster than researchers can study them. In particular, he said, campus networks need to be improved. High-speed data lines crossing the nation are the equivalent of six-lane superhighways, he said. But networks at colleges and universities are not so capable. ‘Those massive conduits are reduced to two-lane roads at most college and university campuses,’ he said. Improving cyberinfrastructure, he said, ‘will transform the capabilities of campus-based scientists.’” ― on Arden Bement, the director of the National Science Foundation
29. The “Access Grid” Was Developed by the Alliance for Multi-Site Collaboration. [Photo: Access Grid talk with 35 locations on 5 continents, SC Global Keynote, Supercomputing ’04.] Remaining problems are video quality of service and IP multicasting.
30. Multiple HD Streams Over Lambdas Will Radically Transform Global Collaboration. [Photo: U. Washington JGN II Workshop, Osaka, Japan, Jan 2005; pictured: Prof. Osaka, Prof. Aoyama, Prof. Smarr.] Telepresence using uncompressed 1.5 Gbps HDTV streaming over IP on fiber optics: 75x the bandwidth of home cable “HDTV”! Source: U. Washington Research Channel
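A back-of-envelope check of the slide's "75x" figure. The 1.5 Gbps uncompressed stream rate is from the slide; the ~20 Mbps rate for a compressed home-cable HD channel is an assumption introduced here for illustration:

```python
# Ratio of uncompressed HDTV telepresence bandwidth to a home cable HD
# channel. The 20 Mbps cable figure is assumed, not from the slide.
uncompressed_gbps = 1.5
home_cable_mbps = 20   # assumed compressed HD channel rate

ratio = (uncompressed_gbps * 1000) / home_cable_mbps
print(f"Uncompressed HDTV uses ~{ratio:.0f}x the bandwidth of home cable HD")
# -> Uncompressed HDTV uses ~75x the bandwidth of home cable HD
```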
31. Partnering with NASA to Combine Telepresence with Remote Interactive Analysis of Data Over National LambdaRail. [Photo, August 8, 2005: HDTV over lambda and OptIPuter-visualized data linking SIO/UCSD and NASA Goddard.] www.calit2.net/articles/article.php?id=660
33. First Trans-Pacific Super High Definition Telepresence Meeting in New Calit2 Digital Cinema Auditorium Lays Technical Basis for Global Digital Cinema. [Photo: Keio University President Anzai and UCSD Chancellor Fox; partners Sony, NTT, and SGI.]
34. The OptIPuter-Enabled Collaboratory: Remote Researchers Jointly Exploring Complex Data. The OptIPuter will connect the Calit2@UCI 200-megapixel wall to the Calit2@UCSD 100-megapixel display, with shared fast deep storage (“SunScreen,” run by a Sun Opteron cluster).
37. First Remote Interactive High Definition Video Exploration of Deep Sea Vents: a Canadian-U.S. collaboration. Source: John Delaney & Deborah Kelley, U. Washington
41. Evolution is the Principle of Biological Systems: Most of Evolutionary Time Was in the Microbial World. [Tree-of-life diagram (“You Are Here”): much of genome work has occurred in animals.] Source: Carl Woese, et al.
42. Calit2 Intends to Jump Beyond Traditional Web-Accessible Databases. [Diagram: today, requests and responses pass through a web portal that pre-filters and queries metadata against a data backend (DB, files) such as BIRN, PDB, or NCBI GenBank, plus many others.] Source: Phil Papadopoulos, SDSC, Calit2
43. Data Servers Must Become Lambda-Connected to Allow for Direct Optical Connection to End-User Clusters. [Diagram: the traditional user still issues requests and responses through a web portal to a flat-file server farm and web services; the local environment’s cluster instead gets direct-access lambda connections over a 10 GigE fabric into an OptIPuter cluster cloud containing a database farm (0.3 PB) and a dedicated compute farm (1000 CPUs), with the TeraGrid (10,000s of CPUs) as a cyberinfrastructure backplane for scheduled activities, e.g. all-by-all comparison.] Source: Phil Papadopoulos, SDSC, Calit2
45. Calit2/SDSC Proposal to Create a UC Cyberinfrastructure of OptIPuter “On-Ramps” to TeraGrid Resources. OptIPuter + CalREN-XD + TeraGrid = “OptiGrid,” spanning all ten UC campuses (Berkeley, Davis, Irvine, Los Angeles, Merced, Riverside, San Diego, San Francisco, Santa Barbara, Santa Cruz) and creating a critical mass of end users on a secure LambdaGrid. Source: Fran Berman, SDSC; Larry Smarr, Calit2