November 3, 2009
Report to the Dept. of Energy Advanced Scientific Computing Advisory Committee
Title: Project StarGate: An End-to-End 10Gbps HPC-to-User Cyberinfrastructure
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Oak Ridge, TN
1. Project StarGate: An End-to-End 10Gbps HPC-to-User Cyberinfrastructure. ANL * Calit2 * LBNL * NICS * ORNL * SDSC. Report to the Dept. of Energy Advanced Scientific Computing Advisory Committee, Oak Ridge, TN, November 3, 2009. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD. Twitter: lsmarr
7. Opening Up a 10Gbps Data Path: ORNL/NICS to ANL to SDSC. Connectivity provided by the ESnet Science Data Network. End-to-End Coupling of the User with DOE/NSF HPC Facilities.
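For a rough sense of what a 10 Gbps end-to-end path buys, here is a back-of-envelope sketch of dataset transfer time. The 1 TB dataset size and 80% effective link efficiency are illustrative assumptions, not measured StarGate figures.

```python
# Rough transfer-time estimate over a 10 Gbps path like the ESnet
# Science Data Network link described above. Efficiency and dataset
# size are illustrative assumptions.
def transfer_time_s(bytes_to_move: float, link_gbps: float = 10.0,
                    efficiency: float = 0.8) -> float:
    """Seconds to move bytes_to_move at the given line rate and efficiency."""
    bits = bytes_to_move * 8
    return bits / (link_gbps * 1e9 * efficiency)

one_tb = 1e12  # 1 TB dataset (illustrative)
print(f"{transfer_time_s(one_tb) / 60:.1f} minutes per TB")  # ~16.7 minutes
```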
Editor's Notes
Eureka, the visualization cluster at the ALCF: each node has 2 graphics cards, 8 processors, 32 GB RAM, a fast interconnect, and local disk. Peak server FLOPS = 2.0 GHz * 8 cores * 2 flops per clock = 32 GFLOPS per node.
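The peak-FLOPS arithmetic in the note above can be checked directly; this minimal sketch simply restates the 2.0 GHz * 8 cores * 2 flops-per-clock product from the note, with no vendor-spec figures beyond those.

```python
# Back-of-envelope peak FLOPS for one Eureka node, using the figures
# from the note above (2.0 GHz clock, 8 cores, 2 floating-point ops
# per core per clock). These values come from the slide notes.
clock_hz = 2.0e9          # 2.0 GHz
cores_per_node = 8
flops_per_clock = 2       # floating-point ops per core per cycle

peak_flops = clock_hz * cores_per_node * flops_per_clock
print(f"Peak per node: {peak_flops / 1e9:.0f} GFLOPS")  # -> 32 GFLOPS
```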
One of its strengths is its speed and its ability to handle large data sets. The number of processes is a power of two for rendering, plus one for compositing; with 2 graphics cards per node, half as many nodes as render processes are needed. Data I/O is clearly the bottleneck. For an animation of a single time step, the data is loaded only once, so it can be quite fast.
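A minimal sketch of the process-count rule in the note above: a power-of-two number of render processes plus one compositor, mapped onto nodes with 2 graphics cards each. The helper name and interface are hypothetical, for illustration only.

```python
# Hypothetical helper illustrating the layout rule from the note:
# render processes must be a power of two, one extra process does
# compositing, and 2 GPUs per node halves the node count.
import math

def eureka_layout(render_procs: int, gpus_per_node: int = 2):
    """Return (total_procs, nodes) for a render job of render_procs processes."""
    if render_procs < 1 or render_procs & (render_procs - 1):
        raise ValueError("render process count must be a power of two")
    total = render_procs + 1                        # +1 for the compositor
    nodes = math.ceil(render_procs / gpus_per_node) # one render proc per GPU
    return total, nodes

print(eureka_layout(64))  # -> (65, 32): 64 renderers + 1 compositor on 32 nodes
```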