HPC Forum 2015, Session B-1: R&D 100 Award Commemorative Lecture. The Birth of the Warm-Water-Cooled Supercomputer HP Apollo 8000, Told by Its Development Engineer. Nicolas Dube, Ph.D


HP Japan event: HPC Forum 2015, Track B (Community Track)
HP Tech Power Club, 3rd Scale-Out Working Group
How do you design a datacenter to draw less power?
The birth of the warm-water-cooled supercomputer

Hewlett-Packard Company
Distinguished Technologist
Nicolas Dube, Ph.D

  1. HPC Forum 2015: The Front Line of Technical Computing ~HP-CAST Japan~. R&D 100 Award Commemorative Lecture: The Warm-Water-Cooled Supercomputer HP Apollo 8000, the Story of Its Birth, from Its Development Engineer. April 24, 2015. Hewlett-Packard Company, Distinguished Technologist, Nicolas Dube, Ph.D
  2. Fact: the share of renewable energy worldwide grew from 1.12% to 2.3% between 1990 and 2010. => Therefore the world is more environmentally friendly?
  3. Worldwide Energy Situation: not quite. The bad side of ratios: the renewables percentage is growing, but absolute dependency on fossil fuels is still increasing.
  4. IT Energy Consumption: just the beginning of a BIG problem. USA 2013: 10,560 MW of continuous datacenter power, i.e. 10,560 * 1,000 kW * 24 * 365 ≈ 92 billion kWh/year. Sources: Report to Congress on Server and Data Center Energy Efficiency, August 2007; DataCenter Dynamics Census 2013.
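The headline number on this slide is easy to verify: a constant draw in megawatts times the hours in a year gives annual energy. A quick sketch using only the slide's own figure:

```python
# Annual energy from a constant power draw (figure from the slide).
us_datacenter_mw = 10_560            # US datacenter power, 2013
hours_per_year = 24 * 365

annual_kwh = us_datacenter_mw * 1_000 * hours_per_year
print(f"{annual_kwh / 1e9:.1f} billion kWh")   # -> 92.5, the slide's ~92 billion
```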
  5. How big is that? 38,840 MW... For comparison, the Three Gorges Dam: 22,500 MW.
  6. How much energy is that? Worldwide datacenter power: 38,840 MW. That is equivalent to: 78 million people, about the population of Germany (at 2 kW per person on average); 8.5 million houses (US average consumption); or 31 million electric cars at 100 km/day. In the US alone: 10,560 MW. 39% of US energy is coal, and coal generates ~1 kg CO2-eq/kWh ("clean coal" is ~0.85 kg CO2-eq/kWh), so 4,118,000 kW * 24 * 365 = 36 billion kWh, or 36 million tons of CO2 emissions per year: the carbon footprint of 9 million people (at the world average of 4 tons per capita). Source: Make IT Green: Cloud Computing and its Contribution to Climate Change, Greenpeace, March 30, 2010.
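The coal arithmetic re-derives the same way, again using only the slide's own inputs:

```python
# Re-deriving the slide's CO2 estimate for US datacenters.
us_dc_kw = 10_560 * 1_000        # 10,560 MW in kW
coal_share = 0.39                # 39% of US energy is coal
kg_co2_per_kwh = 1.0             # ~1 kg CO2-eq per coal-generated kWh
world_avg_tons_per_capita = 4

coal_kwh = us_dc_kw * coal_share * 24 * 365        # ~36 billion kWh/year
tons_co2 = coal_kwh * kg_co2_per_kwh / 1_000       # ~36 million tons/year
print(tons_co2 / world_avg_tons_per_capita / 1e6)  # -> ~9 (million people)
```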
  7. Assess Situation
  8. You can't optimize what you can't measure. Power Usage Effectiveness (PUE), the first widely accepted datacenter metric (2006): PUE = Total Datacenter Power / IT Systems Power.
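As a function the metric is trivial; the hard part is metering both terms. A minimal sketch, with illustrative numbers that are not from the slide:

```python
def pue(total_datacenter_kw: float, it_systems_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 is the ideal; everything above it is overhead."""
    return total_datacenter_kw / it_systems_kw

# Illustrative facility: 1 MW of IT load plus 290 kW of cooling,
# power-distribution and lighting overhead.
print(pue(1_290, 1_000))  # -> 1.29
```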
  9. The "holy grail" of datacenter operators? [Chart: PUE scale from 0 to 2, marking the 2009 median, today's average, best practice, and NREL + Apollo; facility power broken down into lighting, networking, power distribution, cooling, and compute.] PUE = Total Datacenter Power / IT Systems Power.
  10. Design Datacenter to Minimize Cooling Energy
  11. Datacenter face-off: liquid moves energy more efficiently. [Diagram: moving the same heat takes a 0.58 bhp fan versus a 0.05 bhp pump at 4 gpm.]
  12. Datacenter Cooling Systems
      Chillers: 0.5–1.2 kW/ton; outlet water temp typically 7–12°C
      Evaporative towers: 0.05–0.1 kW/ton plus make-up water; outlet within 3–5°C of the "wet-bulb" temperature
      Dry coolers: 0.05–0.1 kW/ton; outlet within 3–5°C of the "dry-bulb" temperature
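To see why the kW/ton column matters, compare annual cooling energy for 1 MW of heat using mid-range values from the list above. The continuous full-load duty cycle is my simplifying assumption, not the slide's:

```python
# Annual cooling energy for 1 MW of heat, mid-range kW/ton values from
# the slide; assumes continuous full-load operation (an assumption).
tons = 1_000 / 3.517                 # 1 MW of heat in refrigeration tons (~284)
hours = 24 * 365

for name, kw_per_ton in [("chillers", 0.85),
                         ("evaporative towers", 0.075),
                         ("dry coolers", 0.075)]:
    gwh = kw_per_ton * tons * hours / 1e6
    print(f"{name}: {gwh:.2f} GWh/year")
# chillers ~2.1 GWh/year vs ~0.19 GWh/year for the ambient options
```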
  13. Tokyo Climate: "climate year" psychrometric chart
  14. Singapore Climate
  15. Stockholm Climate
  16. Tokyo: air-cooled datacenter
  17. Tokyo: Apollo 8000
  18. [image slide]
  19. Energy Savings vs Competition, for a 1 MW system in Tokyo

      System    | % liquid cooled | Water inlet temp | Chiller hours/yr | Compute Energy* | Cooling Energy | Savings | Cooling PUE
      Air       | 0%              | 12°C             | 4547 h           | 8.76 GWh        | 2.54 GWh       | -       | 1.29
      RDHX      | 100%            | 15°C             | 3758 h           | >8.76 GWh       | 2.36 GWh       | 7%      | 1.27
      Apollo 8K | 100%            | 30°C             | 4 h (0 h @ 31°C) | <8.76 GWh       | 0.30 GWh       | 88%     | 1.03
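The "Cooling PUE" column follows directly from the two energy columns; a one-liner to check it:

```python
# Cooling PUE = (compute energy + cooling energy) / compute energy.
compute_gwh = 8.76
for system, cooling_gwh in [("Air", 2.54), ("RDHX", 2.36), ("Apollo 8K", 0.30)]:
    print(system, round((compute_gwh + cooling_gwh) / compute_gwh, 2))
# -> Air 1.29, RDHX 1.27, Apollo 8K 1.03
```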
  20. Energy Savings detailed (Tokyo): 1 MW computing system, energy savings vs air-cooled
      Air-cooled system: 2,540,425 kWh/year; Apollo 8000 system: 305,643 kWh/year
      Annual energy savings: 2,234,782 kWh @ 15 cents = $335,217, i.e. $1,676,086 over 5 years
      Heat re-use: 4,609 hours/year, i.e. 4.6 GWh per 1 MW of compute capability, worth $691,350/year @ 15 cents/kWh, i.e. $3,456,750 over 5 years
      Combined: $5,132,836 in energy savings (1 MW, 5 years of operation)
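The dollar figures re-derive from the slide's kWh inputs and the 15-cent rate:

```python
# Five-year savings from the slide's inputs, at $0.15/kWh.
price = 0.15
air_kwh, apollo_kwh = 2_540_425, 305_643
annual_energy_savings = (air_kwh - apollo_kwh) * price  # $335,217/year
annual_reuse_savings = 4_609 * 1_000 * price            # $691,350/year of reusable heat
print(5 * (annual_energy_savings + annual_reuse_savings))  # -> ~$5,132,836
```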
  21. Peregrine / "Project Apollo": The Birth of Apollo 8000. Building a warm-water-cooled supercomputing platform.
  22. NREL ESIF Concept: the datacenter equivalent of the "visible man". Reveals not just boxes with blinking lights, but the inner workings of the building as well; tour views into the pump room and mechanical spaces; color-coded pipes, LCD monitors.
  23. Key Specifications, NREL ESIF Datacenter
      Warm-water cooling (24°C): water is a much better working fluid than air; pumps trump fans
      Utilize high-quality waste heat (35°C+): over 90% of the IT heat load goes to liquid
      High-voltage power distribution: 480 VAC, eliminating conversions
      Think outside the box: don't be satisfied with an energy-efficient data center nestled on a campus of inefficient laboratory and office buildings; innovate, integrate, optimize
      Dashboards report instantaneous, seasonal, and cumulative PUE/ERE values and multiple other metrics
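The slide reports ERE alongside PUE. ERE (Energy Reuse Effectiveness, a Green Grid metric) subtracts exported heat from the facility total, so unlike PUE it can drop below 1.0. A minimal sketch with illustrative numbers, not NREL's:

```python
def ere(total_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    """Energy Reuse Effectiveness: (total facility energy - energy
    reused elsewhere, e.g. for building heat) / IT energy."""
    return (total_kwh - reused_kwh) / it_kwh

# Illustrative: a PUE-1.06 facility exporting a third of its IT heat.
print(ere(total_kwh=1_060, reused_kwh=330, it_kwh=1_000))  # -> 0.73
```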
  24. One wild ride: from concept to production in less than 2 years
      October 2011: first "concept" rack
      January 2012: NREL issues RFP for HVAC + warm-water-cooled HPC platform
      March 2012: HP submits RFP response to NREL
      July 2012: NREL awards contract to HP
      November 2012: initial delivery: compute, storage, switches
      February 19, 2013: delivery of prototype system (4 IT racks, 2 CDUs)
      February 26, 2013: 200 TF acceptance test passed
      August 2013: delivery of final Peregrine system (11 IT racks, 6 CDUs)
      September 11, 2013: Energy Secretary Ernest Moniz dedication event
      September 15, 2013: Peregrine goes into production
  25. NREL Prototype
  26. Doing it the "usual" way: plumbing, January 2013
  27. Facilities plumbing: round 2
  28. Redefining the deployment model: plumbing, August 2013
  29. Cooling Distribution Unit: improved modularity and serviceability
  30. NREL Peregrine: in production since September 2013
  31. Next-generation ultra-efficient HPC system, in production at the DoE National Renewable Energy Laboratory (NREL): the first HPC data center dedicated solely to advancing energy systems integration, renewable energy research, and energy efficiency technologies. A new ultra-energy-efficient, petascale HPC system:
      $1 million in annual energy savings and cost avoidance through efficiency improvements
      Petascale (one million billion calculations/second)
      Average PUE of 1.06 or better
      Source of heat for ESIF's 185,000 square feet of office and lab space, as well as the walkways
      1 MW of data center power in under 1,000 sq. ft., a very energy-dense configuration
      Designed to support NREL's mission, address research challenges, reduce risks, and accelerate the transformation of our energy system.
  32. HP Apollo 8000
  33. The New HP Apollo 8000 System: advancing the science of supercomputing
      Leading teraflops per rack for accelerated results: 4x the teraflops/sq. ft. of air-cooled systems; more than 250 teraflops/rack
      Efficient liquid cooling without the risk: 40% more FLOPS/watt and 28% less energy than air-cooled systems; dry-disconnect servers; intelligent Cooling Distribution Unit (iCDU) monitoring and isolation
      Redefining data center energy recycling: save up to 3,800 tons of CO2/year (790 cars); recycle water to heat the facility
      Target workloads: scientific computing (research computing, climate modeling, protein analysis) and manufacturing (product modeling, simulations, material analysis)
  34. The HP Apollo 8000 family: leading performance density, efficient liquid cooling without the risk
      HP Apollo f8000 Rack with dry-disconnect server trays and the HP Apollo 8000 cooling circuit
      HP ProLiant XL730f (2x2P servers), HP ProLiant XL740f (2P + 2 accelerators), HP ProLiant XL750f (2P + 2 GPUs)
      HP Apollo 8000 iCDU Rack, HP InfiniBand Switch for the Apollo 8000 System
  35. HP ProLiant XL730f Server (2x2P), front node and rear node; per node: (2) Intel Xeon E5-2600 processors, (16) DDR4 DIMM slots, (1) optional SSD, (1) IB FDR adaptor kit, (1) SUV port
  36. Dry Disconnect Innovation: reducing the risk of liquid cooling
      Water stays isolated from server nodes; heat transfer happens via thermal bus bars
      No "make or break" water connections during node service events
      Integrated controls keep server components within optimal temperature specifications
      Nine dry-disconnect thermal bus bars assemble into a "water wall"
  37. Hybrid Cooling Concept, air side (top-down view of the rack cut-away): cold air enters the front of the trays, warm air exits the nodes at the back, and cool air is regenerated as it flows through the integrated air-to-water heat exchanger (HEX).
  38. HP Apollo 8000 System Technologies [open-door view of 4 f8000 racks, redundant iCDU racks, and the underfloor plumbing kit on a raised floor]
      Dry-disconnect servers: 100% water-cooled components, designed for serviceability
      Management infrastructure: HP iLO 4, IPMI 2.0 and DCMI 1.0; rack-level Advanced Power Manager
      Power infrastructure: up to 80 kW per rack; four 30 A 3-phase 380-480 VAC feeds
      Intelligent Cooling Distribution Unit: 320 kW capacity; integrated controls with active-active failover
      Warm water: closed secondary loop in the CDU, isolated from the open facility loop
  39. From loading dock to Linpack in 13 days! Cyfronet "Prometheus" system installation: 12 compute racks, 3 CDUs, 1,728 servers.
  40. Next Step: Integrate with the Environment
  41. Energy Reuse Loop [diagram]: warm-water-cooled servers feed heat to building heating and snow melting, with a cooling tower or dry cooler for final heat rejection; chiller-less data centers.
  42. R&D: energy-based scheduling and accounting (energy-aware scheduling)
      Real-time energy monitoring infrastructure
      Resource allocation based on forecasted kWh and heat-generation profile
      Post-execution accounting and "billing": energy-based, covering IT + datacenter, with BTU credits for "hot" water generation in winter (see the sketch below)
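The slide only outlines the accounting concept, but it sketches naturally in code. Every name and rate below is hypothetical, not from the slide:

```python
# Hypothetical sketch of energy-based job billing as outlined above.
from dataclasses import dataclass

@dataclass
class JobRecord:
    it_kwh: float        # metered IT energy for the job
    pue: float           # facility overhead factor during the run
    reused_kwh: float    # waste heat exported while the job ran

def bill(job: JobRecord, price_per_kwh: float = 0.15,
         btu_credit_per_kwh: float = 0.05) -> float:
    """Charge IT + datacenter energy; credit 'hot' water generated in winter."""
    charge = job.it_kwh * job.pue * price_per_kwh
    credit = job.reused_kwh * btu_credit_per_kwh  # the slide's "BTU credits"
    return charge - credit

print(bill(JobRecord(it_kwh=500, pue=1.06, reused_kwh=450)))  # -> 57.0
```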
  43. Apollo 8000: Most Innovative Product of 2014; US Department of Energy 2014 Sustainability Award
  44. Thank you. nicdube@hp.com
