
Grid Computing In Israel



A presentation from the HP-CAST 9 conference, Singapore, May 2008.



  1. Grid Computing and High-Performance Computing in Israel. Guy Tel-Zur, Ph.D., The Israeli Association of Grid Technologies. tel-zur@computer.org, http://www.Grid.org.il
  2. Topics ■ Background: about the country; infrastructure ■ The Academy: IAG; the Technion, the Hebrew Univ., Ben-Gurion Univ. ■ The Industry: IGT ■ The SEPAC Collaboration
  3. Background
  4. This means ~500,000 PCs. Overall computing power: ~500 TFLOPS
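The headline figure is easy to sanity-check. The sketch below assumes roughly 1 GFLOPS of sustained performance per desktop PC, an assumed figure that does not appear on the slide:

```python
# Back-of-the-envelope check of the slide's estimate.
# The ~1 GFLOPS sustained per PC is an assumption, not from the slide.
pcs = 500_000
gflops_per_pc = 1.0
total_tflops = pcs * gflops_per_pc / 1_000   # 1 TFLOPS = 1,000 GFLOPS
print(f"~{total_tflops:.0f} TFLOPS")
```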
  5. Network Infrastructure
  6. ILAN Statistics: 2006 and 2008
  7. Israel: IUCC – The Inter-University Computation Center
  8. The Academy
  9. The Israel Academic Grid (IAG) • http://iag.iucc.ac.il/ • Funded by the MOST (Ministry of Science) • Steering & Technical Committees • Coordinates the Israeli activity in EGEE, EGI and IsraGrid • IUCC is the Certificate Authority (CA) for the IAG
  10. EGEE III ■ May 1st, 2008 to April 30th, 2010 ■ Budget reduced by about 50% ■ Subject to strict FP7 regulations ■ SA1 to be handled through IsraGrid ■ Vision of EGI: the formation of National Grid Initiatives (NGIs), which unite efforts within each country and provide a single point of contact for coordinated efforts
  11. IsraGrid ■ A national committee recommended last July to establish a national Grid computing infrastructure ■ Awaiting final approval by the Government ■ To be used only for R&D purposes ■ To be used by all the academic institutes and the Israeli high-tech industry ■ Secured access ■ Managed by the IUCC
  12. MOSIX – a management system targeted at HPC on x86 Linux clusters and multi-cluster organizational grids. Main features: supports parallel processes and batch jobs; automatic resource discovery; adaptive workload distribution by process migration. Outcome: the grid and each cluster perform like a single computer with multiple processors. Guest processes cannot modify resources in hosting nodes
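The slide does not show how MOSIX decides where work runs, but the "single computer" effect of adaptive workload distribution can be illustrated with a toy greedy scheduler. This is an illustrative sketch only; `assign` and the job/node names are invented here and are not MOSIX APIs:

```python
import heapq

def assign(jobs, nodes):
    """Greedy balancing: place each job on the currently least-loaded
    node, loosely mimicking how process migration keeps a multi-cluster
    grid behaving like one machine with many processors."""
    heap = [(0.0, name) for name in nodes]   # (accumulated load, node)
    heapq.heapify(heap)
    placement = {}
    for job, cost in jobs:
        load, name = heapq.heappop(heap)     # least-loaded node right now
        placement[job] = name
        heapq.heappush(heap, (load + cost, name))
    return placement
```

A real system migrates running processes as load changes; this sketch only captures the placement decision.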
  13. The Hebrew University Organizational Grid • 15 MOSIX clusters, ~400 nodes • In life sciences, the medical school, chemistry and computer science • Applications: nanotechnology, molecular dynamics, protein folding, genomics (BLAT, BLAST, SW), meteorological weather forecasting (WRF), Navier-Stokes equations and turbulence (CFD), CPU simulation of new hardware designs (SimpleScalar) • More information at http://www.MOSIX.org
  14. Nanco – a cluster for nanotechnology. Technion Center for Computation in Nanotechnology, Russell Berrie Nanotechnology Institute, Taub Computer Center • 64 dual-processor, dual-core compute nodes (256 cores in total), Opteron Rev. F, 8 GB RAM per node • 2 master nodes (also Opterons) for high availability and redundancy • Fast DDR InfiniBand interconnect (Voltaire switch) • NetApp storage • Provided by Sun, integrated by EMET and Voltaire • SUN and GNU compilers; Voltaire MPI and OpenMPI for parallelization • Most codes are MPI codes, either commercial or self-developed • Operational since summer 2007 • More info at http://phycomp.technion.ac.il/~nanco
  15. Grid Computing at the Technion, Israel Institute of Technology • Distributed Systems Laboratory: http://dsl.cs.technion.ac.il/index.html • Prof. Assaf Schuster – Head • Projects: GMS, SuperLink Online, The Dependable Grid, EGEE, and more
  16. GMS – Grid Monitoring System • Distributively stores all logs of a large batch system in local databases • Applies distributed data mining to the logs • Implemented using Condor • Taken up by Intel's NetBatch team, which started a $3M project
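The idea of mining logs where they are stored and only shipping summaries can be sketched in a few lines. The site names, record format, and the `local_mine`/`merge` helpers are hypothetical stand-ins, not GMS code:

```python
from collections import Counter

# Hypothetical per-site batch logs; in GMS each site would keep
# these in its own local database.
site_logs = {
    "siteA": [("alice", "ok"), ("bob", "failed"), ("alice", "ok")],
    "siteB": [("bob", "ok"), ("bob", "failed")],
}

def local_mine(records):
    """Map step, executed where the log lives: count failed jobs per user."""
    return Counter(user for user, status in records if status == "failed")

def merge(partials):
    """Reduce step: combine the small per-site summaries centrally."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

failures = merge(local_mine(log) for log in site_logs.values())
```

Only the compact `Counter` summaries cross the network, which is the point of mining the logs distributively.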
  17. SuperLink Online – http://bioinfo.cs.technion.ac.il/superlink-online/ • A production portal for geneticists working at hospitals • Submitted tasks contain gene-mapping results from lab experiments • The portal user sees a single computer (!) • Implemented using a hierarchy of Condor pools: the highest (smallest) pool at the Technion (DSL), the lowest (largest) in Madison (GLOW) • In progress: linkage@home and EGEE BioMed implementations
  18. The Dependable Grid • Provides a High Availability (HA) library as a service for any Grid component • HA for the Condor matchmaker with zero lines-of-code changes (!!!) • Part of the Condor 6.8 distribution • Deployed in many large Condor production pools • Plans to develop and support an open-source distribution
  19. Ben-Gurion University of the Negev • Inter-campus Condor pool • Grid Computing
  20. The BGU Condor Pool • Started in 2000 • Today: ~200 processors • Linux & Windows • Campus-wide project • Non-dedicated resources
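For context, work enters a Condor pool like BGU's through a submit description file. A minimal sketch follows; the executable name, file names, and the requirement expression are illustrative, not taken from the BGU setup:

```
universe     = vanilla
executable   = analyze
output       = analyze.$(Process).out
error        = analyze.$(Process).err
log          = analyze.log
requirements = (OpSys == "LINUX")
queue 10
```

Running `condor_submit` on such a file queues ten instances of the job, which the matchmaker places on idle, non-dedicated machines as they become available.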
  21. We plan to build a new Condor pool installation at the Soroka Medical Center in Beer-Sheva
  22. Grid Computing in the Negev – BGU, NRCN. BGU: • A certified EGEE-II production site • A pre-production EGEE site. NRCN: • A small Condor pool of 40 processors, part of the IGT Grid Lab • A member of the SEPAC Grid Collaboration
  23. Parallel Processing Education ■ Cluster made of virtual machines (Xen) ■ “Classic” tools: MPI, OpenMP ■ “Modern” tools: Star-P, gridMathematica ■ Grid computing practice: Condor, GILDA and UNICORE ■ Final projects on a variety of subjects: parallel image processing, a parallel Game of Life, Map/Reduce, Monte Carlo…
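A Monte Carlo final project of the kind listed above is embarrassingly parallel: independent workers sample points and a single reduction combines the counts. A minimal sketch, in which a thread pool stands in for cluster nodes (the `sample`/`monte_carlo_pi` names are invented for this example; a course project would distribute the same chunks via MPI or Condor):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sample(seed, n):
    """One worker's share: count random points that land
    inside the unit quarter-circle."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def monte_carlo_pi(workers=4, n_per_worker=100_000):
    # Each worker gets its own seed so runs are reproducible and
    # the streams are independent; the sum is the reduction step.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(sample, range(workers), [n_per_worker] * workers))
    return 4.0 * hits / (workers * n_per_worker)
```

The area ratio of the quarter-circle to the unit square is π/4, so four times the hit fraction estimates π.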
  24. The IDIP Group – Inter-Disciplinary Digital Image Processing. Research areas: Scientific Computing and Data Analysis; Physics and Engineering; Optimization; Applied Imaging Science • BGU members belong to a variety of departments from the faculties of Engineering and Exact Sciences and the School of Medicine • Collaborations with various parties in academia and industry • More than 20 research students
  25. Multi-scale geometric methods for filament detection in 3D • Development of state-of-the-art tools • Due to the typically large size of real 3D images and the high dimensionality of the coefficient space, the computational and storage complexity is very high
  26. The Israeli Association of Grid Technologies (IGT)
  27. IGT Members
  28. IGT Work Groups • Grid Data Centers & Labs Utilization – Peter Weinstein, IGT Lab Manager • Grid SOA – Ronen Yochpaz, CTO, VeNotion • Grid HPC – Dr. Guy Tel-Zur, NRCN • Grid Application Server – Nati Shalom, CTO, GigaSpaces • Grid RDMA – Asaf Somekh, Voltaire • Grid Virtualization – Niran Even Chen, BenefIT
  29. IGT Web Site – Knowledge Sharing and Networking • 16,500 visitors per month, 75% from the US • 1 GB of downloads per month
  30. IGT2008 – World Summit of Cloud Computing, December 1-2, 2008, Hertzelia, Israel. Speakers: Christophe Bisciglia, Senior Software Engineer, Google, creator of Google's Academic Cloud Computing Initiative (ACCI) • Simone Brunozzi, Web Services Evangelist, Amazon Web Services • Paul Strong, Distinguished Research Scientist, eBay • Dr. Owen O'Malley, Yahoo!, Hadoop Architect and Apache VP for Hadoop
  31. IGT2008 – World Summit of Cloud Computing, December 1-2, 2008, Hertzelia, Israel. Speakers: Steve Rubinow, CIO, NYSE Euronext, the largest exchange in the world • Dr. Frank Baetke, Global HPC-Technology Program Manager, HPCD, Richardson/Munich • Dr. Yaron Wolfsthal, Senior Manager, Reliable System Technologies, IBM Research Lab in Haifa (HRL), on the IBM and EU joint research initiative for cloud computing, RESERVOIR
  32. The SEPAC Collaboration. SEPAC, the Southern European Partnership for Advanced Computing, is a multi-national Grid cooperation initiated by major South-European High Performance Computing centers. The objective is to build a Grid as a highly reliable application framework based on open interfaces, providing a consistent and easy-to-use interface for scientists and researchers in distributed heterogeneous environments
  33. The SEPAC Grid
  34. SEPAC Future • We are looking for more sites to join us • http://www.sepac-grid.org • Cloud Computing Layer
  35. http://www.facebook.com/group.php?gid=8450870046
  36. Thanks to… ■ Prof. David Horn, TAU, Head of the IAG ■ Mr. Avner Agom, IGT General Manager ■ Prof. Amnon Barak, CS Dept., HUJI ■ Mr. Eddie Aharonovich, CS Dept., TAU ■ Dr. Anne Weill, The Technion ■ Dr. Ofer Levi, Ben-Gurion Univ. ■ The SEPAC Collaboration
  37. Questions? References: • Condor at BGU: http://www.ee.bgu.ac.il/~tel-zur/condor/ • “An Introduction to Parallel Processing” course at BGU: http://www.ee.bgu.ac.il/~tel-zur/teaching/2008B • Grid Computing at BGU: http://www.ee.bgu.ac.il/~tel-zur/grid.html • IGT: http://www.grid.org.il • EGI: http://web.eu-egi.eu/