High-Performance and Scalable Designs of Programming Models for Exascale Systems
Keynote Talk at HPCAC, Switzerland (April 2017)
by
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
High-End Computing (HEC): Towards Exascale-Era
Expected to have an ExaFlop system in 2021!
150-300 PFlops in 2017-18? 1 EFlops in 2021?
• Scientific Computing
– Message Passing Interface (MPI), including MPI + OpenMP, is the Dominant
Programming Model
– Many discussions towards Partitioned Global Address Space (PGAS)
• UPC, OpenSHMEM, CAF, UPC++ etc.
– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Big Data/Enterprise/Commercial Computing
– Focuses on large data and data analysis
– Spark and Hadoop (HDFS, HBase, MapReduce)
– Memcached is also used for Web 2.0
Three Major Computing Categories
Parallel Programming Models Overview
Figure: three abstract machine models, each with processes P1, P2, P3 –
• Shared Memory Model (SHMEM, DSM): all processes operate on one shared memory
• Distributed Memory Model (MPI – Message Passing Interface): each process has its own memory
• Partitioned Global Address Space (PGAS) (Global Arrays, UPC, Chapel, X10, CAF, …): per-process memories presented as a logical shared memory
• Programming models provide abstract machine models
• Models can be mapped on different types of systems
– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and Hybrid MPI+PGAS models are gradually gaining importance
Partitioned Global Address Space (PGAS) Models
• Key features
- Simple shared memory abstractions
- Light weight one-sided communication
- Easier to express irregular communication
• Different approaches to PGAS
- Languages
• Unified Parallel C (UPC)
• Co-Array Fortran (CAF)
• X10
• Chapel
- Libraries
• OpenSHMEM
• UPC++
• Global Arrays
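The light-weight, one-sided communication that these PGAS approaches emphasize can be illustrated with a minimal OpenSHMEM sketch (a hypothetical example, not taken from the talk; assumes an OpenSHMEM 1.2+ implementation such as the one shipped with MVAPICH2-X):

#include <shmem.h>
#include <stdio.h>

int main(void)
{
    shmem_init();
    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    /* symmetric (remotely accessible) allocation, same layout on every PE */
    int *table = (int *) shmem_malloc(npes * sizeof(int));

    /* one-sided put: deposit my rank directly into PE 0's copy of table */
    shmem_int_put(&table[me], &me, 1, 0);

    shmem_barrier_all();                  /* completes and orders all puts */
    if (me == 0)
        for (int i = 0; i < npes; i++)
            printf("table[%d] = %d\n", i, table[i]);

    shmem_free(table);
    shmem_finalize();
    return 0;
}

Because the put is one-sided, PE 0 posts no matching receive; the barrier provides the only synchronization.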
Hybrid (MPI+PGAS) Programming
• Application sub-kernels can be re-written in MPI/PGAS based on communication
characteristics
• Benefits:
– Best of Distributed Computing Model
– Best of Shared Memory Computing Model
Figure: an HPC application composed of kernels (Kernel 1, Kernel 2, Kernel 3, …, Kernel N), all initially written in MPI; kernels whose communication characteristics favor it (e.g. Kernel 2 and Kernel N) are re-written in PGAS, while the remaining kernels stay in MPI.
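A hedged sketch of what such a hybrid kernel mix can look like is shown below (hypothetical kernels; the exact initialization and interoperability rules are implementation-specific – a unified runtime such as MVAPICH2-X is what makes mixing the two models in one job practical):

#include <mpi.h>
#include <shmem.h>

int main(int argc, char **argv)
{
    /* NOTE: ordering/interop of the two initializations is library-specific */
    MPI_Init(&argc, &argv);
    shmem_init();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int right = (shmem_my_pe() + 1) % shmem_n_pes();

    /* "Kernel 1": bulk-synchronous phase expressed with an MPI collective */
    double local = rank, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* "Kernel 2": irregular phase expressed with a one-sided OpenSHMEM put */
    static double halo;                     /* static => symmetric variable */
    shmem_double_p(&halo, global, right);   /* deposit into right neighbor  */
    shmem_barrier_all();

    shmem_finalize();
    MPI_Finalize();
    return 0;
}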
Designing Communication Middleware for Multi-Petaflop
and Exaflop Systems: Challenges
Figure: the middleware co-design stack –
• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), Multi/Many-core Architectures, and Accelerators (NVIDIA and MIC)
Co-design opportunities and challenges cut across these layers: performance, scalability, and fault-resilience.
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
• Scalable Collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated Support for GPGPUs and Accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM,
MPI+UPC++, CAF, …)
• Virtualization
• Energy-Awareness
Broad Challenges in Designing Communication Middleware for (MPI+X) at
Exascale
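As a concrete instance of the non-blocking, offload-friendly collectives listed above, the hedged sketch below overlaps an MPI_Iallreduce with independent computation (compute_independent_work() is a hypothetical placeholder):

#include <mpi.h>
#include <stdio.h>

static void compute_independent_work(void) { /* hypothetical local compute */ }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    double in[4] = {1, 2, 3, 4}, out[4];

    MPI_Request req;
    MPI_Iallreduce(in, out, 4, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    compute_independent_work();          /* communication/computation overlap;
                                            offload-capable fabrics can progress
                                            the reduction in the network */
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* reduction complete; out[] is valid */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) printf("sum[0] = %g\n", out[0]);

    MPI_Finalize();
    return 0;
}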
• Extreme Low Memory Footprint
– Memory per core continues to decrease
• D-L-A Framework
– Discover
• Overall network topology (fat-tree, 3D, …), Network topology for processes for a given job
• Node architecture, Health of network and node
– Learn
• Impact on performance and scalability
• Potential for failure
– Adapt
• Internal protocols and algorithms
• Process mapping
• Fault-tolerance solutions
– Low overhead techniques while delivering performance, scalability and fault-tolerance
Additional Challenges for Designing Exascale Software Libraries
Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,750 organizations in 83 countries
– More than 413,000 (> 0.4 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Nov ‘16 ranking)
• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
• 13th, 241,108-core (Pleiades) at NASA
• 17th, 462,462-core (Stampede) at TACC
• 40th, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->
– Sunway TaihuLight (1st in Jun’16, 10M cores, 100 PFlops)
MVAPICH2 Release Timeline and Downloads
Figure: cumulative number of downloads from the OSU site, Sep 2004 through Mar 2017 (approaching 450,000), annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 2.2/2.3a and the MV2-GDR, MV2-MIC, MV2-Virt, and MV2-X variants.
Architecture of MVAPICH2 Software Family
Figure: layered view of the software family –
• High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, and introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPower, Xeon Phi (MIC, KNL), NVIDIA GPGPU)
• Transport protocols (RC, XRC, UD, DC), transport mechanisms (shared memory, CMA, IVSHMEM), and modern features (UMR, ODP, SR-IOV, multi-rail, MCDRAM*, NVLink*, CAPI*) (* upcoming)
MVAPICH2 Software Family
High-Performance Parallel Programming Libraries
• MVAPICH2 – Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
• MVAPICH2-X – Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs
• MVAPICH2-Virt – High-performance and scalable MPI for hypervisor and container based HPC cloud
• MVAPICH2-EA – Energy aware and High-performance MPI
• MVAPICH2-MIC – Optimized MPI for clusters with Intel KNC
Microbenchmarks
• OMB – Microbenchmarks suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
Tools
• OSU INAM – Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
• OEMT – Utility to measure the energy consumption of MPI applications
• Released on 03/29/2017
• Major Features and Enhancements
– Based on and ABI compatible with MPICH 3.2
– Support collective offload using Mellanox's SHArP for Allreduce
• Enhance tuning framework for Allreduce using SHArP
– Introduce capability to run MPI jobs across multiple InfiniBand subnets
– Introduce basic support for executing MPI jobs in Singularity
– Enhance collective tuning for Intel Knights Landing and Intel Omni-Path
– Enhance process mapping support for multi-threaded MPI applications
• Introduce MV2_CPU_BINDING_POLICY=hybrid
• Introduce MV2_THREADS_PER_PROCESS
– On-demand connection management for PSM-CH3 and PSM2-CH3 channels
– Enhance PSM-CH3 and PSM2-CH3 job startup to use non-blocking PMI calls
– Enhance debugging support for PSM-CH3 and PSM2-CH3 channels
– Improve performance of architecture detection
– Introduce run time parameter MV2_SHOW_HCA_BINDING to show process to HCA binding
• Enhance MV2_SHOW_CPU_BINDING to enable display of CPU bindings on all nodes
– Deprecate OFA-IB-Nemesis channel
– Update to hwloc version 1.11.6
MVAPICH2 2.3a
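The process-mapping enhancements for multi-threaded MPI applications above target hybrid MPI+OpenMP ranks; below is a minimal hedged sketch of such a rank (the environment variables in the comment are the ones named in these release notes; the thread count and launch command are illustrative only):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* e.g. launch (illustrative):
     *   MV2_CPU_BINDING_POLICY=hybrid MV2_THREADS_PER_PROCESS=4 \
     *       mpirun_rsh -np <ranks> <hosts> ./a.out
     * so that each rank's OpenMP threads are bound to their own cores */
    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}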
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication
– Support for advanced IB mechanisms (ODP)
– Extremely minimal memory footprint (DCT)
– Scalable Job Start-up
– Dynamic and Adaptive Tag Matching
– Collective Support with SHArP
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI +
UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• InfiniBand Network Analysis and Monitoring (INAM)
• Virtualization and HPC Cloud
Overview of A Few Challenges being Addressed by the MVAPICH2
Project for Exascale
One-way Latency: MPI over IB with MVAPICH2
Figures: small-message and large-message one-way latency vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path – small-message latencies across the five fabrics are 1.01, 1.04, 1.11, 1.15, and 1.19 us.
Platforms: TrueScale-QDR – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-3-FDR – 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-4-EDR – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; Omni-Path – 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch.
Bandwidth: MPI over IB with MVAPICH2
Figures: unidirectional and bidirectional bandwidth vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path – peak unidirectional bandwidths across the five fabrics are 3,373, 6,356, 12,083, 12,366, and 12,590 MBytes/sec; peak bidirectional bandwidths are 6,228, 12,161, 21,227, 21,983, and 24,136 MBytes/sec.
Platforms: same Haswell/IvyBridge PCI Gen3 nodes and switches as listed for the latency results above.
MVAPICH2 Two-Sided Intra-Node Performance
(Shared memory and Kernel-based Zero-copy Support (LiMIC and CMA))
Figures (Intel Ivy-bridge): latency vs. message size (0–1K bytes) – 0.18 us intra-socket, 0.45 us inter-socket; intra-socket and inter-socket bandwidth vs. message size for the CMA, Shmem, and LiMIC channels, peaking at 14,250 MB/s and 13,749 MB/s.
Minimizing Memory Footprint by Direct Connect (DC) Transport
Figure: eight processes (P0–P7) spread across four nodes communicating over the IB network.
• Constant connection cost (One QP for any peer)
• Full Feature Set (RDMA, Atomics etc)
• Separate objects for send (DC Initiator) and receive (DC
Target)
– DC Target identified by “DCT Number”
– Messages routed with (DCT Number, LID)
– Requires same “DC Key” to enable communication
• Available since MVAPICH2-X 2.2a
Figures: (left) normalized execution time of NAMD (Apoa1, large data set) on 160–620 processes for RC, DC-Pool, UD, and XRC; (right) connection memory (KB, log scale) for Alltoall on 80–640 processes – DC-Pool and UD keep the footprint small and nearly constant, while the connection-oriented transports grow with process count (up to ~97 KB at 640 processes).
H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT)
of InfiniBand : Early Experiences. IEEE International Supercomputing Conference (ISC ’14)
• Near-constant MPI and OpenSHMEM
initialization time at any process count
• 10x and 30x improvement in startup time
of MPI and OpenSHMEM respectively at
16,384 processes
• Memory consumption reduced for
remote endpoint information by
O(processes per node)
• 1GB Memory saved per node with 1M
processes and 16 processes per node
Towards High Performance and Scalable Startup at Exascale
Figure: job startup performance and memory required to store endpoint information – state-of-the-art PGAS (P) and MPI (M) startup compared with the optimized PGAS/MPI design (O); the incremental optimizations (labeled a–e in the figure) are PMIX_Ring, PMIX_Ibarrier, PMIX_Iallgather, shmem-based PMI, and on-demand connection establishment.
On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D K
Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS ’15)
PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D K Panda, Proceedings of the 21st
European MPI Users' Group Meeting (EuroMPI/Asia ’14)
Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D K Panda, 15th
IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid ’15)
SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D K Panda, 16th
IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid ’16)
Scalable Job Startup with Non-blocking PMI
Figures: time taken for Hello World and MPI_Init vs. number of processes – (left) MV2-2.0 with default SLURM vs. MV2-2.1 with optimized SLURM, 16–16K processes; (right) Hello World and MPI_Init times, 64–64K processes.
• Near-constant MPI_Init at any scale
• MPI_Init is 59 times faster
• Hello World (MPI_Init + MPI_Finalize) takes 5.7 times
less time with 8,192 processes (512 nodes)
• 16,384 processes, 1,024 Nodes (Sandy Bridge + FDR)
• Non-blocking PMI implementation in mpirun_rsh
• On-demand connection management for PSM
and Omni-path
• Efficient intra-node startup for Knights Landing
• MPI_Init takes 5.8 seconds
• Hello World takes 21 seconds
• 65,536 processes, 1,024 Nodes (KNL + Omni-Path)
Co-designing Resource Manager and MPI Library (left); Resource Manager Independent Design (right)
New designs available in MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1
• Introduced by Mellanox to avoid pinning the pages of registered memory regions
• ODP-aware runtime could reduce the size of pin-down buffers while maintaining performance
• Available in MVAPICH2-X 2.2
On-Demand Paging (ODP)
M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda, “Designing MPI Library with On-Demand Paging (ODP) of
InfiniBand: Challenges and Benefits”, SC, 2016
Figure: execution time (seconds, log scale) of applications with 64 processes, comparing the pin-down and ODP-based designs.
Dynamic and Adaptive Tag Matching
Normalized Total Tag Matching Time at 512 Processes
Normalized to Default (Lower is Better)
Normalized Memory Overhead per Process at 512 Processes
Compared to Default (Lower is Better)
Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]
Challenge: tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.
Solution: a new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases its decisions on the number of request objects that must be traversed before finding the required one.
Results: better performance than other state-of-the-art tag-matching schemes with minimal memory consumption. Will be available in future MVAPICH2 releases.
Advanced Allreduce Collective Designs Using SHArP
osu_allreduce (OSU Micro Benchmark) using MVAPICH2 2.3a (*PPN: Processes Per Node)
Figures: Allreduce latency for MVAPICH2 vs. MVAPICH2-SHArP – latency vs. message size (4–256 bytes) on 16 nodes at 4 PPN and 28 PPN, and latency for 32-byte and 128-byte messages on 4, 8, and 16 nodes at 4 and 28 PPN; SHArP lowers latency by up to 2.3x at 4 PPN and up to 1.4–1.5x at 28 PPN.
Advanced Allreduce Collective Designs Using SHArP (application level, based on MVAPICH2 2.3a)
Figures: (left) mesh refinement time of MiniAMR and (right) average DDOT Allreduce time of HPCG for (number of nodes, PPN) = (4,28), (8,28), (16,28), comparing MVAPICH2 and MVAPICH2-SHArP – up to 13% lower MiniAMR mesh refinement time and up to 12% lower HPCG DDOT Allreduce time.
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI +
UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• InfiniBand Network Analysis and Monitoring (INAM)
• Virtualization and HPC Cloud
Overview of A Few Challenges being Addressed by the MVAPICH2
Project for Exascale
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resource
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
UPC++ Support in MVAPICH2-X
Figure: MPI + {UPC++} application stack – (left) conventional approach: the UPC++ interface over GASNet interfaces with an MPI network conduit and a separate MPI runtime; (right) MVAPICH2-X approach: the UPC++ runtime and MPI interfaces share the Unified Communication Runtime (UCR).
• Full and native support for hybrid MPI + UPC++ applications
• Better performance compared to IBV and MPI conduits
• OSU Micro-benchmarks (OMB) support for UPC++
• Available since MVAPICH2-X (2.2rc1)
Figure: inter-node broadcast time (64 nodes, 1 ppn) vs. message size (1K–1M bytes) for GASNet_MPI, GASNET_IBV, and MV2-X – up to 14x improvement for MV2-X over the GASNet MPI-conduit-based runtime.
Application Level Performance with Graph500 and Sort
Graph500 Execution Time
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models,
International Supercomputing Conference (ISC’13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation,
Int'l Conference on Parallel Processing (ICPP '12), September 2012
Figure: Graph500 execution time vs. number of processes (4K, 8K, 16K) for the MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM) designs – the hybrid design is up to 7.6x faster than MPI-Simple at 8K processes and 13x faster at 16K processes.
• Performance of Hybrid (MPI+ OpenSHMEM) Graph500 Design
• 8,192 processes
- 2.4X improvement over MPI-CSR
- 7.6X improvement over MPI-Simple
• 16,384 processes
- 1.5X improvement over MPI-CSR
- 13X improvement over MPI-Simple
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned
Global Address Space Programming Models (PGAS '13), October 2013.
Sort Execution Time
Figure: execution time for (input data, number of processes) = (500GB, 512), (1TB, 1K), (2TB, 2K), (4TB, 4K), comparing MPI and Hybrid designs – up to 51% improvement.
• Performance of Hybrid (MPI+OpenSHMEM) Sort
Application
• 4,096 processes, 4 TB Input Size
- MPI – 2408 sec; 0.16 TB/min
- Hybrid – 1172 sec; 0.36 TB/min
- 51% improvement over MPI-design
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI +
UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
– CUDA-aware MPI
– GPUDirect RDMA (GDR) Support
– Preliminary Evaluation on OpenPower
• InfiniBand Network Analysis and Monitoring (INAM)
• Virtualization and HPC Cloud
Overview of A Few Challenges being Addressed by the MVAPICH2
Project for Exascale
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(data movement to and from the GPU is handled inside MVAPICH2)
• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers
High Performance and High Productivity
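A slightly fuller, hedged sketch of the same idea (hypothetical buffer sizes; with MVAPICH2-GDR the job is run with MV2_USE_CUDA=1 as in the runtime settings shown later in this talk):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                              /* 1M floats */
    float *devbuf;
    cudaMalloc((void **)&devbuf, n * sizeof(float));    /* GPU-resident buffer */

    /* the device pointer is passed straight to MPI; the CUDA-aware library
     * moves the data (pipelining, GDR, IPC, ...) internally */
    if (rank == 0)
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}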
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point communication
(GPU-GPU, GPU-Host and Host-GPU)
• High performance intra-node point-to-point communication for multi-GPU
adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node
communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from
GPU device buffers
• Unified memory
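As an illustration of the datatype support listed above, here is a hedged sketch of sending one strided column of a GPU-resident matrix with an MPI vector datatype (a hypothetical example; assumes a CUDA-aware build such as MVAPICH2-GDR):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int rows = 1024, cols = 1024;
    double *dmat;                                       /* row-major device matrix */
    cudaMalloc((void **)&dmat, rows * cols * sizeof(double));

    MPI_Datatype column;                                /* one element per row, stride = cols */
    MPI_Type_vector(rows, 1, cols, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0)
        MPI_Send(dmat, 1, column, 1, 0, MPI_COMM_WORLD);    /* column 0 of the matrix */
    else if (rank == 1)
        MPI_Recv(dmat, 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&column);
    cudaFree(dmat);
    MPI_Finalize();
    return 0;
}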
Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)
Figures: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size for MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR – small-message latency down to 2.18 us, with improvements of up to 10–11x over MV2 without GDR and 2–3x over MV2-GDR 2.0b across the three metrics.
Platform: MVAPICH2-GDR 2.2 on an Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 8.0, and Mellanox OFED 3.0 with GPU-Direct RDMA.
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HoomdBlue Version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768
MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768
MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (HOOMD-blue)
Figures: average time steps per second (TPS) on 4–32 processes for 64K-particle and 256K-particle runs, comparing MV2 and MV2+GDR – about 2x higher TPS with GDR.
Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland
Figures: normalized execution time vs. number of GPUs for the Default, Callback-based, and Event-based designs – (left) CSCS GPU cluster, 16–96 GPUs; (right) Wilkes GPU cluster, 4–32 GPUs.
• 2X improvement on 32 GPUs nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data
Movement Processing on Modern GPU-enabled Systems, IPDPS’16
On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application
Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
MVAPICH2-GDR on OpenPower (Preliminary Evaluation)
Intra-node numbers (basic configuration): POWER8NVL-ppc64le @ 4.023 GHz, MVAPICH2-GDR 2.2, MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=32768
Figures: osu_bw [H-D] (1 byte–4M), osu_latency [D-D] (1 byte–8K), and osu_latency [H-D] (1 byte–64K), each comparing GDR=1 GDRCOPY=0 with GDR=1 GDRCOPY=1.
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI +
UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• Energy-Awareness
• InfiniBand Network Analysis and Monitoring (INAM)
• Virtualization and HPC Cloud
Overview of A Few Challenges being Addressed by the MVAPICH2
Project for Exascale
Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the
MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• OSU INAM v0.9.1 released on 05/13/16
• Significant enhancements to user interface to enable scaling to clusters with thousands of nodes
• Improve database insert times by using 'bulk inserts'
• Capability to look up list of nodes communicating through a network link
• Capability to classify data flowing over a network link at job level and process level granularity in conjunction with
MVAPICH2-X 2.2rc1
• “Best practices” guidelines for deploying OSU INAM on different clusters
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA
• Ability to filter data based on type of counters using “drop down” list
• Remotely monitor various metrics of MPI processes at user specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network
Comet@SDSC --- Clustered View
(1,879 nodes, 212 switches, 4,377 network links)
Finding Routes Between Nodes
OSU INAM Features (Cont.)
Visualizing a Job (5 Nodes)
• Job level view
• Show different network metrics (load, error, etc.) for any live job
• Play back historical data for completed jobs to identify bottlenecks
• Node level view - details per process or per node
• CPU utilization for each rank/node
• Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
• Network metrics (e.g. XmitDiscard, RcvError) per rank/node
Estimated Process Level Link Utilization
• Estimated Link Utilization view
• Classify data flowing over a network link at
different granularity in conjunction with
MVAPICH2-X 2.2rc1
• Job level and
• Process level
• Scalability for million to billion processors
• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI +
UPC, CAF, UPC++, …)
• Integrated Support for GPGPUs
• InfiniBand Network Analysis and Monitoring (INAM)
• Virtualization and HPC Cloud
Overview of A Few Challenges being Addressed by the MVAPICH2
Project for Exascale
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• Has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root – IO Virtualization) support available with
Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.1 (with and without OpenStack) is publicly available
Can HPC and Virtualization be Combined?
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based
Virtualized InfiniBand Clusters? EuroPar'14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D.K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand
Clusters, HiPC’14
J. Zhang, X .Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build
HPC Clouds, CCGrid’15
• Redesign MVAPICH2 to make it
virtual machine aware
– SR-IOV shows near to native
performance for inter-node point to
point communication
– IVSHMEM offers zero-copy access to
data on shared memory of co-resident
VMs
– Locality Detector: maintains the locality
information of co-resident virtual machines
– Communication Coordinator: selects the
communication channel (SR-IOV, IVSHMEM)
adaptively
Overview of MVAPICH2-Virt with SR-IOV and IVSHMEM
Figure: MVAPICH2-Virt design on a host – two guest VMs, each running an MPI process in user space with a VF driver in kernel space bound to an SR-IOV Virtual Function of the InfiniBand adapter (SR-IOV channel); co-resident VMs additionally share an IVSHMEM region exposed through /dev/shm (IV-Shmem channel); the hypervisor manages the adapter's Physical Function through the PF driver.
J. Zhang, X. Lu, J. Jose, R. Shi, D. K. Panda. Can Inter-VM
Shmem Benefit MPI Applications on SR-IOV based
Virtualized InfiniBand Clusters? Euro-Par, 2014.
J. Zhang, X. Lu, J. Jose, R. Shi, M. Li, D. K. Panda. High
Performance MPI Library over SR-IOV Enabled InfiniBand
Clusters. HiPC, 2014.
• OpenStack is one of the most popular
open-source solutions to build clouds and
manage virtual machines
• Deployment with OpenStack
– Supporting SR-IOV configuration
– Supporting IVSHMEM configuration
– Virtual Machine aware design of MVAPICH2
with SR-IOV
• An efficient approach to build HPC Clouds
with MVAPICH2-Virt and OpenStack
MVAPICH2-Virt with SR-IOV and IVSHMEM over OpenStack
J. Zhang, X. Lu, M. Arnold, D. K. Panda. MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build
HPC Clouds. CCGrid, 2015.
Application-Level Performance on Chameleon
Figures: (left) SPEC MPI2007 execution time for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu; (right) Graph500 execution time for problem sizes (Scale, Edgefactor) from (22,20) to (26,16) – comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native.
• 32 VMs, 6 Core/VM
• Compared to Native, 2-5% overhead for Graph500 with 128 Procs
• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs
Containers Support: Application-Level Performance on Chameleon
Figures: (left) Graph 500 execution time for problem sizes (Scale, Edgefactor) from (22,16) to (26,20); (right) NAS (class D: MG, FT, EP, LU, CG) execution time – comparing Container-Def, Container-Opt, and Native.
• 64 Containers across 16 nodes, pinning 4 Cores per Container
• Compared to Container-Def, up to 11% and 16% execution time reduction for NAS and Graph 500
• Compared to Native, less than 9% and 4% overhead for NAS and Graph 500
• Optimized Container support is available in the latest MVAPICH2-Virt
MVAPICH2 Point-to-Point and Collectives Performance with Singularity
Figures: point-to-point latency and bandwidth (intra-node and inter-node, Singularity vs. Native) and Bcast and Allreduce latency (Singularity vs. Native) for message sizes from 1 byte to 64K bytes.
Performance is very close to Native.
High Performance VM Migration Framework for MPI Applications on
SR-IOV enabled InfiniBand Clusters
J. Zhang, X. Lu, D. K. Panda. High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV
enabled InfiniBand Clusters. IPDPS, 2017 (Support will be available in upcoming MVAPICH2-Virt)
• Migration with SR-IOV device has to handle the
challenges of detachment/re-attachment of
virtualized IB device and IB connection
• Consists of an SR-IOV enabled IB cluster and an external Migration Controller
• Multiple parallel libraries to notify MPI
applications during migration (detach/reattach
SR-IOV/IVShmem, migrate VMs, migration status)
• Handle the IB connection suspending and
reactivating
• Propose Progress engine (PE) and migration
thread based (MT) design to optimize VM
migration and MPI application performance
• Scientific Computing
– Message Passing Interface (MPI), including MPI + OpenMP, is the Dominant
Programming Model
– Many discussions towards Partitioned Global Address Space (PGAS)
• UPC, OpenSHMEM, CAF, UPC++ etc.
– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
Major Computing Categories
• Deep Learning frameworks are a different game
altogether
– Unusually large message sizes (order of
megabytes)
– Most communication based on GPU buffers
• How to address these newer requirements?
– GPU-specific Communication Libraries (NCCL)
• NVIDIA's NCCL library provides inter-GPU communication
– CUDA-Aware MPI (MVAPICH2-GDR)
• Provides support for GPU-based communication
• Can we exploit CUDA-Aware MPI and NCCL to
support Deep Learning applications?
Deep Learning: New Challenges for MPI Runtimes
Figure: hierarchical communication (knomial + NCCL ring) – inter-node communication among GPUs uses a knomial tree, while intra-node communication among the GPUs attached to a CPU through PLX switches uses an NCCL ring.
• NCCL has some limitations
– Only works for a single node, thus, no scale-out on
multiple nodes
– Degradation across IOH (socket) for scale-up (within a
node)
• We propose optimized MPI_Bcast
– Communication of very large GPU buffers (order of
megabytes)
– Scale-out on large number of dense multi-GPU nodes
• Hierarchical Communication that efficiently exploits:
– CUDA-Aware MPI_Bcast in MV2-GDR
– NCCL Broadcast primitive
Efficient Broadcast: MVAPICH2-GDR and NCCL
Figures: (left) OSU micro-benchmark broadcast latency (us, log scale) vs. message size (1 byte–128 MB) – MV2-GDR-Opt is up to 100x faster than MV2-GDR for large messages; (right) Microsoft CNTK DL framework training time on 2–64 GPUs – 25% average improvement with MV2-GDR-Opt.
Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning,
A. Awan , K. Hamidouche , A. Venkatesh , and D. K. Panda,
The 23rd European MPI Users' Group Meeting (EuroMPI 16), Sep 2016 [Best Paper Runner-Up]
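At the MPI level the hierarchy above stays hidden; a hedged sketch of the resulting usage (hypothetical buffer size) is simply a CUDA-aware MPI_Bcast on a large GPU-resident parameter buffer:

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    const int nparams = 64 << 20;                       /* 64M floats = 256 MB */
    float *dparams;
    cudaMalloc((void **)&dparams, (size_t)nparams * sizeof(float));

    /* rank 0 holds the initial model; every worker receives it in GPU memory.
     * The optimized design applies the knomial + NCCL-ring hierarchy internally. */
    MPI_Bcast(dparams, nparams, MPI_FLOAT, 0, MPI_COMM_WORLD);

    cudaFree(dparams);
    MPI_Finalize();
    return 0;
}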
Large Message Optimized Collectives for Deep Learning
Figures: latency of Reduce, Allreduce, and Bcast – vs. message size (2–128 MB) at 192 GPUs (Reduce) and 64 GPUs (Allreduce, Bcast), and vs. number of GPUs at a fixed large message (64 MB Reduce on 128–192 GPUs; 128 MB Allreduce and Bcast on 16–64 GPUs).
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with large number of GPUs
• Available with MVAPICH2-GDR 2.2GA
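A hedged sketch of how a data-parallel training loop would exercise these large-message collectives – each worker reduces its GPU-resident gradient buffer across all ranks with a CUDA-aware MPI_Allreduce (hypothetical sizes; the 1/N averaging step is omitted):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    const int ngrad = 32 << 20;                         /* 32M floats = 128 MB */
    float *dgrad;
    cudaMalloc((void **)&dgrad, (size_t)ngrad * sizeof(float));

    /* per iteration: sum the gradients of all workers in place,
     * staying entirely in GPU memory */
    MPI_Allreduce(MPI_IN_PLACE, dgrad, ngrad, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    cudaFree(dgrad);
    MPI_Finalize();
    return 0;
}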
OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
– Multi-GPU training within a single node
– Performance degradation for GPUs across different sockets
– Limited scale-out
• OSU-Caffe: MPI-based parallel training
– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
– Evaluated with the GoogLeNet network on the ImageNet dataset
Figure: GoogLeNet (ImageNet) training time on 8–128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); default Caffe at the largest scale is marked as an invalid use case.
OSU-Caffe is publicly available from: http://hidl.cse.ohio-state.edu
A. Awan, K. Hamidouche, J. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters, PPoPP 2017
MVAPICH2 – Plans for Exascale
• Performance and Memory scalability toward 1M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF …)
• MPI + Task*
• Enhanced Optimization for GPU Support and Accelerators
• Enhanced communication schemes for upcoming architectures
• Knights Landing with MCDRAM*
• NVLINK*
• CAPI*
• Extended topology-aware collectives
• Extended Energy-aware designs and Virtualization Support
• Extended Support for MPI Tools Interface (as in MPI 3.1)
• Extended Checkpoint-Restart and migration support with SCR
• Support for * features will be available in future MVAPICH2 Releases
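For the MPI Tools Interface item above, a minimal hedged sketch of what an MPI_T-aware tool or application can already do with any MPI-3.x library is to enumerate the exposed control and performance variables:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, ncvar = 0, npvar = 0;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);    /* MPI_T may precede MPI_Init */
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);                         /* control variables (tunables) */
    MPI_T_pvar_get_num(&npvar);                         /* performance variables (counters) */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("MPI_T exposes %d control and %d performance variables\n", ncvar, npvar);

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}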
Funding Acknowledgments
Funding Support by
Equipment Support by
Personnel Acknowledgments
Current Students
– A. Awan (Ph.D.)
– R. Biswas (M.S.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Venkatesh (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
Past Research Scientist
– K. Hamidouche
– S. Sur
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– W. Huang (Ph.D.)
– N. Islam (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– M.-W. Rahman (Ph.D.)
– C.-H. Chu (Ph.D.)
– S. Guganani (Ph.D.)
– J. Hashmi (Ph.D.)
– H. Javed (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Scientists
– X. Lu
– H. Subramoni
Past Programmers
– D. Bureddy
– M. Arnold
– J. Perkins
Current Research Specialist
– J. Smith
– M. Li (Ph.D.)
– D. Shankar (Ph.D.)
– H. Shi (Ph.D.)
– J. Zhang (Ph.D.)
– S. Marcarelli
– J. Vienne
– H. Wang
International Workshop on Communication
Architectures at Extreme Scale (ExaComm)
The ExaComm series has been held in conjunction with the Int'l Supercomputing Conference (ISC) since 2015
ExaComm 2017 will be held in conjunction with ISC ’17
http://nowlab.cse.ohio-state.edu/exacomm/
Keynote Talk: Jeffrey S. Vetter, ORNL
Best Paper Award
Technical Paper Submission Deadline: Friday, April 14, 2017
panda@cse.ohio-state.edu
Thank You!
The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
The MVAPICH2 Project
http://mvapich.cse.ohio-state.edu/

General Introduction to technologies that will be seen in the school General Introduction to technologies that will be seen in the school
General Introduction to technologies that will be seen in the school
 
OpenPOWER Acceleration of HPCC Systems
OpenPOWER Acceleration of HPCC SystemsOpenPOWER Acceleration of HPCC Systems
OpenPOWER Acceleration of HPCC Systems
 
Role of python in hpc
Role of python in hpcRole of python in hpc
Role of python in hpc
 
CloudLightning and the OPM-based Use Case
CloudLightning and the OPM-based Use CaseCloudLightning and the OPM-based Use Case
CloudLightning and the OPM-based Use Case
 
Accelerating TensorFlow with RDMA for high-performance deep learning
Accelerating TensorFlow with RDMA for high-performance deep learningAccelerating TensorFlow with RDMA for high-performance deep learning
Accelerating TensorFlow with RDMA for high-performance deep learning
 

Mehr von inside-BigData.com

Preparing to program Aurora at Exascale - Early experiences and future direct...
Preparing to program Aurora at Exascale - Early experiences and future direct...Preparing to program Aurora at Exascale - Early experiences and future direct...
Preparing to program Aurora at Exascale - Early experiences and future direct...inside-BigData.com
 
Transforming Private 5G Networks
Transforming Private 5G NetworksTransforming Private 5G Networks
Transforming Private 5G Networksinside-BigData.com
 
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...inside-BigData.com
 
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...inside-BigData.com
 
HPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural NetworksHPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural Networksinside-BigData.com
 
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean MonitoringBiohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoringinside-BigData.com
 
Machine Learning for Weather Forecasts
Machine Learning for Weather ForecastsMachine Learning for Weather Forecasts
Machine Learning for Weather Forecastsinside-BigData.com
 
HPC AI Advisory Council Update
HPC AI Advisory Council UpdateHPC AI Advisory Council Update
HPC AI Advisory Council Updateinside-BigData.com
 
Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19inside-BigData.com
 
Energy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic TuningEnergy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic Tuninginside-BigData.com
 
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODHPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODinside-BigData.com
 
Versal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud AccelerationVersal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud Accelerationinside-BigData.com
 
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance EfficientlyZettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance Efficientlyinside-BigData.com
 
Scaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's EraScaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's Erainside-BigData.com
 
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...inside-BigData.com
 
Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolversinside-BigData.com
 

Mehr von inside-BigData.com (20)

Major Market Shifts in IT
Major Market Shifts in ITMajor Market Shifts in IT
Major Market Shifts in IT
 
Preparing to program Aurora at Exascale - Early experiences and future direct...
Preparing to program Aurora at Exascale - Early experiences and future direct...Preparing to program Aurora at Exascale - Early experiences and future direct...
Preparing to program Aurora at Exascale - Early experiences and future direct...
 
Transforming Private 5G Networks
Transforming Private 5G NetworksTransforming Private 5G Networks
Transforming Private 5G Networks
 
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
 
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
 
HPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural NetworksHPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural Networks
 
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean MonitoringBiohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
 
Machine Learning for Weather Forecasts
Machine Learning for Weather ForecastsMachine Learning for Weather Forecasts
Machine Learning for Weather Forecasts
 
HPC AI Advisory Council Update
HPC AI Advisory Council UpdateHPC AI Advisory Council Update
HPC AI Advisory Council Update
 
Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19
 
Energy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic TuningEnergy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic Tuning
 
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODHPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
 
State of ARM-based HPC
State of ARM-based HPCState of ARM-based HPC
State of ARM-based HPC
 
Versal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud AccelerationVersal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud Acceleration
 
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance EfficientlyZettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
 
Scaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's EraScaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's Era
 
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
 
Data Parallel Deep Learning
Data Parallel Deep LearningData Parallel Deep Learning
Data Parallel Deep Learning
 
Making Supernovae with Jets
Making Supernovae with JetsMaking Supernovae with Jets
Making Supernovae with Jets
 
Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolvers
 

Kürzlich hochgeladen

COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Websitedgelyza
 
Introduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptxIntroduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptxMatsuo Lab
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationIES VE
 
UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6DianaGray10
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfAijun Zhang
 
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"DianaGray10
 
20230202 - Introduction to tis-py
20230202 - Introduction to tis-py20230202 - Introduction to tis-py
20230202 - Introduction to tis-pyJamie (Taka) Wang
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXTarek Kalaji
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemAsko Soukka
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaborationbruanjhuli
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioChristian Posta
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?IES VE
 
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Will Schroeder
 
Nanopower In Semiconductor Industry.pdf
Nanopower  In Semiconductor Industry.pdfNanopower  In Semiconductor Industry.pdf
Nanopower In Semiconductor Industry.pdfPedro Manuel
 
Governance in SharePoint Premium:What's in the box?
Governance in SharePoint Premium:What's in the box?Governance in SharePoint Premium:What's in the box?
Governance in SharePoint Premium:What's in the box?Juan Carlos Gonzalez
 
Videogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfVideogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfinfogdgmi
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding TeamAdam Moalla
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxGDSC PJATK
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024SkyPlanner
 

Kürzlich hochgeladen (20)

COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Website
 
Introduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptxIntroduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptx
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
 
UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdf
 
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
 
20230202 - Introduction to tis-py
20230202 - Introduction to tis-py20230202 - Introduction to tis-py
20230202 - Introduction to tis-py
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBX
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystem
 
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online CollaborationCOMPUTER 10: Lesson 7 - File Storage and Online Collaboration
COMPUTER 10: Lesson 7 - File Storage and Online Collaboration
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and Istio
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?
 
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
 
Nanopower In Semiconductor Industry.pdf
Nanopower  In Semiconductor Industry.pdfNanopower  In Semiconductor Industry.pdf
Nanopower In Semiconductor Industry.pdf
 
Governance in SharePoint Premium:What's in the box?
Governance in SharePoint Premium:What's in the box?Governance in SharePoint Premium:What's in the box?
Governance in SharePoint Premium:What's in the box?
 
Videogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfVideogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdf
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptx
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024
 
20230104 - machine vision
20230104 - machine vision20230104 - machine vision
20230104 - machine vision
 

High-Performance and Scalable Designs of Programming Models for Exascale Systems

  • 8. Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
• Scalable collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading (see the MPI + OpenMP sketch below)
• Integrated support for GPGPUs and accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-awareness
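For readers who want to see the MPI + X model in code, the following is a minimal, generic MPI + OpenMP sketch (not MVAPICH2-specific) that requests MPI_THREAD_MULTIPLE, the threading level targeted by the efficient multi-threading challenge above. The compilation command (e.g. mpicc -fopenmp) is an assumed toolchain detail.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Ask for full multi-threading support (MPI + OpenMP hybrid model) */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (provided < MPI_THREAD_MULTIPLE && rank == 0)
        printf("Warning: MPI_THREAD_MULTIPLE not available (got level %d)\n", provided);

    /* Each OpenMP thread does node-local work; MPI handles inter-node exchange */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        #pragma omp critical
        printf("Rank %d of %d, thread %d of %d\n",
               rank, size, tid, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}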
  • 9. HPCAC-Switzerland (April ‘17) 9Network Based Computing Laboratory • Extreme Low Memory Footprint – Memory per core continues to decrease • D-L-A Framework – Discover • Overall network topology (fat-tree, 3D, …), Network topology for processes for a given job • Node architecture, Health of network and node – Learn • Impact on performance and scalability • Potential for failure – Adapt • Internal protocols and algorithms • Process mapping • Fault-tolerance solutions – Low overhead techniques while delivering performance, scalability and fault-tolerance Additional Challenges for Designing Exascale Software Libraries
  • 10. HPCAC-Switzerland (April ‘17) 10Network Based Computing Laboratory Overview of the MVAPICH2 Project • High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE) – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002 – MVAPICH2-X (MPI + PGAS), Available since 2011 – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014 – Support for Virtualization (MVAPICH2-Virt), Available since 2015 – Support for Energy-Awareness (MVAPICH2-EA), Available since 2015 – Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015 – Used by more than 2,750 organizations in 83 countries – More than 413,000 (> 0.4 million) downloads from the OSU site directly – Empowering many TOP500 clusters (Nov ‘16 ranking) • 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China • 13th, 241,108-core (Pleiades) at NASA • 17th, 462,462-core (Stampede) at TACC • 40th, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology – Available with software stacks of many vendors and Linux Distros (RedHat and SuSE) – http://mvapich.cse.ohio-state.edu • Empowering Top500 systems for over a decade – System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> – Sunway TaihuLight (1st in Jun’16, 10M cores, 100 PFlops)
  • 11. MVAPICH2 Release Timeline and Downloads: chart of cumulative downloads from Sep 2004 through Mar 2017 (reaching roughly 400,000+), annotated with release points from MV 0.9.4 and MV2 0.9.0 up through MV2 2.3a, MV2-X 2.2, MV2-GDR 2.2rc1, MV2-MIC 2.0, and MV2-Virt 2.1rc2.
  • 12. HPCAC-Switzerland (April ‘17) 12Network Based Computing Laboratory Architecture of MVAPICH2 Software Family High Performance Parallel Programming Models Message Passing Interface (MPI) PGAS (UPC, OpenSHMEM, CAF, UPC++) Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk) High Performance and Scalable Communication Runtime Diverse APIs and Mechanisms Point-to- point Primitives Collectives Algorithms Energy- Awareness Remote Memory Access I/O and File Systems Fault Tolerance Virtualization Active Messages Job Startup Introspection & Analysis Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) Support for Modern Multi-/Many-core Architectures (Intel-Xeon, OpenPower, Xeon-Phi (MIC, KNL), NVIDIA GPGPU) Transport Protocols Modern Features RC XRC UD DC UMR ODP SR- IOV Multi Rail Transport Mechanisms Shared Memory CMA IVSHMEM Modern Features MCDRAM* NVLink* CAPI* * Upcoming
  • 13. HPCAC-Switzerland (April ‘17) 13Network Based Computing Laboratory MVAPICH2 Software Family High-Performance Parallel Programming Libraries MVAPICH2 Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE MVAPICH2-X Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime MVAPICH2-GDR Optimized MPI for clusters with NVIDIA GPUs MVAPICH2-Virt High-performance and scalable MPI for hypervisor and container based HPC cloud MVAPICH2-EA Energy aware and High-performance MPI MVAPICH2-MIC Optimized MPI for clusters with Intel KNC Microbenchmarks OMB Microbenchmarks suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs Tools OSU INAM Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration OEMT Utility to measure the energy consumption of MPI applications
  • 14. MVAPICH2 2.3a
• Released on 03/29/2017
• Major features and enhancements
– Based on and ABI-compatible with MPICH 3.2
– Support for collective offload using Mellanox's SHArP for Allreduce
• Enhanced tuning framework for Allreduce using SHArP
– Capability to run MPI jobs across multiple InfiniBand subnets
– Basic support for executing MPI jobs in Singularity
– Enhanced collective tuning for Intel Knights Landing and Intel Omni-Path
– Enhanced process-mapping support for multi-threaded MPI applications
• New MV2_CPU_BINDING_POLICY=hybrid mode
• New MV2_THREADS_PER_PROCESS parameter
– On-demand connection management for the PSM-CH3 and PSM2-CH3 channels
– PSM-CH3 and PSM2-CH3 job startup enhanced to use non-blocking PMI calls
– Enhanced debugging support for the PSM-CH3 and PSM2-CH3 channels
– Improved performance of architecture detection
– New run-time parameter MV2_SHOW_HCA_BINDING to show process-to-HCA binding
• MV2_SHOW_CPU_BINDING enhanced to display CPU bindings on all nodes
– OFA-IB-Nemesis channel deprecated
– Updated to hwloc version 1.11.6
(An illustrative launch line using the new binding parameters, plus a version check, follows below.)
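The runtime parameters listed above (MV2_CPU_BINDING_POLICY, MV2_THREADS_PER_PROCESS, MV2_SHOW_CPU_BINDING, MV2_SHOW_HCA_BINDING) are set in the environment of the launcher. The launch line in the comment is only an illustrative sketch with a hypothetical hostfile and process count, not taken from the release notes; the small C program uses the standard MPI_Get_library_version call to confirm which MVAPICH2 build is actually picked up.

#include <mpi.h>
#include <stdio.h>

/* Illustrative launch (hypothetical hostfile and process count):
 *   MV2_CPU_BINDING_POLICY=hybrid MV2_THREADS_PER_PROCESS=4 \
 *   MV2_SHOW_CPU_BINDING=1 MV2_SHOW_HCA_BINDING=1 \
 *   mpirun_rsh -np 8 -hostfile hosts ./check_version
 */
int main(int argc, char **argv)
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_library_version(version, &len);
    if (rank == 0)
        printf("MPI library: %s\n", version);   /* expect an MVAPICH2 2.3a string */
    MPI_Finalize();
    return 0;
}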
  • 15. HPCAC-Switzerland (April ‘17) 15Network Based Computing Laboratory • Scalability for million to billion processors – Support for highly-efficient inter-node and intra-node communication – Support for advanced IB mechanisms (ODP) – Extremely minimal memory footprint (DCT) – Scalable Job Start-up – Dynamic and Adaptive Tag Matching – Collective Support with SHArP • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs • InfiniBand Network Analysis and Monitoring (INAM) • Virtualization and HPC Cloud Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 16. One-way Latency: MPI over IB with MVAPICH2 (small- and large-message latency vs. message size). Small-message latencies fall in the 1.01-1.19 us range across the five platforms:
– TrueScale-QDR: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch
– ConnectX-3-FDR: 2.8 GHz deca-core (IvyBridge) Intel, PCI Gen3, IB switch
– ConnectIB-Dual FDR: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch
– ConnectX-4-EDR: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch
– Omni-Path: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, Omni-Path switch
  • 17. Bandwidth: MPI over IB with MVAPICH2 (unidirectional and bidirectional bandwidth vs. message size on the same five platforms as the latency results: TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path on deca-core Intel Haswell/IvyBridge nodes with PCI Gen3). Depending on the HCA/fabric, peak unidirectional bandwidth ranges from 3,373 MB/s to 12,590 MB/s and peak bidirectional bandwidth from 6,228 MB/s to 24,136 MB/s.
  • 18. MVAPICH2 Two-Sided Intra-Node Performance (shared-memory and kernel-based zero-copy support: LiMIC and CMA), measured on Intel Ivy Bridge:
– Small-message latency: 0.18 us intra-socket, 0.45 us inter-socket
– Bandwidth (intra-socket and inter-socket, comparing the CMA, shared-memory, and LiMIC channels): peaks of about 14,250 MB/s and 13,749 MB/s
(The ping-pong pattern behind such measurements is sketched below.)
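The latency numbers above and in the preceding slides come from osu_latency-style microbenchmarks. The sketch below shows the basic ping-pong pattern such benchmarks use; it is a simplified stand-in, not the actual OSU Micro-Benchmarks code (warm-up iterations, message-size sweeps, and windowing are omitted). Run it with exactly two ranks, placed on the same node for intra-node numbers or on different nodes for inter-node numbers.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE 8       /* small message, as in the latency plots */
#define ITERS    10000

int main(int argc, char **argv)
{
    int rank;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(MSG_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    /* Round-trip time divided by two gives the one-way latency */
    if (rank == 0)
        printf("One-way latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * ITERS));

    free(buf);
    MPI_Finalize();
    return 0;
}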
  • 19. HPCAC-Switzerland (April ‘17) 19Network Based Computing Laboratory Minimizing Memory Footprint by Direct Connect (DC) Transport Node 0 P1 P0 Node 1 P3 P2 Node 3 P7 P6 Node 2 P5 P4IB Network • Constant connection cost (One QP for any peer) • Full Feature Set (RDMA, Atomics etc) • Separate objects for send (DC Initiator) and receive (DC Target) – DC Target identified by “DCT Number” – Messages routed with (DCT Number, LID) – Requires same “DC Key” to enable communication • Available since MVAPICH2-X 2.2a 0 0.5 1 160 320 620 NormalizedExecution Time Number of Processes NAMD - Apoa1: Large data set RC DC-Pool UD XRC 10 22 47 97 1 1 1 2 10 10 10 10 1 1 3 5 1 10 100 80 160 320 640 ConnectionMemory(KB) Number of Processes Memory Footprint for Alltoall RC DC-Pool UD XRC H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand : Early Experiences. IEEE International Supercomputing Conference (ISC ’14)
  • 20. HPCAC-Switzerland (April ‘17) 20Network Based Computing Laboratory • Near-constant MPI and OpenSHMEM initialization time at any process count • 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes • Memory consumption reduced for remote endpoint information by O(processes per node) • 1GB Memory saved per node with 1M processes and 16 processes per node Towards High Performance and Scalable Startup at Exascale P M O Job Startup Performance MemoryRequiredtoStore EndpointInformation a b c d eP M PGAS – State of the art MPI – State of the art O PGAS/MPI – Optimized PMIX_Ring PMIX_Ibarrier PMIX_Iallgather Shmem based PMI b c d e a On-demand Connection On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D K Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS ’15) PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D K Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia ’14) Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D K Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid ’15) SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D K Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid ’16) a b c d e
  • 21. Scalable Job Startup with Non-blocking PMI
• On 16,384 processes / 1,024 nodes (Sandy Bridge + FDR), with the non-blocking PMI implementation in mpirun_rsh:
– Near-constant MPI_Init time at any scale; MPI_Init is 59 times faster
– Hello World (MPI_Init + MPI_Finalize) takes 5.7 times less time with 8,192 processes (512 nodes)
• On 65,536 processes / 1,024 nodes (KNL + Omni-Path), with on-demand connection management for PSM and Omni-Path and efficient intra-node startup for Knights Landing:
– MPI_Init takes 5.8 seconds; Hello World takes 21 seconds
• Two approaches: co-designing the resource manager and MPI library, and a resource-manager-independent design
• New designs available in MVAPICH2 2.3a and as patches for SLURM 15.08.8 and SLURM 16.05.1
(A simple way to time MPI_Init is sketched below.)
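To reproduce the kind of startup measurements shown above, the cost can be separated into the MPI_Init portion and the full Hello World run time. The sketch below is an assumed measurement harness, not the actual benchmark used on the Sandy Bridge or KNL systems; it takes a wall-clock reading before initialization (MPI_Wtime cannot be used there) and reports the slowest process's MPI_Init time.

#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now(void)          /* wall clock usable before MPI_Init */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double start = now(), init_time, max_init;
    int rank;

    MPI_Init(&argc, &argv);
    init_time = now() - start;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* The slowest process determines the perceived startup time */
    MPI_Reduce(&init_time, &max_init, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Max MPI_Init time: %.3f s\n", max_init);

    MPI_Finalize();
    return 0;
}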
  • 22. HPCAC-Switzerland (April ‘17) 22Network Based Computing Laboratory • Introduced by Mellanox to avoid pinning the pages of registered memory regions • ODP-aware runtime could reduce the size of pin-down buffers while maintaining performance • Available in MVAPICH2-X 2.2 On-Demand Paging (ODP) M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda, “Designing MPI Library with On-Demand Paging (ODP) of InfiniBand: Challenges and Benefits”, SC, 2016 1 10 100 1000 ExecutionTime(s) Applications (64 Processes) Pin-down ODP
  • 23. HPCAC-Switzerland (April ‘17) 23Network Based Computing Laboratory Dynamic and Adaptive Tag Matching Normalized Total Tag Matching Time at 512 Processes Normalized to Default (Lower is Better) Normalized Memory Overhead per Process at 512 Processes Compared to Default (Lower is Better) Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee] Challenge Tag matching is a significant overhead for receivers Existing Solutions are - Static and do not adapt dynamically to communication pattern - Do not consider memory overhead Solution A new tag matching design - Dynamically adapt to communication patterns - Use different strategies for different ranks - Decisions are based on the number of request object that must be traversed before hitting on the required one Results Better performance than other state-of-the art tag- matching schemes Minimum memory consumption Will be available in future MVAPICH2 releases
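Tag-matching overhead shows up when a receiver pre-posts many receives with different tags (or wildcards) and every incoming message must be searched against that posted list. The generic sketch below, with an arbitrarily chosen request count, only illustrates the kind of workload the adaptive design above targets; it is not the design itself. Run it with at least two ranks.

#include <mpi.h>
#include <stdlib.h>

#define NREQ 1024    /* many outstanding receives lead to long match lists */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *bufs = malloc(NREQ * sizeof(int));
    MPI_Request reqs[NREQ];

    if (rank == 0) {
        /* Pre-post NREQ receives, each with a distinct tag and a wildcard source */
        for (int i = 0; i < NREQ; i++)
            MPI_Irecv(&bufs[i], 1, MPI_INT, MPI_ANY_SOURCE, i, MPI_COMM_WORLD, &reqs[i]);
        MPI_Waitall(NREQ, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {
        /* Send in reverse tag order so each message traverses the posted-receive list */
        for (int i = NREQ - 1; i >= 0; i--)
            MPI_Send(&i, 1, MPI_INT, 0, i, MPI_COMM_WORLD);
    }

    free(bufs);
    MPI_Finalize();
    return 0;
}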
  • 24. Advanced Allreduce Collective Designs Using SHArP: osu_allreduce (OSU Micro-Benchmarks) with MVAPICH2 2.3a, comparing MVAPICH2 against MVAPICH2-SHArP for small messages (4 B to 256 B) at 4 and 28 processes per node (PPN) on up to 16 nodes. SHArP offload reduces allreduce latency by up to 2.3x at 4 PPN and by 1.4x-1.5x at 28 PPN.
  • 25. Advanced Allreduce Collective Designs Using SHArP (application level, based on MVAPICH2 2.3a, 4-16 nodes at 28 PPN): 12% reduction in the average DDOT allreduce time of HPCG and 13% reduction in the mesh refinement time of MiniAMR. (A generic MPI_Allreduce call that would benefit from this offload is sketched below.)
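The latency and application numbers above are for MPI_Allreduce offloaded to the switch via SHArP. A plain allreduce call like the one below benefits transparently once the feature is enabled at the runtime level; the environment variable named in the comment is an assumption about how the feature is switched on and should be verified against the MVAPICH2 user guide, and the launch line is illustrative only.

#include <mpi.h>
#include <stdio.h>

/* Assumed enabling switch and illustrative launch (verify against the user guide):
 *   MV2_ENABLE_SHARP=1 mpirun_rsh -np 448 -hostfile hosts ./allreduce_demo
 */
int main(int argc, char **argv)
{
    double local = 1.0, global = 0.0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Small allreduce (8 bytes), the message range where switch offload helps most */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum across ranks: %f\n", global);
    MPI_Finalize();
    return 0;
}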
  • 26. HPCAC-Switzerland (April ‘17) 26Network Based Computing Laboratory • Scalability for million to billion processors • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs • InfiniBand Network Analysis and Monitoring (INAM) • Virtualization and HPC Cloud Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 27. HPCAC-Switzerland (April ‘17) 27Network Based Computing Laboratory MVAPICH2-X for Hybrid MPI + PGAS Applications • Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI – Possible deadlock if both runtimes are not progressed – Consumes more network resource • Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF – Available with since 2012 (starting with MVAPICH2-X 1.9) – http://mvapich.cse.ohio-state.edu
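With the unified runtime, MPI and OpenSHMEM calls can coexist in one executable, which is the usage the slide above describes. The sketch below is a minimal hybrid example under the assumption that it is built with the MVAPICH2-X compiler wrappers (e.g. oshcc) and that the initialization/finalization ordering of the two models follows the MVAPICH2-X user guide; it uses only standard OpenSHMEM 1.x calls and assumes MPI ranks and OpenSHMEM PEs coincide under the unified runtime.

#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);      /* both models are served by the unified runtime */
    shmem_init();

    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    /* One-sided PGAS phase: put my PE number into my right neighbor's symmetric buffer */
    int *sym = shmem_malloc(sizeof(int));
    *sym = -1;
    shmem_barrier_all();
    shmem_int_put(sym, &me, 1, (me + 1) % npes);
    shmem_barrier_all();
    printf("PE %d received %d (its left neighbor) via OpenSHMEM put\n", me, *sym);

    /* Two-sided MPI phase: a collective over the same set of processes */
    int rank, sum = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks via MPI_Allreduce: %d\n", sum);

    shmem_free(sym);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}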
  • 28. HPCAC-Switzerland (April ‘17) 28Network Based Computing Laboratory UPC++ Support in MVAPICH2-X MPI + {UPC++} Application GASNet Interfaces UPC++ Interface Network Conduit (MPI) MVAPICH2-X Unified Communication Runtime (UCR) MPI + {UPC++} Application UPC++ Runtime MPI Interfaces • Full and native support for hybrid MPI + UPC++ applications • Better performance compared to IBV and MPI conduits • OSU Micro-benchmarks (OMB) support for UPC++ • Available since MVAPICH2-X (2.2rc1) 0 5 10 15 20 25 30 35 40 1K 2K 4K 8K 16K 32K 64K 128K256K512K 1M Time(ms) Message Size (bytes) GASNet_MPI GASNET_IBV MV2-X 14x Inter-node Broadcast (64 nodes 1:ppn) MPI Runtime
  • 29. HPCAC-Switzerland (April ‘17) 29Network Based Computing Laboratory Application Level Performance with Graph500 and Sort Graph500 Execution Time J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013 J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012 0 5 10 15 20 25 30 35 4K 8K 16K Time(s) No. of Processes MPI-Simple MPI-CSC MPI-CSR Hybrid (MPI+OpenSHMEM) 13X 7.6X • Performance of Hybrid (MPI+ OpenSHMEM) Graph500 Design • 8,192 processes - 2.4X improvement over MPI-CSR - 7.6X improvement over MPI-Simple • 16,384 processes - 1.5X improvement over MPI-CSR - 13X improvement over MPI-Simple J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013. Sort Execution Time 0 500 1000 1500 2000 2500 3000 500GB-512 1TB-1K 2TB-2K 4TB-4K Time(seconds) Input Data - No. of Processes MPI Hybrid 51% • Performance of Hybrid (MPI+OpenSHMEM) Sort Application • 4,096 processes, 4 TB Input Size - MPI – 2408 sec; 0.16 TB/min - Hybrid – 1172 sec; 0.36 TB/min - 51% improvement over MPI-design
  • 30. HPCAC-Switzerland (April ‘17) 30Network Based Computing Laboratory • Scalability for million to billion processors • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs – CUDA-aware MPI – GPUDirect RDMA (GDR) Support – Preliminary Evaluation on OpenPower • InfiniBand Network Analysis and Monitoring (INAM) • Virtualization and HPC Cloud Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 31. GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• At sender: MPI_Send(s_devbuf, size, …);
• At receiver: MPI_Recv(r_devbuf, size, …);
• Standard MPI interfaces used for unified data movement; the GPU-specific handling happens inside MVAPICH2
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers
• High performance and high productivity (a complete two-rank sketch follows below)
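The fragment above (MPI_Send / MPI_Recv on s_devbuf / r_devbuf) is the whole point of a CUDA-aware MPI: device pointers are passed directly to standard MPI calls. The minimal sketch below fills in the surrounding boilerplate; it assumes a CUDA-aware build such as MVAPICH2-GDR, one GPU per rank, exactly two ranks, and compilation with mpicc plus the CUDA toolkit (or nvcc).

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define N (1 << 20)   /* 1M floats */

int main(int argc, char **argv)
{
    int rank;
    float *devbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaSetDevice(0);                       /* one GPU per rank assumed */
    cudaMalloc((void **)&devbuf, N * sizeof(float));
    cudaMemset(devbuf, 0, N * sizeof(float));

    if (rank == 0) {
        /* Device pointer handed straight to MPI, no cudaMemcpy staging needed */
        MPI_Send(devbuf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(devbuf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d floats directly into GPU memory\n", N);
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}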
  • 32. HPCAC-Switzerland (April ‘17) 32Network Based Computing Laboratory CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases • Support for MPI communication from NVIDIA GPU device memory • High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU) • High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU) • Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node • Optimized and tuned collectives for GPU device buffers • MPI datatype support for point-to-point and collective communication from GPU device buffers • Unified memory
  • 33. Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR): GPU-to-GPU inter-node latency, bandwidth, and bidirectional bandwidth, comparing MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR. MV2-GDR 2.2 delivers 2.18 us GPU-to-GPU inter-node latency, with chart-annotated gains of roughly 2x-11x over MV2-GDR 2.0b and MV2 without GDR across the three metrics. Platform: MVAPICH2-GDR 2.2 on an Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 8.0, and Mellanox OFED 3.0 with GPU-Direct RDMA.
  • 34. Application-Level Evaluation (HOOMD-blue)
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB), HOOMD-blue version 1.0.5
• Average time steps per second (TPS) for 64K- and 256K-particle runs on 4-32 processes: MV2+GDR delivers about 2x higher TPS than MV2
• GDRCOPY-enabled configuration: MV2_USE_CUDA=1, MV2_IBA_HCA=mlx5_0, MV2_IBA_EAGER_THRESHOLD=32768, MV2_VBUF_TOTAL_SIZE=32768, MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768, MV2_USE_GPUDIRECT_GDRCOPY=1, MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
  • 35. HPCAC-Switzerland (April ‘17) 35Network Based Computing Laboratory Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland 0 0.2 0.4 0.6 0.8 1 1.2 16 32 64 96NormalizedExecutionTime Number of GPUs CSCS GPU cluster Default Callback-based Event-based 0 0.2 0.4 0.6 0.8 1 1.2 4 8 16 32 NormalizedExecutionTime Number of GPUs Wilkes GPU Cluster Default Callback-based Event-based • 2X improvement on 32 GPUs nodes • 30% improvement on 96 GPU nodes (8 GPUs/node) C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16 On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application Cosmo model: http://www2.cosmo-model.org/content /tasks/operational/meteoSwiss/
  • 36. MVAPICH2-GDR on OpenPower (Preliminary Evaluation): intra-node numbers in a basic configuration on a POWER8NVL ppc64le node @ 4.023 GHz with MVAPICH2-GDR 2.2 and MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=32768. Charts compare GDRCOPY disabled vs. enabled (GDR=1, GDRCOPY=0/1) for osu_latency [D-D], osu_latency [H-D], and osu_bw [H-D].
  • 37. HPCAC-Switzerland (April ‘17) 37Network Based Computing Laboratory • Scalability for million to billion processors • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs • Energy-Awareness • InfiniBand Network Analysis and Monitoring (INAM) • Virtualization and HPC Cloud Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 38. HPCAC-Switzerland (April ‘17) 38Network Based Computing Laboratory Overview of OSU INAM • A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime – http://mvapich.cse.ohio-state.edu/tools/osu-inam/ • Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes • OSU INAM v0.9.1 released on 05/13/16 • Significant enhancements to user interface to enable scaling to clusters with thousands of nodes • Improve database insert times by using 'bulk inserts‘ • Capability to look up list of nodes communicating through a network link • Capability to classify data flowing over a network link at job level and process level granularity in conjunction with MVAPICH2-X 2.2rc1 • “Best practices “ guidelines for deploying OSU INAM on different clusters • Capability to analyze and profile node-level, job-level and process-level activities for MPI communication – Point-to-Point, Collectives and RMA • Ability to filter data based on type of counters using “drop down” list • Remotely monitor various metrics of MPI processes at user specified granularity • "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X • Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
  • 39. HPCAC-Switzerland (April ‘17) 39Network Based Computing Laboratory OSU INAM Features • Show network topology of large clusters • Visualize traffic pattern on different links • Quickly identify congested links/links in error state • See the history unfold – play back historical state of the network Comet@SDSC --- Clustered View (1,879 nodes, 212 switches, 4,377 network links) Finding Routes Between Nodes
  • 40. HPCAC-Switzerland (April ‘17) 40Network Based Computing Laboratory OSU INAM Features (Cont.) Visualizing a Job (5 Nodes) • Job level view • Show different network metrics (load, error, etc.) for any live job • Play back historical data for completed jobs to identify bottlenecks • Node level view - details per process or per node • CPU utilization for each rank/node • Bytes sent/received for MPI operations (pt-to-pt, collective, RMA) • Network metrics (e.g. XmitDiscard, RcvError) per rank/node Estimated Process Level Link Utilization • Estimated Link Utilization view • Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1 • Job level and • Process level
  • 41. HPCAC-Switzerland (April ‘17) 41Network Based Computing Laboratory • Scalability for million to billion processors • Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …) • Integrated Support for GPGPUs • InfiniBand Network Analysis and Monitoring (INAM) • Virtualization and HPC Cloud Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 42. Can HPC and Virtualization be Combined?
• Virtualization has many benefits: fault-tolerance, job migration, and compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.1 (with and without OpenStack) is publicly available
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters?, EuroPar '14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds, CCGrid '15
  • 43. HPCAC-Switzerland (April ‘17) 43Network Based Computing Laboratory • Redesign MVAPICH2 to make it virtual machine aware – SR-IOV shows near to native performance for inter-node point to point communication – IVSHMEM offers zero-copy access to data on shared memory of co-resident VMs – Locality Detector: maintains the locality information of co-resident virtual machines – Communication Coordinator: selects the communication channel (SR-IOV, IVSHMEM) adaptively Overview of MVAPICH2-Virt with SR-IOV and IVSHMEM Host Environment Guest 1 Hypervisor PF Driver Infiniband Adapter Physical Function user space kernel space MPI proc PCI Device VF Driver Guest 2 user space kernel space MPI proc PCI Device VF Driver Virtual Function Virtual Function /dev/shm/ IV-SHM IV-Shmem Channel SR-IOV Channel J. Zhang, X. Lu, J. Jose, R. Shi, D. K. Panda. Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? Euro-Par, 2014. J. Zhang, X. Lu, J. Jose, R. Shi, M. Li, D. K. Panda. High Performance MPI Library over SR-IOV Enabled InfiniBand Clusters. HiPC, 2014.
  • 44. HPCAC-Switzerland (April ‘17) 44Network Based Computing Laboratory • OpenStack is one of the most popular open-source solutions to build clouds and manage virtual machines • Deployment with OpenStack – Supporting SR-IOV configuration – Supporting IVSHMEM configuration – Virtual Machine aware design of MVAPICH2 with SR-IOV • An efficient approach to build HPC Clouds with MVAPICH2-Virt and OpenStack MVAPICH2-Virt with SR-IOV and IVSHMEM over OpenStack J. Zhang, X. Lu, M. Arnold, D. K. Panda. MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds. CCGrid, 2015.
  • 45. HPCAC-Switzerland (April ‘17) 45Network Based Computing Laboratory 0 50 100 150 200 250 300 350 400 milc leslie3d pop2 GAPgeofem zeusmp2 lu ExecutionTime(s) MV2-SR-IOV-Def MV2-SR-IOV-Opt MV2-Native 1% 9.5% 0 1000 2000 3000 4000 5000 6000 22,20 24,10 24,16 24,20 26,10 26,16 ExecutionTime(ms) Problem Size (Scale, Edgefactor) MV2-SR-IOV-Def MV2-SR-IOV-Opt MV2-Native 2% • 32 VMs, 6 Core/VM • Compared to Native, 2-5% overhead for Graph500 with 128 Procs • Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs Application-Level Performance on Chameleon SPEC MPI2007Graph500 5%
  • 46. HPCAC-Switzerland (April ‘17) 46Network Based Computing Laboratory 0 500 1000 1500 2000 2500 3000 3500 4000 22, 16 22, 20 24, 16 24, 20 26, 16 26, 20 ExecutionTime(ms) Problem Size (Scale, Edgefactor) Container-Def Container-Opt Native 0 10 20 30 40 50 60 70 80 90 100 MG.D FT.D EP.D LU.D CG.D ExecutionTime(s) Container-Def Container-Opt Native • 64 Containers across 16 nodes, pining 4 Cores per Container • Compared to Container-Def, up to 11% and 16% of execution time reduction for NAS and Graph 500 • Compared to Native, less than 9 % and 4% overhead for NAS and Graph 500 • Optimized Container support is available in the latest MVAPICH2-Virt Containers Support: Application-Level Performance on Chameleon Graph 500NAS 11% 16%
  • 47. MVAPICH2 Point-to-Point and Collectives Performance with Singularity: latency, bandwidth, broadcast, and allreduce microbenchmarks (intra-node and inter-node) comparing runs inside Singularity containers against native execution. Performance is very close to native across message sizes.
  • 48. HPCAC-Switzerland (April ‘17) 48Network Based Computing Laboratory High Performance VM Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters J. Zhang, X. Lu, D. K. Panda. High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters. IPDPS, 2017 (Support will be available in upcoming MVAPICH2-Virt) • Migration with SR-IOV device has to handle the challenges of detachment/re-attachment of virtualized IB device and IB connection • Consist of SR-IOV enabled IB Cluster and External Migration Controller • Multiple parallel libraries to notify MPI applications during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, migration status) • Handle the IB connection suspending and reactivating • Propose Progress engine (PE) and migration thread based (MT) design to optimize VM migration and MPI application performance
  • 49. HPCAC-Switzerland (April ‘17) 49Network Based Computing Laboratory • Scientific Computing – Message Passing Interface (MPI), including MPI + OpenMP, is the Dominant Programming Model – Many discussions towards Partitioned Global Address Space (PGAS) • UPC, OpenSHMEM, CAF, UPC++ etc. – Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC) • Deep Learning – Caffe, CNTK, TensorFlow, and many more Major Computing Categories
• 50. Deep Learning: New Challenges for MPI Runtimes
  • Deep Learning frameworks are a different game altogether
    – Unusually large message sizes (on the order of megabytes)
    – Most communication based on GPU buffers
  • How to address these newer requirements?
    – GPU-specific communication libraries: NVIDIA's NCCL provides inter-GPU communication
    – CUDA-Aware MPI (MVAPICH2-GDR) provides support for GPU-based communication (see the sketch below)
  • Can we exploit CUDA-Aware MPI and NCCL to support Deep Learning applications?
  [Figure: Hierarchical Communication (Knomial + NCCL ring): a k-nomial tree for inter-node communication and an NCCL ring among the GPUs within a node, connected via the CPU and PLX PCIe switches]
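To make the CUDA-Aware MPI point concrete, here is a minimal sketch (illustrative buffer and size names, not MVAPICH2-GDR internals): a device pointer obtained from cudaMalloc is handed directly to MPI_Bcast, and a CUDA-aware runtime moves the data without explicit staging copies to the host.

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        int rank;
        const size_t nbytes = 8 << 20;    /* 8 MB, i.e., a DL-scale message */
        void *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaMalloc(&d_buf, nbytes);       /* buffer lives in GPU memory */

        /* With a CUDA-aware MPI (e.g., MVAPICH2-GDR) the device pointer is
         * accepted as-is; a non-CUDA-aware MPI would need cudaMemcpy to a
         * host buffer before and after the call. */
        MPI_Bcast(d_buf, (int)nbytes, MPI_BYTE, 0, MPI_COMM_WORLD);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }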
• 51. Efficient Broadcast: MVAPICH2-GDR and NCCL
  • NCCL has some limitations
    – Only works within a single node, so no scale-out across multiple nodes
    – Degradation across the IOH (socket) for scale-up within a node
  • We propose an optimized MPI_Bcast
    – Communication of very large GPU buffers (on the order of megabytes)
    – Scale-out on a large number of dense multi-GPU nodes
  • Hierarchical communication that efficiently exploits (see the sketch below):
    – CUDA-Aware MPI_Bcast in MV2-GDR
    – NCCL broadcast primitive
  [Figures: OSU micro-benchmark broadcast latency (log scale) vs. message size, up to 100x improvement for MV2-GDR-Opt over MV2-GDR; Microsoft CNTK DL framework runtime vs. number of GPUs, 25% average improvement]
  A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning, 23rd European MPI Users' Group Meeting (EuroMPI 16), Sep 2016 [Best Paper Runner-Up]
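A minimal sketch of the hierarchical idea, assuming the intra-node communicator (e.g., from MPI_Comm_split_type), a per-node leader communicator, and a node-local NCCL communicator (via ncclCommInitRank, one rank per GPU) have already been created. Names are illustrative; this is not the MV2-GDR implementation.

    #include <mpi.h>
    #include <nccl.h>
    #include <cuda_runtime.h>

    /* Broadcast a GPU buffer rooted at world rank 0, assumed to be the
     * leader (node-local rank 0) on its node.
     *   leader_comm: one leader rank per node (MPI_COMM_NULL elsewhere)
     *   node_comm:   intra-node communicator
     *   nccl_comm:   NCCL communicator over this node's GPUs, leader = 0 */
    static void hierarchical_bcast(void *d_buf, size_t nbytes,
                                   MPI_Comm leader_comm, MPI_Comm node_comm,
                                   ncclComm_t nccl_comm, cudaStream_t stream)
    {
        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        /* Stage 1 (inter-node): CUDA-aware MPI_Bcast among node leaders,
         * passing the device pointer directly. */
        if (node_rank == 0)
            MPI_Bcast(d_buf, (int)nbytes, MPI_BYTE, 0, leader_comm);

        /* Stage 2 (intra-node): NCCL ring broadcast from the leader to the
         * remaining GPUs on the same node. */
        ncclBcast(d_buf, nbytes, ncclChar, 0, nccl_comm, stream);
        cudaStreamSynchronize(stream);
    }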
• 52. Large Message Optimized Collectives for Deep Learning
  [Figures: latency (ms) vs. message size (MB) and vs. number of GPUs for Reduce (192 GPUs / 64 MB), Allreduce (64 GPUs / 128 MB), and Bcast (64 GPUs / 128 MB)]
  • MV2-GDR provides optimized collectives for large message sizes
  • Optimized Reduce, Allreduce, and Bcast (see the gradient-allreduce sketch below for where such messages arise)
  • Good scaling with a large number of GPUs
  • Available with MVAPICH2-GDR 2.2 GA
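These message sizes typically come from data-parallel training, where every rank averages a GPU-resident gradient buffer each iteration. A minimal sketch with a CUDA-aware, in-place MPI_Allreduce (illustrative names; not OSU-Caffe or MVAPICH2-GDR code):

    #include <mpi.h>

    /* One training step's gradient aggregation: gradients stay on the GPU
     * and are summed in place across all ranks; dividing by the number of
     * ranks to obtain the average is omitted for brevity. Buffers of tens
     * of MB put this squarely in the large-message regime targeted above. */
    static void allreduce_gradients(float *d_grad, size_t nelem, MPI_Comm comm)
    {
        MPI_Allreduce(MPI_IN_PLACE, d_grad, (int)nelem, MPI_FLOAT, MPI_SUM, comm);
    }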
• 53. OSU-Caffe: Scalable Deep Learning
  • Caffe: a flexible and layered Deep Learning framework
  • Benefits and weaknesses
    – Multi-GPU training within a single node
    – Performance degradation for GPUs across different sockets
    – Limited scale-out
  • OSU-Caffe: MPI-based parallel training
    – Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
    – GoogLeNet network on the ImageNet dataset
  [Figure: training time (s) vs. number of GPUs (8-128) for GoogLeNet (ImageNet), comparing Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); Caffe at 128 GPUs is marked as an invalid use case]
  OSU-Caffe is publicly available from http://hidl.cse.ohio-state.edu
  A. Awan, K. Hamidouche, J. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters, PPoPP 2017
• 54. MVAPICH2 – Plans for Exascale
  • Performance and memory scalability toward 1M cores
  • Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, …)
  • MPI + Task*
  • Enhanced optimization for GPU support and accelerators
  • Enhanced communication schemes for upcoming architectures
    – Knights Landing with MCDRAM*
    – NVLINK*
    – CAPI*
  • Extended topology-aware collectives
  • Extended energy-aware designs and virtualization support
  • Extended support for the MPI Tools Interface (as in MPI 3.1)
  • Extended checkpoint-restart and migration support with SCR
  • Support for * features will be available in future MVAPICH2 releases
• 55. Funding Acknowledgments
  • Funding support by: [sponsor logos]
  • Equipment support by: [vendor logos]
• 56. Personnel Acknowledgments
  Current Students: A. Awan (Ph.D.), M. Bayatpour (Ph.D.), R. Biswas (M.S.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), H. Javed (Ph.D.), M. Li (Ph.D.), D. Shankar (Ph.D.), H. Shi (Ph.D.), J. Zhang (Ph.D.)
  Past Students: A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), N. Islam (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), K. Kandalla (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), S. Krishnamoorthy (M.S.), K. Kulkarni (M.S.), R. Kumar (M.S.), P. Lai (M.S.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), M.-W. Rahman (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), H. Subramoni (Ph.D.), S. Sur (Ph.D.), K. Vaidyanathan (Ph.D.), A. Venkatesh (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)
  Current Research Scientists: X. Lu, H. Subramoni
  Past Research Scientists: K. Hamidouche, S. Sur
  Past Post-Docs: D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang
  Past Programmers: M. Arnold, D. Bureddy, J. Perkins
  Current Research Specialist: J. Smith
• 57. International Workshop on Communication Architectures at Extreme Scale (ExaComm)
  • The ExaComm series has been held with the Int'l Supercomputing Conference (ISC) since 2015
  • ExaComm 2017 will be held in conjunction with ISC '17
  • Keynote talk: Jeffrey S. Vetter, ORNL
  • Best Paper Award
  • Technical paper submission deadline: Friday, April 14, 2017
  http://nowlab.cse.ohio-state.edu/exacomm/
• 58. Thank You!
  panda@cse.ohio-state.edu
  Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
  The MVAPICH2 Project: http://mvapich.cse.ohio-state.edu/
  The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/