High Performance and Scalable MPI+X Library for
Emerging HPC Clusters
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
Talk at Intel HPC Developer Conference (SC ‘16)
by
Khaled Hamidouche
The Ohio State University
E-mail: hamidouc@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~hamidouc
High-End Computing (HEC): ExaFlop & ExaByte
• ExaFlop & HPC: 100-200 PFlops in 2016-2018; 1 EFlops in 2023-2024?
• ExaByte & BigData: 10K-20K EBytes in 2016-2018; 40K EBytes in 2020?
Trends for Commodity Computing Clusters in the Top 500 List (http://www.top500.org)
[Chart: number of clusters and percentage of clusters in the Top500 over time; commodity clusters now account for roughly 85% of the list]
Drivers of Modern HPC Cluster Architectures
• Multi-core/many-core technologies
• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD
• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)
Building blocks (example systems: Tianhe-2, Titan, Stampede, Tianhe-1A):
  – Multi-core processors; SSD, NVMe-SSD, NVRAM
  – High-performance interconnects (InfiniBand): <1 usec latency, 100 Gbps bandwidth
  – Accelerators/coprocessors: high compute density, high performance/watt; >1 TFlop DP on a chip
Designing Communication Libraries for Multi-Petaflop and Exaflop Systems: Challenges
• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Middleware: communication library or runtime for the programming models
  – Point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Underlying hardware: networking technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), multi/many-core architectures, accelerators (NVIDIA and MIC)
• Co-design opportunities and challenges across the various layers: performance, scalability, fault-resilience
Exascale Programming Models
• The community believes the exascale programming model will be MPI+X
• But what is X?
  – Can it be just OpenMP?
• Many different environments and systems are emerging
  – Different `X' will satisfy the respective needs
• Next-generation programming models: MPI+X, where X = OpenMP, OpenACC, CUDA, PGAS, Tasks, ...
  – Target scenarios: highly-threaded systems (KNL), irregular communications, heterogeneous computing with accelerators
MPI+X Programming Model: Broad Challenges at Exascale
• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading (OpenMP)
• Integrated support for GPGPUs and accelerators (CUDA)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, CAF, ...)
• Virtualization
• Energy-awareness
Additional Challenges for Designing Exascale Software Libraries
• Extreme low memory footprint
  – Memory per core continues to decrease
• D-L-A Framework
  – Discover
    • Overall network topology (fat-tree, 3D, ...), network topology for the processes of a given job
    • Node architecture, health of network and nodes
  – Learn
    • Impact on performance and scalability
    • Potential for failure
  – Adapt
    • Internal protocols and algorithms
    • Process mapping
    • Fault-tolerance solutions
  – Low-overhead techniques while delivering performance, scalability, and fault-tolerance
Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,690 organizations in 83 countries
– More than 402,000 (> 0.4 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Nov ‘16 ranking)
• 1st ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China
• 13th ranked 241,108-core cluster (Pleiades) at NASA
• 17th ranked 519,640-core cluster (Stampede) at TACC
• 40th ranked 76,032-core cluster (Tsubame 2.5) at Tokyo Institute of Technology and many others
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->
Sunway TaihuLight at NSC, Wuxi, China (1st in Nov’16, 10,649,640 cores, 93 PFlops)
Outline
• Hybrid MPI+OpenMP Models for Highly-threaded Systems
• Hybrid MPI+PGAS Models for Irregular Applications
• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators
Highly Threaded Systems
• Systems like KNL
• MPI+OpenMP is seen as the best fit
  – 1 MPI process per socket for multi-core systems
  – 4-8 MPI processes per KNL
  – Each MPI process launches OpenMP threads
• However, current MPI runtimes do not handle the hybrid model efficiently
  – Most applications use the Funneled mode: only the MPI processes perform communication
  – Communication phases become the bottleneck
• Multi-endpoint based designs
  – Transparently use threads inside the MPI runtime
  – Increase concurrency
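For reference, a minimal sketch (not from the talk; loop bounds are illustrative) of the Funneled pattern described above: OpenMP threads carry the compute phase, and only the main thread of each MPI process enters the communication phase.

  /* MPI+OpenMP in Funneled mode: threads compute, main thread communicates */
  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int provided, rank;
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      double local = 0.0, global = 0.0;
      #pragma omp parallel for reduction(+:local)
      for (int i = 0; i < 1000000; i++)
          local += i * 1e-6;                       /* threaded compute phase */

      /* Communication phase: only the main thread calls MPI */
      MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

      if (rank == 0) printf("global = %f\n", global);
      MPI_Finalize();
      return 0;
  }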
MPI and OpenMP
• MPI-4 is expected to enhance thread support
  – Endpoints proposal in the MPI Forum
  – Application threads will be able to perform communication efficiently
  – An endpoint is a communication entity that maps to a thread
    • The idea is to have multiple addressable communication entities within a single process
    • No context switch between application and runtime => better performance
• OpenMP 4.5 is more powerful than just traditional data parallelism
  – Supports task parallelism since OpenMP 3.0
  – Supports heterogeneous computing with accelerator targets since OpenMP 4.0
  – Supports explicit SIMD and thread-affinity pragmas since OpenMP 4.0
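As a minimal illustration (not from the talk; arrays and sizes are made up) of the OpenMP features listed above, namely tasks, explicit SIMD, and an accelerator target region:

  #include <stdio.h>
  #define N 1024

  int main(void) {
      static float a[N], b[N], c[N];

      /* Explicit SIMD loop (OpenMP 4.0) */
      #pragma omp simd
      for (int i = 0; i < N; i++) a[i] = i * 0.5f;

      /* Task parallelism (OpenMP 3.0) */
      #pragma omp parallel
      #pragma omp single
      {
          #pragma omp task
          for (int i = 0; i < N; i++) b[i] = a[i] + 1.0f;
          #pragma omp taskwait
      }

      /* Offload to an accelerator target if one is available (OpenMP 4.0) */
      #pragma omp target map(to: a, b) map(from: c)
      for (int i = 0; i < N; i++) c[i] = a[i] + b[i];

      printf("c[N-1] = %f\n", c[N - 1]);
      return 0;
  }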
MEP-based Design: MVAPICH2 Approach
• Lock-free communication
  – Threads have their own resources
• Dynamically adapt the number of threads
  – Avoid resource contention
  – Depends on the application pattern and system performance
• Both intra-node and inter-node communication
  – Threads boost both channels
• New MEP-aware collectives
• Applicable to the endpoints proposal in MPI-4
[Design diagram: an MPI/OpenMP program issues MPI collectives (MPI_Alltoallv, MPI_Allgatherv, ...) with transparent support and MPI point-to-point calls (MPI_Isend, MPI_Irecv, MPI_Waitall, ...) with an OpenMP pragma; the multi-endpoint runtime adds an endpoint controller, optimized collective and point-to-point algorithms, and lock-free communication components on top of request handling, the progress engine, and communication resource management]
M. Luo, X. Lu, K. Hamidouche, K. Kandalla and D. K. Panda, Initial Study of Multi-Endpoint Runtime for MPI+OpenMP Hybrid Applications on Multi-Core Systems. International Symposium on Principles and Practice of Parallel Programming (PPoPP '14).
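For context, a minimal sketch (not MVAPICH2 internals; ring pattern, tags, and thread count are illustrative) of application threads driving MPI point-to-point calls directly with MPI_THREAD_MULTIPLE; this is the kind of per-thread communication that the multi-endpoint runtime and the MPI-4 endpoints proposal aim to make efficient.

  #include <mpi.h>
  #include <omp.h>

  int main(int argc, char **argv) {
      int provided, rank, size;
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      int peer = (rank + 1) % size;                /* simple ring exchange */

      #pragma omp parallel num_threads(4)
      {
          int tid = omp_get_thread_num();
          double sendbuf = rank + 0.1 * tid, recvbuf = 0.0;
          MPI_Request reqs[2];
          /* Each thread posts its own non-blocking send/receive pair,
             using the thread id as the message tag */
          MPI_Irecv(&recvbuf, 1, MPI_DOUBLE, MPI_ANY_SOURCE, tid, MPI_COMM_WORLD, &reqs[0]);
          MPI_Isend(&sendbuf, 1, MPI_DOUBLE, peer, tid, MPI_COMM_WORLD, &reqs[1]);
          MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
      }
      MPI_Finalize();
      return 0;
  }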
Performance Benefits: OSU Micro-Benchmarks (OMB) Level
[Charts: latency vs. message size for Multi-pairs, Bcast, and Alltoallv, comparing the original multi-threaded runtime, the proposed multi-endpoint runtime, and the process-based runtime]
• Reduces the latency from 40 us to 1.85 us (21X)
• Achieves the same latency as the process-based runtime
• 40% improvement in latency for Bcast on 4,096 cores
• 30% improvement in latency for Alltoall on 4,096 cores
Performance Benefits: Application Kernel Level
[Charts: NAS benchmarks (CG, LU, MG) on 256 nodes (4,096 cores), Class E, and P3DFFT execution time at 2K and 4K cores, comparing the original and MEP runtimes]
• 6.3% improvement for MG, 11.7% improvement for CG, and 12.6% improvement for LU on 4,096 cores
• With P3DFFT, we are able to observe a 30% improvement in communication time and a 13.5% improvement in the total execution time
Enhanced Designs for KNL: MVAPICH2 Approach
• On-load approach
  – Takes advantage of the idle cores
  – Dynamically configurable
  – Takes advantage of highly multithreaded cores
  – Takes advantage of the MCDRAM of KNL processors
• Applicable to other programming models such as PGAS, task-based, etc.
• Provides portability, performance, and applicability to runtime as well as applications in a transparent manner
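As one illustration of the MCDRAM ingredient above, a minimal sketch (assuming the memkind/hbwmalloc library is installed on the KNL node; this is not the MVAPICH2 implementation) of placing a staging buffer in high-bandwidth memory:

  #include <hbwmalloc.h>        /* memkind high-bandwidth-memory API */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void) {
      size_t len = 1 << 20;
      int have_mcdram = (hbw_check_available() == 0);
      /* Place the buffer in MCDRAM when present, otherwise fall back to DDR */
      void *buf = have_mcdram ? hbw_malloc(len) : malloc(len);
      if (!buf) return 1;
      memset(buf, 0, len);                         /* touch the pages */
      printf("buffer allocated in %s\n", have_mcdram ? "MCDRAM" : "DDR");
      if (have_mcdram) hbw_free(buf); else free(buf);
      return 0;
  }

(Link with -lmemkind.)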
Performance Benefits of the Enhanced Designs
• New designs to exploit high concurrency and MCDRAM of KNL
• Significant improvements for large message sizes
• Benefits seen in varying message size as well as varying MPI processes
[Charts: intra-node broadcast with a 64 MB message, 16-process intra-node All-to-All, and very large message bi-directional bandwidth, comparing MVAPICH2 with MVAPICH2-Optimized; improvements of 27%, 17.2%, and up to 52% are highlighted]
Performance Benefits of the Enhanced Designs
[Charts: multi-bandwidth using 32 MPI processes, comparing default and optimized designs on DRAM and MCDRAM (up to 30% improvement), and CNTK MLP training time on MNIST (batch size 64) for different MPI process : OpenMP thread ratios (up to 15% improvement)]
• Benefits observed on the training time of a Multi-Level Perceptron (MLP) model on the MNIST dataset using the CNTK Deep Learning Framework
• Enhanced designs will be available in upcoming MVAPICH2 releases
Outline
• Hybrid MPI+OpenMP Models for Highly-threaded Systems
• Hybrid MPI+PGAS Models for Irregular Applications
• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators
Maturity of Runtimes and Application Requirements
• MPI has been the most popular model for a long time
- Available on every major machine
- Portability, performance and scaling
- Most parallel HPC code is designed using MPI
- Simplicity - structured and iterative communication patterns
• PGAS Models
- Increasing interest in community
- Simple shared memory abstractions and one-sided communication
- Easier to express irregular communication
• Need for hybrid MPI + PGAS
- Application can have kernels with different communication characteristics
- Porting only part of the applications to reduce programming effort
Hybrid (MPI+PGAS) Programming
• Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics
• Benefits:
  – Best of Distributed Computing Model
  – Best of Shared Memory Computing Model
[Diagram: an HPC application composed of kernels 1..N, all originally MPI; selected kernels (e.g., Kernel 2 and Kernel N) are re-written in PGAS while the rest remain MPI]
Current Approaches for Hybrid Programming
• Layering one programming model over another
  – Poor performance due to semantics mismatch
  – MPI-3 RMA tries to address this
• Separate runtime for each programming model
  – Needs more network and memory resources
  – Might lead to deadlock!
[Diagram: hybrid (OpenSHMEM + MPI) applications issuing OpenSHMEM calls and MPI calls into separate OpenSHMEM and MPI runtimes, both running over InfiniBand, RoCE, or iWARP]
The Need for a Unified Runtime
• Deadlock can occur when a message is sitting in one runtime, but the application calls the other runtime
  – Example: P0 issues shmem_int_fadd(data at P1) followed by MPI_Barrier(comm), while P1 performs local computation and then MPI_Barrier(comm); the active message for the fetch-and-add sits in P1's OpenSHMEM runtime while P1 waits inside the MPI runtime
• The prescription to avoid this is to barrier in one mode (either OpenSHMEM or MPI) before entering the other
• Or the runtimes require dedicated progress threads
• Bad performance!!
• Similar issues for MPI + UPC applications over individual runtimes
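A minimal sketch of the barrier-before-switching prescription above, assuming a library that allows MPI and OpenSHMEM in one program (e.g., MVAPICH2-X); the initialization order shown here is illustrative and the library's user guide governs the actual requirements.

  #include <mpi.h>
  #include <shmem.h>
  #include <stdio.h>

  int counter = 0;                                 /* symmetric variable */

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);                      /* init order is illustrative only */
      start_pes(0);
      int me = _my_pe();

      /* OpenSHMEM phase: one-sided atomic on PE 0's counter */
      shmem_int_fadd(&counter, 1, 0);
      shmem_barrier_all();                         /* drain OpenSHMEM traffic before switching */

      /* MPI phase */
      int local = me, sum = 0;
      MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
      MPI_Barrier(MPI_COMM_WORLD);                 /* complete MPI traffic before switching back */

      if (me == 0) printf("counter=%d sum=%d\n", counter, sum);
      MPI_Finalize();
      return 0;
  }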
MVAPICH2-X for Hybrid MPI + PGAS Applications
[Architecture: MPI, OpenSHMEM, UPC, CAF, UPC++ or hybrid (MPI + PGAS) applications issue MPI, OpenSHMEM, UPC, CAF, and UPC++ calls into a unified MVAPICH2-X runtime running over InfiniBand, RoCE, or iWARP]
• Unified communication runtime for MPI, UPC, OpenSHMEM, CAF, UPC++, available with MVAPICH2-X 1.9 onwards! (since 2012)
  – http://mvapich.cse.ohio-state.edu
• Feature highlights
  – Supports MPI(+OpenMP), OpenSHMEM, UPC, CAF, UPC++, MPI(+OpenMP) + OpenSHMEM, MPI(+OpenMP) + UPC
  – MPI-3 compliant, OpenSHMEM v1.0 standard compliant, UPC v1.2 standard compliant (with initial support for UPC 1.3), CAF 2008 standard (OpenUH), UPC++
  – Scalable inter-node and intra-node communication – point-to-point and collectives
OpenSHMEM Atomic Operations: Performance
[Chart: latency (us) of fadd, finc, add, inc, cswap, and swap atomics for UH-SHMEM, MV2X-SHMEM, Scalable-SHMEM, and OMPI-SHMEM]
• OSU OpenSHMEM micro-benchmarks (OMB v4.1)
• MV2-X SHMEM performs up to 40% better compared to UH-SHMEM
UPC Collectives Performance
[Charts: collective time vs. message size at 2,048 processes for Broadcast, Scatter, Gather, and Exchange, comparing UPC-GASNet with UPC-OSU; 2X, 25X, and 35% improvements highlighted on the individual panels]
J. Jose, K. Hamidouche, J. Zhang, A. Venkatesh, and D. K. Panda, Optimizing Collective Communication in UPC (HiPS '14, in association with IPDPS '14)
Performance Evaluations for the CAF Model
[Charts: Put and Get bandwidth vs. message size, Put and Get latency, and NAS-CAF execution time (bt/cg/ep/ft/mg/sp, Class D, 256 processes), comparing GASNet-IBV, GASNet-MPI, and MV2X]
• Micro-benchmark improvement (MV2X vs. GASNet-IBV, UH CAF test-suite)
  – Put bandwidth: 3.5X improvement at 4KB; Put latency: reduced by 29% at 4 bytes
• Application performance improvement (NAS-CAF one-sided implementation)
  – Execution time reduced by 12% (SP.D.256) and 18% (BT.D.256)
J. Lin, K. Hamidouche, X. Lu, M. Li and D. K. Panda, High-performance Co-array Fortran support with MVAPICH2-X: Initial experience and evaluation, HiPS '15
UPC++ Collectives Performance
[Architecture: an MPI + UPC++ application can run either with the UPC++ runtime over GASNet interfaces and an MPI network conduit, or with the UPC++ runtime and MPI interfaces over the MVAPICH2-X Unified Communication Runtime (UCR)]
• Full and native support for hybrid MPI + UPC++ applications
• Better performance compared to the IBV and MPI conduits
• OSU Micro-benchmarks (OMB) support for UPC++
• Available since MVAPICH2-X 2.2RC1
[Chart: inter-node broadcast time vs. message size on 64 nodes (1 ppn), comparing GASNet_MPI, GASNET_IBV, and MV2-X; up to 14x improvement]
J. M. Hashmi, K. Hamidouche, and D. K. Panda, Enabling Performance Efficient Runtime Support for hybrid MPI+UPC++ Programming Models, IEEE International Conference on High Performance Computing and Communications (HPCC 2016)
Application-Level Performance with Graph500 and Sort
Graph500 Execution Time
[Chart: execution time vs. number of processes (4K, 8K, 16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); 7.6X and 13X improvements highlighted]
• Performance of the Hybrid (MPI+OpenSHMEM) Graph500 design
• 8,192 processes
  – 2.4X improvement over MPI-CSR
  – 7.6X improvement over MPI-Simple
• 16,384 processes
  – 1.5X improvement over MPI-CSR
  – 13X improvement over MPI-Simple
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013
Sort Execution Time
[Chart: execution time vs. input data size and number of processes (500GB-512 through 4TB-4K) for MPI and Hybrid; 51% improvement highlighted]
• Performance of the Hybrid (MPI+OpenSHMEM) Sort application
• 4,096 processes, 4 TB input size
  – MPI: 2408 sec (0.16 TB/min)
  – Hybrid: 1172 sec (0.36 TB/min)
  – 51% improvement over the MPI design
J. Jose, S. Potluri, H. Subramoni, X. Lu, K. Hamidouche, K. Schulz, H. Sundar and D. K. Panda, Designing Scalable Out-of-core Sorting with Hybrid MPI+PGAS Programming Models, PGAS '14
MiniMD – Total Execution Time
• Hybrid design performs better than the MPI implementation
• 1,024 processes
  – 17% improvement over the MPI version
• Strong scaling with input size 128 x 128 x 128
[Charts: execution time (ms) vs. number of cores for Hybrid-Barrier, MPI-Original, and Hybrid-Advanced; performance at 512 and 1,024 cores and strong scaling at 256, 512, and 1,024 cores, with the 17% improvement highlighted]
Accelerating MaTEx k-NN with Hybrid MPI and OpenSHMEM
• MaTEx: an MPI-based machine learning algorithm library
• k-NN: a popular supervised algorithm for classification
• Hybrid designs:
  – Overlapped data flow; one-sided data transfer; circular-buffer structure
• Benchmark: KDD Cup 2010 (8,407,752 records, 2 classes, k=5)
• For the truncated KDD workload (30 MB) on 256 cores, execution time is reduced by 27.6%
• For the full KDD workload (2.5 GB) on 512 cores, execution time is reduced by 9.0%
J. Lin, K. Hamidouche, J. Zhang, X. Lu, A. Vishnu, D. Panda, Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM, OpenSHMEM 2015
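A minimal sketch (not the MaTEx code; buffer size, record layout, and PE roles are illustrative) of the one-sided, circular-buffer style transfer named above: a producer reserves a slot on the consumer PE with a fetch-and-add and then deposits its record with a put.

  #include <shmem.h>
  #include <stdio.h>

  #define SLOTS 64
  #define REC   8                                  /* doubles per record */

  double ring[SLOTS][REC];                         /* symmetric circular buffer */
  int    tail = 0;                                 /* symmetric slot counter    */

  int main(void) {
      start_pes(0);
      int me = _my_pe(), npes = _num_pes();
      int consumer = 0;                            /* PE 0 gathers records */

      if (me != 0) {
          double rec[REC];
          for (int i = 0; i < REC; i++) rec[i] = me + 0.01 * i;
          /* Reserve the next slot on the consumer, then deposit the record */
          int slot = shmem_int_fadd(&tail, 1, consumer) % SLOTS;
          shmem_double_put(ring[slot], rec, REC, consumer);
      }
      shmem_barrier_all();                         /* make all deposits visible */

      if (me == 0)
          printf("received %d records from %d producers\n", tail, npes - 1);
      return 0;
  }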
Outline
• Hybrid MPI+OpenMP Models for Highly-threaded Systems
• Hybrid MPI+PGAS Models for Irregular Applications
• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• Standard MPI interfaces used for unified data movement
  At Sender:   MPI_Send(s_devbuf, size, …);
  At Receiver: MPI_Recv(r_devbuf, size, …);
  (s_devbuf and r_devbuf are GPU device buffers; the required data movement happens inside MVAPICH2)
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers
• High performance and high productivity
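A minimal, self-contained sketch of the pattern above (assuming a CUDA-aware MPI build such as MVAPICH2-GDR, at least two ranks, and one GPU per rank); device pointers are passed directly to MPI and the library handles the data movement.

  #include <mpi.h>
  #include <cuda_runtime.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int n = 1 << 20;
      float *devbuf;
      cudaMalloc((void **)&devbuf, n * sizeof(float));   /* GPU device buffer */
      cudaMemset(devbuf, 0, n * sizeof(float));

      if (rank == 0) {
          /* Device pointer goes straight into MPI_Send */
          MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("rank 1 received %d floats into GPU memory\n", n);
      }

      cudaFree(devbuf);
      MPI_Finalize();
      return 0;
  }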
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point communication
(GPU-GPU, GPU-Host and Host-GPU)
• High performance intra-node point-to-point communication for multi-GPU
adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node
communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from
GPU device buffers
• Unified memory
Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)
[Charts: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size, comparing MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; latency as low as 2.18 us, with 2X to 11X improvements highlighted across the panels]
Test setup: MVAPICH2-GDR 2.2; Intel Ivy Bridge (E5-2680 v2) node with 20 cores; NVIDIA Tesla K40c GPU; Mellanox ConnectX-4 EDR HCA; CUDA 8.0; Mellanox OFED 3.0 with GPU-Direct RDMA
Application-Level Evaluation (HOOMD-blue)
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
[Charts: average time steps per second (TPS) vs. number of processes (4, 8, 16, 32) for 64K and 256K particles, comparing MV2 and MV2+GDR; roughly 2X improvement in both cases]
Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
[Charts: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs), comparing the default, callback-based, and event-based designs]
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16
On-going collaboration with CSCS and MeteoSwiss (Switzerland) on co-designing MV2-GDR and the COSMO application
COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
Need for Non-Uniform Memory Allocation in OpenSHMEM for Heterogeneous Architectures
• MIC cores have limited memory per core
• OpenSHMEM relies on symmetric memory, allocated using shmalloc()
• shmalloc() allocates the same amount of memory on all PEs
• For applications running in symmetric mode, this limits the total heap size
• Similar issues for applications (even host-only) with memory load imbalance (Graph500, out-of-core Sort, etc.)
• How to allocate different amounts of memory on host and MIC cores, and still be able to communicate?
[Diagram: host memory vs. MIC memory, showing the smaller memory per core available to MIC cores]
OpenSHMEM Design for MIC Clusters
[Architecture: OpenSHMEM applications use the OpenSHMEM programming model (with proposed extensions and application co-design) on top of the MVAPICH2-X OpenSHMEM runtime, which combines a symmetric memory manager, proxy-based communication, and InfiniBand, SCIF, and shared-memory/CMA channels over InfiniBand networks and multi/many-core architectures with memory heterogeneity]
• Non-uniform memory allocation:
  – Team-based memory allocation (proposed extensions)
  – Address structure for non-uniform memory allocations
Proposed API extensions:
  void shmem_team_create(shmem_team_t team, int *ranks, int size, shmem_team_t *newteam);
  void shmem_team_destroy(shmem_team_t *team);
  void shmem_team_split(shmem_team_t team, int color, int key, shmem_team_t *newteam);
  int shmem_team_rank(shmem_team_t team);
  int shmem_team_size(shmem_team_t team);
  void *shmalloc_team(shmem_team_t team, size_t size);
  void shfree_team(shmem_team_t team, void *addr);
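A minimal usage sketch of the proposed extensions above. These are proposed APIs, not part of the OpenSHMEM standard; the predefined world-team handle (written SHMEM_TEAM_WORLD here), the host/MIC split, and the heap sizes are all illustrative assumptions.

  #include <shmem.h>
  #include <stddef.h>

  int main(void) {
      start_pes(0);
      int me = _my_pe(), npes = _num_pes();

      /* Illustrative assumption: the first half of the PEs run on hosts,
         the second half on MICs; split the world team accordingly */
      int on_host = (me < npes / 2);
      shmem_team_t my_team;
      shmem_team_split(SHMEM_TEAM_WORLD, on_host ? 0 : 1, me, &my_team);

      /* Host PEs allocate a large symmetric region, MIC PEs a small one */
      size_t bytes = on_host ? (1UL << 30) : (1UL << 26);
      void *buf = shmalloc_team(my_team, bytes);

      /* ... communication within and across teams ... */

      shfree_team(my_team, buf);
      shmem_team_destroy(&my_team);
      return 0;
  }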
Proxy-based Designs for OpenSHMEM
• Current-generation architectures impose limitations on read bandwidth when the HCA reads from MIC memory
  – Impacts the performance of both put and get operations
• Solution: pipelined data transfer by a proxy running on the host, using the IB and SCIF channels
• Improves latency and bandwidth!
[Diagrams: OpenSHMEM put and get using the active proxy between HOST1/MIC1 and HOST2/MIC2, each with its own HCA; the flow is (1) IB REQ, (2) a SCIF read by the host-side proxy combined with an IB write, and (3) an IB FIN to complete the operation]
OpenSHMEM Put/Get Performance
[Charts: OpenSHMEM put and get latency vs. message size (1 byte to 4 MB), comparing MV2X-Def and MV2X-Opt; 4.5X and 4.6X improvements at 4 MB]
• Proxy-based designs alleviate the hardware limitations
• Put latency of a 4 MB message: Default 3911 us, Optimized 838 us
• Get latency of a 4 MB message: Default 3889 us, Optimized 837 us
Performance Evaluations using Graph500
[Charts: Graph500 execution time vs. number of processes in native mode (8 procs/MIC, 64-512 processes) and symmetric mode (16 Host + 16 MIC, 128-1,024 processes), comparing MV2X-Def and MV2X-Opt; up to 28% improvement highlighted]
• Graph500 execution time (native mode):
  – 8 processes per MIC node
  – At 512 processes: Default 5.17 s, Optimized 4.96 s
  – Performance improvement from the MIC-aware collectives design
• Graph500 execution time (symmetric mode):
  – 16 processes on each host and MIC node
  – At 1,024 processes: Default 15.91 s, Optimized 12.41 s
  – Performance improvement from the MIC-aware collectives and proxy-based designs
Graph500 Evaluations with Extensions
• Redesigned Graph500 using the MIC to overlap computation and communication
  – Data is transferred to MIC memory; MIC cores pre-process the received data
  – Host processes traverse vertices and send out new vertices
• Graph500 execution time at 1,024 processes:
  – 16 processes on each host and MIC node
  – Host-only: 0.33 s; Host+MIC with extensions: 0.26 s
• Orders-of-magnitude improvement compared to the default symmetric mode
  – Default symmetric mode: 12.1 s; Host+MIC extensions: 0.16 s
[Chart: execution time vs. number of processes (128-1,024) for Host and Host+MIC (extensions); 26% improvement highlighted]
J. Jose, K. Hamidouche, X. Lu, S. Potluri, J. Zhang, K. Tomko and D. K. Panda, High Performance OpenSHMEM for Intel MIC Clusters: Extensions, Runtime Designs and Application Co-Design, IEEE International Conference on Cluster Computing (CLUSTER '14) (Best Paper Finalist)
Looking into the Future ...
• Architectures for Exascale systems are evolving
• Exascale systems will be constrained by
  – Power
  – Memory per core
  – Data movement cost
  – Faults
• Programming models, runtimes and middleware need to be designed for
  – Scalability
  – Performance
  – Fault-resilience
  – Energy-awareness
  – Programmability
  – Productivity
• High Performance and Scalable MPI+X libraries are needed
• Highlighted some of the approaches taken by the MVAPICH2 project
• Need continuous innovation to have the right MPI+X libraries for Exascale systems
Funding Acknowledgments
Funding support and equipment support provided by multiple agencies and vendors (logos shown on the slide).
Personnel Acknowledgments
Current Students
– A. Awan (Ph.D.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
– C.-H. Chu (Ph.D.)
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
Past Research Scientist
– S. Sur
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– S. Guganani (Ph.D.)
– J. Hashmi (Ph.D.)
– N. Islam (Ph.D.)
– M. Li (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Scientists
– K. Hamidouche
– X. Lu
– H. Subramoni
Past Programmers
– D. Bureddy
– M. Arnold
– J. Perkins
Current Research Specialist
– J. Smith
– M. Rahman (Ph.D.)
– D. Shankar (Ph.D.)
– A. Venkatesh (Ph.D.)
– J. Zhang (Ph.D.)
– S. Marcarelli
– J. Vienne
– H. Wang
OSU Team Will be Participating in Multiple Events at SC '16
• Three conference tutorials (IB+HSE, IB+HSE Advanced, Big Data)
• HP-CAST
• Technical papers (SC main conference; Doctoral Showcase; Poster; PDSW-DISC, PAW, COMHPC, and ESPM2 workshops)
• Booth presentations (Mellanox, NVIDIA, NRL, PGAS)
• HPC Connection Workshop
• Will be stationed at the Ohio Supercomputer Center/OH-TECH booth (#1107)
  – Multiple presentations and demos
• More details at http://mvapich.cse.ohio-state.edu/talks/
Thank You!
panda@cse.ohio-state.edu, hamidouch@cse.ohio-state.edu
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
The MVAPICH Project
http://mvapich.cse.ohio-state.edu/
Enhance and Accelerate Your AI and Machine Learning Solution | SIGGRAPH 2019 ...Enhance and Accelerate Your AI and Machine Learning Solution | SIGGRAPH 2019 ...
Enhance and Accelerate Your AI and Machine Learning Solution | SIGGRAPH 2019 ...
 

Recently uploaded

Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 

Recently uploaded (20)

Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 

High Performance MPI+X Library for Emerging HPC Clusters

  • 8. Additional Challenges for Designing Exascale Software Libraries
    • Extreme Low Memory Footprint
      – Memory per core continues to decrease
    • D-L-A Framework
      – Discover
        • Overall network topology (fat-tree, 3D, …), network topology for processes for a given job
        • Node architecture, health of network and node
      – Learn
        • Impact on performance and scalability
        • Potential for failure
      – Adapt
        • Internal protocols and algorithms
        • Process mapping
        • Fault-tolerance solutions
      – Low overhead techniques while delivering performance, scalability and fault-tolerance
  • 9. Overview of the MVAPICH2 Project
    • High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
      – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), started in 2001, first version available in 2002
      – MVAPICH2-X (MPI + PGAS), available since 2011
      – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
      – Support for Virtualization (MVAPICH2-Virt), available since 2015
      – Support for Energy-Awareness (MVAPICH2-EA), available since 2015
      – Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
      – Used by more than 2,690 organizations in 83 countries
      – More than 402,000 (> 0.4 million) downloads from the OSU site directly
      – Empowering many TOP500 clusters (Nov ‘16 ranking)
        • 1st-ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China
        • 13th-ranked 241,108-core cluster (Pleiades) at NASA
        • 17th-ranked 519,640-core cluster (Stampede) at TACC
        • 40th-ranked 76,032-core cluster (Tsubame 2.5) at Tokyo Institute of Technology, and many others
      – Available with software stacks of many vendors and Linux distros (RedHat and SuSE)
      – http://mvapich.cse.ohio-state.edu
    • Empowering Top500 systems for over a decade
      – System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> Sunway TaihuLight at NSC, Wuxi, China (1st in Nov ‘16, 10,649,640 cores, 93 PFlops)
  • 10. Outline
    • Hybrid MPI+OpenMP Models for Highly-threaded Systems
    • Hybrid MPI+PGAS Models for Irregular Applications
    • Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators
  • 11. Highly Threaded Systems
    • Systems like KNL
    • MPI+OpenMP is seen as the best fit
      – 1 MPI process per socket for multi-core
      – 4-8 MPI processes per KNL
      – Each MPI process will launch OpenMP threads
    • However, current MPI runtimes are not “efficiently” handling the hybrid
      – Most of the applications use Funneled mode: only the MPI processes perform communication (a minimal funneled-mode sketch is given after this slide)
      – Communication phases are the bottleneck
    • Multi-endpoint based designs
      – Transparently use threads inside MPI runtime
      – Increase the concurrency
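A minimal sketch of the funneled MPI+OpenMP pattern described on the slide above, assuming a C compiler with OpenMP support and any MPI library (this is illustrative, not MVAPICH2-specific code): OpenMP threads handle the compute phase, while only the main thread makes MPI calls, which is exactly the serialization that the multi-endpoint designs target.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided, rank, nprocs;
        /* Request FUNNELED: threads exist, but only the main thread calls MPI */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        double local = 0.0, global = 0.0;

        /* Compute phase: all OpenMP threads participate */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (i + rank + 1.0);

        /* Communication phase: serialized on the main thread */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) printf("global = %f\n", global);
        MPI_Finalize();
        return 0;
    }

Build and launch in the usual way (for example, mpicc -fopenmp followed by the MPI launcher); the exact commands depend on the installation.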
  • 12. MPI and OpenMP
    • MPI-4 will enhance the thread support
      – Endpoint proposal in the Forum
      – Application threads will be able to efficiently perform communication
      – Endpoint is the communication entity that maps to a thread
        • Idea is to have multiple addressable communication entities within a single process
        • No context switch between application and runtime => better performance
    • OpenMP 4.5 is more powerful than just traditional data parallelism (see the short feature sketch after this slide)
      – Supports task parallelism since OpenMP 3.0
      – Supports heterogeneous computing with accelerator targets since OpenMP 4.0
      – Supports explicit SIMD and threads affinity pragmas since OpenMP 4.0
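An illustrative C snippet (not taken from the talk) of the OpenMP features listed above: tasks, explicit SIMD, and an accelerator target region. Whether the target region actually offloads depends on the compiler and the device available at run time.

    #include <omp.h>
    #include <stdio.h>

    #define N 1024

    int main(void) {
        static float a[N], b[N], c[N];

        /* Task parallelism (OpenMP 3.0+) */
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task
            { for (int i = 0; i < N; i++) a[i] = i * 0.5f; }
            #pragma omp task
            { for (int i = 0; i < N; i++) b[i] = i * 2.0f; }
            #pragma omp taskwait
        }

        /* Explicit SIMD (OpenMP 4.0+) */
        #pragma omp simd
        for (int i = 0; i < N; i++) c[i] = a[i] + b[i];

        /* Accelerator target region (OpenMP 4.0+) */
        #pragma omp target map(tofrom: c[0:N])
        #pragma omp teams distribute parallel for
        for (int i = 0; i < N; i++) c[i] *= 2.0f;

        printf("c[10] = %f\n", c[10]);
        return 0;
    }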
  • 13. MEP-based Design: MVAPICH2 Approach
    • Lock-free communication
      – Threads have their own resources
    • Dynamically adapt the number of threads
      – Avoid resource contention
      – Depends on application pattern and system performance
    • Both intra- and inter-node communication
      – Threads boost both channels
    • New MEP-aware collectives
    • Applicable to the endpoint proposal in MPI-4
    [Architecture figure: an MPI/OpenMP program issues MPI point-to-point calls (MPI_Isend, MPI_Irecv, MPI_Waitall, …; OpenMP pragma needed) and MPI collectives (MPI_Alltoallv, MPI_Allgatherv, …; transparent support) to a multi-endpoint runtime with an endpoint controller, request handling, progress engine, communication-resource management, lock-free communication components, and optimized point-to-point and collective algorithms]
    M. Luo, X. Lu, K. Hamidouche, K. Kandalla and D. K. Panda, Initial Study of Multi-Endpoint Runtime for MPI+OpenMP Hybrid Applications on Multi-Core Systems, International Symposium on Principles and Practice of Parallel Programming (PPoPP '14)
  • 14. Performance Benefits: OSU Micro-Benchmarks (OMB) Level
    [Charts: latency (us) vs. message size for multi-pair point-to-point, Bcast, and Alltoallv, comparing the original multi-threaded runtime, the proposed multi-endpoint runtime, and a process-based runtime]
    • Reduces the latency from 40 us to 1.85 us (21X)
    • Achieves the same as processes
    • 40% improvement on latency for Bcast on 4,096 cores
    • 30% improvement on latency for Alltoall on 4,096 cores
  • 15. Performance Benefits: Application Kernel Level
    [Charts: NAS benchmark time (s) for CG, LU, and MG (Class E) on 256 nodes (4,096 cores), and P3DFFT execution time at 2K and 4K cores, Orig vs. MEP]
    • 6.3% improvement for MG, 11.7% improvement for CG, and 12.6% improvement for LU on 4,096 cores
    • With P3DFFT, we are able to observe a 30% improvement in communication time and a 13.5% improvement in the total execution time
  • 16. Enhanced Designs for KNL: MVAPICH2 Approach
    • On-load approach
      – Takes advantage of the idle cores
      – Dynamically configurable
      – Takes advantage of highly multithreaded cores
      – Takes advantage of MCDRAM of KNL processors (an illustrative MCDRAM allocation sketch is given after this slide)
    • Applicable to other programming models such as PGAS, task-based, etc.
    • Provides portability, performance, and applicability to runtime as well as applications in a transparent manner
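An illustrative sketch of one common way an application or library can place buffers in KNL MCDRAM, using the memkind library's hbwmalloc interface; this is an assumption made for illustration and is not a description of MVAPICH2's internal design. Link with -lmemkind and fall back to regular DRAM when high-bandwidth memory is not present.

    #include <hbwmalloc.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    int main(void) {
        size_t bytes = 64UL * 1024 * 1024;               /* 64 MB staging buffer */
        int have_hbw = (hbw_check_available() == 0);     /* 0 => MCDRAM available */
        void *buf = have_hbw ? hbw_malloc(bytes) : malloc(bytes);

        if (!buf) return 1;
        printf("buffer placed in %s\n", have_hbw ? "MCDRAM" : "DRAM");

        memset(buf, 0, bytes);                           /* touch pages to back the allocation */

        if (have_hbw) hbw_free(buf); else free(buf);
        return 0;
    }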
  • 17. Performance Benefits of the Enhanced Designs
    [Charts: intra-node broadcast with 64 MB messages, 16-process intra-node All-to-All, and very-large-message bi-directional bandwidth, MVAPICH2 vs. MVAPICH2-Optimized; reported improvements of 27%, 17.2%, and 52%]
    • New designs to exploit high concurrency and MCDRAM of KNL
    • Significant improvements for large message sizes
    • Benefits seen with varying message sizes as well as varying numbers of MPI processes
  • 18. Performance Benefits of the Enhanced Designs
    [Charts: CNTK MLP training time using MNIST (batch size 64) for different MPI process : OpenMP thread ratios, default vs. optimized DRAM designs (15% improvement), and multi-bandwidth using 32 MPI processes for DRAM vs. MCDRAM, default vs. optimized (30% improvement)]
    • Benefits observed on the training time of a multi-layer perceptron (MLP) model on the MNIST dataset using the CNTK deep learning framework
    • Enhanced designs will be available in upcoming MVAPICH2 releases
  • 19. Outline
    • Hybrid MPI+OpenMP Models for Highly-threaded Systems
    • Hybrid MPI+PGAS Models for Irregular Applications
    • Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators
  • 20. Maturity of Runtimes and Application Requirements
    • MPI has been the most popular model for a long time
      – Available on every major machine
      – Portability, performance and scaling
      – Most parallel HPC code is designed using MPI
      – Simplicity - structured and iterative communication patterns
    • PGAS models
      – Increasing interest in the community
      – Simple shared-memory abstractions and one-sided communication
      – Easier to express irregular communication
    • Need for hybrid MPI + PGAS
      – Applications can have kernels with different communication characteristics
      – Porting only part of the application reduces programming effort
  • 21. Hybrid (MPI+PGAS) Programming
    • Application sub-kernels can be re-written in MPI/PGAS based on communication characteristics
    • Benefits:
      – Best of Distributed Computing Model
      – Best of Shared Memory Computing Model
    [Figure: an HPC application composed of Kernel 1 (MPI), Kernel 2 (MPI or PGAS), Kernel 3 (MPI), …, Kernel N (MPI or PGAS)]
    (a minimal hybrid MPI+OpenSHMEM sketch is given after this slide)
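A minimal hybrid MPI+OpenSHMEM sketch (illustrative, not from the talk) of the pattern above: a regular kernel uses an MPI collective while an irregular kernel uses one-sided OpenSHMEM puts into symmetric memory. With a unified runtime such as MVAPICH2-X both models are serviced by the same communication library; the buffer names and sizes below are arbitrary.

    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        shmem_init();                               /* start_pes(0) in OpenSHMEM 1.0 */

        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric heap allocation: same size on every PE (shmalloc() in 1.0) */
        long *table = (long *)shmem_malloc(npes * sizeof(long));
        table[me] = me;

        /* "Irregular" kernel: one-sided put of my value to my right neighbor */
        long val = 100 + me;
        shmem_long_put(&table[me], &val, 1, (me + 1) % npes);
        shmem_barrier_all();

        /* "Regular" kernel: MPI collective over the same set of processes */
        long sum = 0;
        MPI_Allreduce(&table[me], &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
        if (me == 0) printf("sum = %ld\n", sum);

        shmem_free(table);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }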
  • 22. Current Approaches for Hybrid Programming
    • Layering one programming model over another
      – Poor performance due to semantics mismatch
      – MPI-3 RMA tries to address this
    • Separate runtime for each programming model
      – Needs more network and memory resources
      – Might lead to deadlock!
    [Figure: hybrid (OpenSHMEM + MPI) applications issue OpenSHMEM calls and MPI calls to separate OpenSHMEM and MPI runtimes, each running over InfiniBand, RoCE, or iWARP]
  • 23. The Need for a Unified Runtime
    [Figure: P0 executes shmem_int_fadd (data at P1); /* operate on data */; MPI_Barrier(comm); while P1 executes /* local computation */; MPI_Barrier(comm); the active message for the fetch-and-add sits in P1’s OpenSHMEM runtime while P1 has already entered the MPI runtime]
    • Deadlock when a message is sitting in one runtime, but the application calls the other runtime (an expanded C sketch of this scenario is given after this slide)
    • Prescription to avoid this is to barrier in one mode (either OpenSHMEM or MPI) before entering the other
    • Or runtimes require dedicated progress threads
    • Bad performance!!
    • Similar issues for MPI + UPC applications over individual runtimes
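An expanded C sketch of the two-process scenario on the slide above (illustrative only). As the slide describes, with two independent runtimes P0's fetch-and-add is carried by an active message that needs P1's OpenSHMEM runtime to make progress, but P1 may already be blocked inside MPI_Barrier, so the program can hang; a unified runtime, or a barrier before switching models, avoids this.

    #include <mpi.h>
    #include <shmem.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        shmem_init();

        static int data = 0;              /* static => symmetric variable */
        int me = shmem_my_pe();

        if (me == 0) {
            shmem_int_fadd(&data, 1, 1);  /* fetch-and-add on data at PE 1 */
            /* operate on data */
        } else {
            /* local computation */
        }

        /* PE 1 may already be waiting here while PE 0's fadd is still pending */
        MPI_Barrier(MPI_COMM_WORLD);

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }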
  • 24. MVAPICH2-X for Hybrid MPI + PGAS Applications
    [Figure: MPI, OpenSHMEM, UPC, CAF, UPC++ or hybrid (MPI + PGAS) applications issue MPI, OpenSHMEM, UPC, CAF, and UPC++ calls to a single unified MVAPICH2-X runtime over InfiniBand, RoCE, or iWARP]
    • Unified communication runtime for MPI, UPC, OpenSHMEM, CAF, UPC++ available with MVAPICH2-X 1.9 onwards! (since 2012)
      – http://mvapich.cse.ohio-state.edu
    • Feature Highlights
      – Supports MPI(+OpenMP), OpenSHMEM, UPC, CAF, UPC++, MPI(+OpenMP) + OpenSHMEM, MPI(+OpenMP) + UPC
      – MPI-3 compliant, OpenSHMEM v1.0 standard compliant, UPC v1.2 standard compliant (with initial support for UPC 1.3), CAF 2008 standard (OpenUH), UPC++
      – Scalable inter-node and intra-node communication – point-to-point and collectives
  • 25. OpenSHMEM Atomic Operations: Performance
    [Chart: time (us) for fadd, finc, add, inc, cswap, and swap with UH-SHMEM, MV2X-SHMEM, Scalable-SHMEM, and OMPI-SHMEM]
    • OSU OpenSHMEM micro-benchmarks (OMB v4.1)
    • MV2-X SHMEM performs up to 40% better compared to UH-SHMEM
  • 26. UPC Collectives Performance
    [Charts: time vs. message size for Broadcast, Scatter, Gather, and Exchange at 2,048 processes, UPC-GASNet vs. UPC-OSU; reported improvements of 2X, 25X, 2X, and 35% across the four collectives]
    J. Jose, K. Hamidouche, J. Zhang, A. Venkatesh, and D. K. Panda, Optimizing Collective Communication in UPC (HiPS’14, in association with IPDPS’14)
  • 27. Performance Evaluations for the CAF Model
    [Charts: put/get latency and bandwidth vs. message size (GASNet-IBV, GASNet-MPI, MV2X), and NAS-CAF execution time for bt.D.256, cg.D.256, ep.D.256, ft.D.256, mg.D.256, and sp.D.256]
    • Micro-benchmark improvement (MV2X vs. GASNet-IBV, UH CAF test-suite)
      – Put bandwidth: 3.5X improvement at 4KB; put latency: reduced by 29% at 4B
    • Application performance improvement (NAS-CAF one-sided implementation)
      – Reduces the execution time by 12% (SP.D.256) and 18% (BT.D.256)
    J. Lin, K. Hamidouche, X. Lu, M. Li and D. K. Panda, High-performance Co-array Fortran support with MVAPICH2-X: Initial experience and evaluation, HIPS’15
  • 28. UPC++ Collectives Performance
    [Figures: the GASNet-based stack (MPI + UPC++ application -> UPC++ runtime -> GASNet interfaces -> MPI network conduit) vs. the MVAPICH2-X stack (MPI + UPC++ application -> UPC++ runtime and MPI interfaces -> MVAPICH2-X Unified Communication Runtime (UCR)); chart: inter-node broadcast time vs. message size on 64 nodes (1 ppn) for GASNet_MPI, GASNET_IBV, and MV2-X, up to 14x improvement]
    • Full and native support for hybrid MPI + UPC++ applications
    • Better performance compared to IBV and MPI conduits
    • OSU Micro-Benchmarks (OMB) support for UPC++
    • Available since MVAPICH2-X 2.2RC1
    J. M. Hashmi, K. Hamidouche, and D. K. Panda, Enabling Performance Efficient Runtime Support for Hybrid MPI+UPC++ Programming Models, IEEE International Conference on High Performance Computing and Communications (HPCC 2016)
  • 29. Application-Level Performance with Graph500 and Sort
    [Charts: Graph500 execution time vs. number of processes (MPI-Simple, MPI-CSC, MPI-CSR, Hybrid MPI+OpenSHMEM), and Sort execution time for input data/process counts from 500GB-512 to 4TB-4K (MPI vs. Hybrid)]
    • Performance of hybrid (MPI+OpenSHMEM) Graph500 design
      – 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
      – 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
    • Performance of hybrid (MPI+OpenSHMEM) Sort application
      – 4,096 processes, 4 TB input size: MPI – 2408 sec (0.16 TB/min); Hybrid – 1172 sec (0.36 TB/min); 51% improvement over the MPI design
    J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013
    J. Jose, S. Potluri, H. Subramoni, X. Lu, K. Hamidouche, K. Schulz, H. Sundar and D. Panda, Designing Scalable Out-of-core Sorting with Hybrid MPI+PGAS Programming Models, PGAS’14
  • 30. MiniMD – Total Execution Time
    [Charts: time (ms) vs. number of cores (256, 512, 1,024) for Hybrid-Barrier, MPI-Original, and Hybrid-Advanced; performance and strong scaling]
    • Hybrid design performs better than the MPI implementation
    • 1,024 processes: 17% improvement over the MPI version
    • Strong scaling, input size: 128 * 128 * 128
  • 31. Accelerating MaTEx k-NN with Hybrid MPI and OpenSHMEM
    • MaTEx: MPI-based machine learning algorithm library
    • k-NN: a popular supervised algorithm for classification
    • Hybrid designs:
      – Overlapped data flow; one-sided data transfer; circular-buffer structure
    • Benchmark: KDD Cup 2010 (8,407,752 records, 2 classes, k=5)
    [Charts: truncated KDD (30MB) on 256 cores – 27.6% improvement; full KDD (2.5GB) on 512 cores – 9.0% improvement]
    • For the truncated KDD workload on 256 cores, reduces execution time by 27.6%
    • For the full KDD workload on 512 cores, reduces execution time by 9.0%
    J. Lin, K. Hamidouche, J. Zhang, X. Lu, A. Vishnu, D. Panda, Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM, OpenSHMEM 2015
  • 32. Outline
    • Hybrid MPI+OpenMP Models for Highly-threaded Systems
    • Hybrid MPI+PGAS Models for Irregular Applications
    • Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators
  • 33. GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
    At Sender: MPI_Send(s_devbuf, size, …);
    At Receiver: MPI_Recv(r_devbuf, size, …);
    (data movement handled inside MVAPICH2)
    • Standard MPI interfaces used for unified data movement
    • Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
    • Overlaps data movement from GPU with RDMA transfers
    • High performance and high productivity (a minimal CUDA-aware sketch is given after this slide)
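A minimal sketch of the pattern above, assuming a CUDA-aware MPI library such as MVAPICH2-GDR (illustrative only; error checking omitted). Device pointers are passed directly to MPI_Send/MPI_Recv and the library performs the GPU-host staging or GPUDirect RDMA transfer internally; with MVAPICH2, GPU support is enabled at run time with MV2_USE_CUDA=1, as on the HOOMD-blue configuration slide later.

    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, n = 1 << 20;                     /* 1M floats */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float *devbuf;
        cudaMalloc((void **)&devbuf, n * sizeof(float));
        cudaMemset(devbuf, 0, n * sizeof(float));

        if (rank == 0) {
            /* Send directly from GPU memory */
            MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive directly into GPU memory */
            MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d floats into device memory\n", n);
        }

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }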
  • 34. CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases
    • Support for MPI communication from NVIDIA GPU device memory
    • High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)
    • High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)
    • Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
    • Optimized and tuned collectives for GPU device buffers
    • MPI datatype support for point-to-point and collective communication from GPU device buffers
    • Unified memory
  • 35. Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)
    [Charts: GPU-GPU inter-node latency (down to 2.18 us), bandwidth, and bi-directional bandwidth vs. message size for MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; reported improvements of roughly 2X to 11X]
    • Testbed: MVAPICH2-GDR 2.2, Intel Ivy Bridge (E5-2680 v2) nodes with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 8.0, Mellanox OFED 3.0 with GPU-Direct RDMA
  • 36. Application-Level Evaluation (HOOMD-blue)
    [Charts: average time steps per second (TPS) vs. number of processes for 64K and 256K particles, MV2 vs. MV2+GDR; ~2X improvement]
    • Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
    • HOOMD-blue version 1.0.5
    • GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
  • 37. Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
    [Charts: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs) for default, callback-based, and event-based designs]
    • 2X improvement on 32 GPU nodes
    • 30% improvement on 96 GPU nodes (8 GPUs/node)
    • On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
    • COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
    C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16
  • 38. Need for Non-Uniform Memory Allocation in OpenSHMEM for Heterogeneous Architectures
    • MIC cores have limited memory per core
    • OpenSHMEM relies on symmetric memory, allocated using shmalloc()
    • shmalloc() allocates the same amount of memory on all PEs
    • For applications running in symmetric mode, this limits the total heap size
    • Similar issues for applications (even host-only) with memory load imbalance (Graph500, out-of-core Sort, etc.)
    • How to allocate different amounts of memory on host and MIC cores, and still be able to communicate?
    [Figure: host memory vs. MIC memory and host cores vs. MIC cores, illustrating the memory available per core]
  • 39. OpenSHMEM Design for MIC Clusters
    [Figure: OpenSHMEM applications use the OpenSHMEM programming model on top of the MVAPICH2-X OpenSHMEM runtime (symmetric memory manager, proxy-based communication, extensions, application co-design) with InfiniBand, SCIF, and shared-memory/CMA channels, targeting multi/many-core architectures with memory heterogeneity over InfiniBand networks]
    • Non-uniform memory allocation:
      – Team-based memory allocation (proposed extensions)
      – Address structure for non-uniform memory allocations
    • Proposed extension API (a hypothetical usage sketch is given after this slide):
      void shmem_team_create(shmem_team_t team, int *ranks, int size, shmem_team_t *newteam);
      void shmem_team_destroy(shmem_team_t *team);
      void shmem_team_split(shmem_team_t team, int color, int key, shmem_team_t *newteam);
      int shmem_team_rank(shmem_team_t team);
      int shmem_team_size(shmem_team_t team);
      void *shmalloc_team(shmem_team_t team, size_t size);
      void shfree_team(shmem_team_t team, void *addr);
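A hypothetical usage sketch of the proposed team-based allocation extensions listed above. These are research extensions, not standard OpenSHMEM, so the parent-team constant (SHMEM_TEAM_WORLD below), the PE-to-device mapping, and the heap sizes are assumptions made purely for illustration: the idea is that host PEs and MIC PEs form separate teams so that each team can allocate a differently sized symmetric block.

    #include <shmem.h>
    #include <stdlib.h>

    int main(void) {
        shmem_init();
        int me = shmem_my_pe();

        /* Assumption for illustration: even PEs run on the host, odd PEs on the MIC */
        int on_host = (me % 2 == 0);

        /* Split the (assumed) world team by color: one team of host PEs, one of MIC PEs */
        shmem_team_t my_team;
        shmem_team_split(SHMEM_TEAM_WORLD, on_host ? 0 : 1, me, &my_team);

        /* Allocate a large block on host PEs and a small one on MIC PEs */
        size_t sz = on_host ? (256UL << 20) : (32UL << 20);
        void *buf = shmalloc_team(my_team, sz);

        /* ... communication among team members and across teams ... */

        shfree_team(my_team, buf);
        shmem_team_destroy(&my_team);
        shmem_finalize();
        return 0;
    }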
  • 40. Proxy-based Designs for OpenSHMEM
    [Figures: OpenSHMEM put and get using an active proxy between HOST1/MIC1 and HOST2/MIC2 through the HCAs – (1) IB REQ, (2) pipelined SCIF read and IB write through the host, (3) IB FIN]
    • Current-generation architectures impose limitations on read bandwidth when the HCA reads from MIC memory
      – Impacts both put and get operation performance
    • Solution: pipelined data transfer by a proxy running on the host using IB and SCIF channels
    • Improves latency and bandwidth!
  • 41. OpenSHMEM Put/Get Performance
    [Charts: OpenSHMEM put and get latency vs. message size, MV2X-Def vs. MV2X-Opt; 4.5X and 4.6X improvement at 4M]
    • Proxy-based designs alleviate hardware limitations
    • Put latency of a 4M message: Default: 3911 us, Optimized: 838 us
    • Get latency of a 4M message: Default: 3889 us, Optimized: 837 us
  • 42. Performance Evaluations using Graph500
    [Charts: execution time vs. number of processes in native mode (8 procs/MIC) and symmetric mode (16 host + 16 MIC), MV2X-Def vs. MV2X-Opt; up to 28% improvement]
    • Graph500 execution time (native mode):
      – 8 processes per MIC node
      – At 512 processes: Default: 5.17 s, Optimized: 4.96 s
      – Performance improvement from the MIC-aware collectives design
    • Graph500 execution time (symmetric mode):
      – 16 processes on each host and MIC node
      – At 1,024 processes: Default: 15.91 s, Optimized: 12.41 s
      – Performance improvement from MIC-aware collectives and proxy-based designs
  • 43. Graph500 Evaluations with Extensions
    [Chart: execution time vs. number of processes, host-only vs. host+MIC with extensions; 26% improvement]
    • Redesigned Graph500 using MIC to overlap computation/communication
      – Data transfer to MIC memory; MIC cores pre-process received data
      – Host processes traverse vertices and send out new vertices
    • Graph500 execution time at 1,024 processes:
      – 16 processes on each host and MIC node
      – Host-only: 0.33 s, Host+MIC with extensions: 0.26 s
    • Magnitudes of improvement compared to default symmetric mode
      – Default symmetric mode: 12.1 s, Host+MIC extensions: 0.16 s
    J. Jose, K. Hamidouche, X. Lu, S. Potluri, J. Zhang, K. Tomko and D. K. Panda, High Performance OpenSHMEM for Intel MIC Clusters: Extensions, Runtime Designs and Application Co-Design, IEEE International Conference on Cluster Computing (CLUSTER '14) (Best Paper Finalist)
  • 44. Looking into the Future …
    • Architectures for Exascale systems are evolving
    • Exascale systems will be constrained by
      – Power
      – Memory per core
      – Data movement cost
      – Faults
    • Programming models, runtimes and middleware need to be designed for
      – Scalability
      – Performance
      – Fault-resilience
      – Energy-awareness
      – Programmability
      – Productivity
    • High-performance and scalable MPI+X libraries are needed
    • Highlighted some of the approaches taken by the MVAPICH2 project
    • Need continuous innovation to have the right MPI+X libraries for Exascale systems
  • 45. Funding Acknowledgments
    Funding support by: [sponsor logos]
    Equipment support by: [vendor logos]
  • 46. Personnel Acknowledgments
    Current Students: A. Awan (Ph.D.), M. Bayatpour (Ph.D.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), N. Islam (Ph.D.), M. Li (Ph.D.), M. Rahman (Ph.D.), D. Shankar (Ph.D.), A. Venkatesh (Ph.D.), J. Zhang (Ph.D.)
    Current Research Scientists: K. Hamidouche, X. Lu, H. Subramoni
    Current Research Specialist: J. Smith
    Past Students: A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), K. Kulkarni (M.S.), R. Kumar (M.S.), S. Krishnamoorthy (M.S.), K. Kandalla (Ph.D.), P. Lai (M.S.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), S. Sur (Ph.D.), H. Subramoni (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)
    Past Research Scientist: S. Sur
    Past Post-Docs: D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang
    Past Programmers: D. Bureddy, M. Arnold, J. Perkins
  • 47. OSU Team Will Be Participating in Multiple Events at SC ‘16
    • Three conference tutorials (IB+HSE, IB+HSE Advanced, Big Data)
    • HP-CAST
    • Technical papers (SC main conference; Doctoral Showcase; Poster; PDSW-DISC, PAW, COMHPC, and ESPM2 workshops)
    • Booth presentations (Mellanox, NVIDIA, NRL, PGAS)
    • HPC Connection Workshop
    • Will be stationed at the Ohio Supercomputer Center/OH-TECH booth (#1107)
      – Multiple presentations and demos
    • More details from http://mvapich.cse.ohio-state.edu/talks/
  • 48. Thank You!
    panda@cse.ohio-state.edu, hamidouch@cse.ohio-state.edu
    Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
    The MVAPICH Project: http://mvapich.cse.ohio-state.edu/