Advancing Applications Performance With InfiniBand
Pak Lui, Application Performance Manager
September 12, 2013
Mellanox Overview
• Leading provider of high-throughput, low-latency server and storage interconnect
  • FDR 56Gb/s InfiniBand and 10/40/56GbE
  • Reduces application wait-time for data
  • Dramatically increases ROI on data center infrastructure
• Company headquarters:
  • Yokneam, Israel; Sunnyvale, California
  • ~1,200 employees* worldwide
• Solid financial position
  • Record revenue in FY12: $500.8M, up 93% year-over-year
  • Q2’13 revenue of $98.2M
  • Q3’13 guidance ~$104M to $109M
  • Cash + investments @ 6/30/13 = $411.3M
Ticker: MLNX
* As of June 2013
Providing End-to-End Interconnect Solutions
ICs, Adapter Cards, Switches/Gateways, Cables/Modules, Long-Haul Systems
Comprehensive End-to-End InfiniBand and Ethernet Solutions Portfolio

Comprehensive End-to-End Software Accelerators and Management:
• Acceleration: MXM (Mellanox Messaging Acceleration), FCA (Fabric Collectives Acceleration)
• Management: UFM (Unified Fabric Management)
• Storage and Data: VSA (Storage Accelerator, iSCSI), UDA (Unstructured Data Accelerator)
Virtual Protocol Interconnect (VPI) Technology
One platform, two wire protocols, selected per port:
• InfiniBand: 10/20/40/56 Gb/s
• Ethernet: 10/40/56 Gb/s
VPI adapters (PCIe 3.0) in adapter card, LOM, and mezzanine card form factors
VPI switches built on a single switch OS layer, with up to 8 VPI subnets and configurations such as:
• 64 ports 10GbE
• 36 ports 40/56GbE
• 48 ports 10GbE + 12 ports 40/56GbE
• 36 ports InfiniBand up to 56Gb/s
Unified Fabric Manager and acceleration engines serving networking, storage, clustering, and management applications
From data center to campus and metro connectivity
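Since both personalities surface through the same verbs interface, software can discover at runtime whether a VPI port came up as InfiniBand or Ethernet. A minimal sketch using standard libibverbs calls, no Mellanox-specific API assumed (link with -libverbs):

```c
/*
 * Minimal sketch: query whether each VPI port came up as InfiniBand
 * or Ethernet. Standard libibverbs calls only.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    for (int d = 0; d < num; d++) {
        struct ibv_context *ctx = ibv_open_device(devs[d]);
        struct ibv_device_attr dattr;
        if (!ctx || ibv_query_device(ctx, &dattr))
            continue;
        for (int p = 1; p <= dattr.phys_port_cnt; p++) {
            struct ibv_port_attr pattr;
            if (ibv_query_port(ctx, p, &pattr))
                continue;
            /* link_layer distinguishes the two VPI personalities;
               older devices may report it as unspecified (InfiniBand). */
            printf("%s port %d: %s\n",
                   ibv_get_device_name(devs[d]), p,
                   pattr.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet" : "InfiniBand");
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```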
MetroDX™ and MetroX™
• MetroX™ and MetroDX™ extend InfiniBand and Ethernet RDMA reach
• Fastest interconnect over 40Gb/s InfiniBand or Ethernet links
• Supporting multiple distances
• Simple management to control distant sites
• Low-cost, low-power, long-haul solution
40Gb/s over Campus and Metro
Data Center Expansion Example – Disaster Recovery
Key Elements in a Data Center Interconnect
[Diagram: servers and storage endpoints, each attached through an adapter and its IC; switches and switch ICs between them; cables, silicon photonics, and parallel optical modules connecting them; applications running on top]
IPtronics and Kotura Complete Mellanox’s 100Gb/s+ Technology
Recent Acquisitions of Kotura and IPtronics Enable Mellanox to Deliver
Complete High-Speed Optical Interconnect Solutions for 100Gb/s and Beyond
Mellanox InfiniBand Paves the Road to Exascale
Meteo France
NASA Ames Research Center Pleiades
• 20K InfiniBand nodes
• Mellanox end-to-end FDR and QDR InfiniBand
• Supports a variety of scientific and engineering projects:
  • Coupled atmosphere-ocean models
  • Future space vehicle design
  • Large-scale dark matter halos and galaxy evolution
Asian Monsoon Water Cycle High-Resolution Climate Simulations
NCAR (National Center for Atmospheric Research)
• “Yellowstone” system
• 72,288 processor cores, 4,518 nodes
• Mellanox end-to-end FDR InfiniBand, full fat-tree (Clos) network, single plane
Applications Performance (Courtesy of the HPC Advisory Council)
[Charts: application performance gains of 284%, 1556%, and 282% on Intel Ivy Bridge platforms, and 182% and 173% on Intel Sandy Bridge platforms]
Applications Performance (Courtesy of the HPC Advisory Council)
[Charts: application performance gains of 392% and 135%]
Dominant in Enterprise Back-End Storage Interconnects
SMB Direct
Leading Interconnect, Leading Performance
[Charts, 2001 through 2017: end-to-end latency falls from 5usec to 2.5usec, 1.3usec, 0.7usec, 0.6usec, and toward <0.5usec, while link bandwidth climbs from 10Gb/s through 20, 40, and 56Gb/s toward 100Gb/s and 200Gb/s, all behind the same software interface]
Architectural Foundation for Exascale Computing
Connect-IB
Connect-IB: The Exascale Foundation
• World’s first 100Gb/s interconnect adapter
  • PCIe 3.0 x16, dual FDR 56Gb/s InfiniBand ports to provide >100Gb/s
• Highest InfiniBand message rate: 137 million messages per second
  • 4X higher than other InfiniBand solutions
• <0.7 microsecond application latency
• Supports GPUDirect RDMA for direct GPU-to-GPU communication
• Unmatchable storage performance
  • 8,000,000 IOPS (1 QP), 18,500,000 IOPS (32 QPs)
• New innovative transport: Dynamically Connected Transport service
• Supports scalable HPC with MPI, SHMEM and PGAS/UPC offloads
Enter the World of Boundless Performance
Connect-IB Memory Scalability
[Chart: host memory consumption (MB, log scale from 1 to 1,000,000,000) at 8, 2K, 10K, and 100K nodes for each adapter/transport generation: InfiniHost with RC (2002), InfiniHost-III with SRQ (2005), ConnectX with XRC (2008), and Connect-IB with DCT (2012)]
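The shape of that curve follows from simple arithmetic: with the reliable-connected (RC) transport every process holds a QP context for every remote process it talks to, while DCT keeps a small, constant pool of DC initiators and targets. The sketch below works through that model with illustrative constants; the per-QP footprint, processes per node, and pool size are assumptions for the example, not measured Mellanox figures:

```c
/*
 * Back-of-the-envelope model of per-node QP memory, RC vs. DCT.
 * All constants are assumptions for illustration, not Mellanox data.
 */
#include <stdio.h>

int main(void)
{
    const double kb_per_qp = 4.0;  /* assumed context footprint per QP */
    const int ppn = 16;            /* assumed processes per node */
    const long nodes[] = {8, 2000, 10000, 100000};

    for (int i = 0; i < 4; i++) {
        long n = nodes[i];
        /* RC: every process keeps one QP per remote process. */
        double rc_qps = (double)ppn * ppn * (n - 1);
        /* DCT: a small constant pool of DC initiators/targets per
           process (8 here, purely illustrative). */
        double dct_qps = (double)ppn * 8;
        printf("%7ld nodes: RC ~%11.1f MB/node, DCT ~%5.3f MB/node\n",
               n, rc_qps * kb_per_qp / 1024.0,
               dct_qps * kb_per_qp / 1024.0);
    }
    return 0;
}
```

Under these assumptions RC grows linearly with fabric size, reaching roughly 100 GB per node at 100K nodes, while DCT stays constant, which is the gap the chart illustrates.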
Dynamically Connected Transport Advantages
FDR InfiniBand Delivers Highest Application Performance
[Chart: message rate in millions of messages per second (scale 0-150), QDR InfiniBand versus FDR InfiniBand]
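Message-rate figures like these are typically produced by a windowed microbenchmark that keeps many small messages in flight at once. Below is a minimal two-rank sketch in that spirit; it is an illustrative miniature of the usual pattern, not the benchmark actually used for the chart:

```c
/*
 * Minimal two-rank message-rate microbenchmark (illustrative only).
 * Run with at least 2 MPI ranks; extra ranks sit idle.
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define WINDOW 64      /* messages kept in flight per iteration */
#define ITERS  10000
#define MSG    8       /* message size in bytes */

int main(int argc, char **argv)
{
    int rank;
    char sbuf[WINDOW * MSG], rbuf[WINDOW * MSG], ack = 0;
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(sbuf, 0, sizeof(sbuf));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(sbuf + w * MSG, MSG, MPI_CHAR, 1, 0,
                          MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            /* Wait for the receiver to ack the whole window. */
            MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(rbuf + w * MSG, MSG, MPI_CHAR, 0, 0,
                          MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }
    double secs = MPI_Wtime() - t0;
    if (rank == 0)
        printf("%.2f million messages/sec\n",
               (double)ITERS * WINDOW / secs / 1e6);
    MPI_Finalize();
    return 0;
}
```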
MXM, FCA
Scalable Communication
Mellanox ScalableHPC Accelerates Parallel Applications
InfiniBand Verbs API
MXM
• Reliable Messaging Optimized for Mellanox HCA
• Hybrid Transport Mechanism
• Efficient Memory Registration
• Receive Side Tag Matching
FCA
• Topology Aware Collective Optimization
• Hardware Multicast
• Separate Virtual Fabric for Collectives
• CORE-Direct Hardware Offload
[Diagram: MPI, SHMEM, and PGAS programming models layered above MXM, FCA, and the InfiniBand Verbs API; under MPI each process (P1, P2, P3) owns private memory, while SHMEM and PGAS present a logical shared memory spanning the processes]
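Everything in the MXM and FCA stacks ultimately drives the InfiniBand Verbs API at the base of the diagram. As a point of reference, the sketch below walks through the minimal verbs objects a messaging library must set up: a protection domain, a registered memory region (the pinning step that makes zero copy possible), a completion queue, and a reliable-connected queue pair. This is an illustration, not MXM's actual code; connection establishment is omitted (link with -libverbs):

```c
/*
 * Minimal verbs setup sketch: protection domain, registered memory,
 * completion queue, and an RC queue pair. Exchanging QP numbers/LIDs
 * out of band to connect the QP is omitted for brevity.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the HCA can DMA to/from it: this pinning
       and translation step is what enables zero-copy transfers. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue plus a reliable-connected queue pair: the
       send/receive work queues an MPI library drives underneath. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq = cq;
    attr.recv_cq = cq;
    attr.qp_type = IBV_QPT_RC;
    attr.cap.max_send_wr  = 16;
    attr.cap.max_recv_wr  = 16;
    attr.cap.max_send_sge = 1;
    attr.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);
    if (!qp) {
        fprintf(stderr, "ibv_create_qp failed\n");
        return 1;
    }
    printf("created QP %u on %s\n", qp->qp_num,
           ibv_get_device_name(devs[0]));

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```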
MXM v2.0 - Highlights
• Transport library integrated with Open MPI, OpenSHMEM, BUPC, MVAPICH2
• More solutions will be added in the future
• Utilizes Mellanox offload engines
• Supported APIs (both sync/async): AM, p2p, atomics, synchronization
• Supported transports: RC, UD, DC, RoCE, SHMEM
• Supported built-in mechanisms: tag matching, progress thread, memory registration cache, fast-path send for small messages, zero copy, flow control
• Supported data transfer protocols: Eager Send/Recv, Eager RDMA, Rendezvous
Mellanox FCA Collective Scalability
[Charts, PPN=8, up to ~2,500 processes, each comparing runs with and without FCA: Barrier Collective latency (0-100 us), Reduce Collective latency (0-3,000 us), and 8-Byte Broadcast bandwidth (0-9,000 KB*processes)]
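Latencies like these are typically gathered with a simple timing loop around each collective. A minimal sketch in the spirit of the usual OSU-style microbenchmarks (not the exact harness used for these charts) is below; running it with and without FCA enabled in the MPI library would reproduce this kind of comparison:

```c
/*
 * Minimal collective-latency timing loop (illustrative only).
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank, size;
    double sendval = 1.0, recvval = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Average barrier latency. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        MPI_Barrier(MPI_COMM_WORLD);
    double barrier_us = (MPI_Wtime() - t0) / ITERS * 1e6;

    /* Average reduce latency. */
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        MPI_Reduce(&sendval, &recvval, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
    double reduce_us = (MPI_Wtime() - t0) / ITERS * 1e6;

    if (rank == 0)
        printf("%d processes: barrier %.1f us, reduce %.1f us\n",
               size, barrier_us, reduce_us);
    MPI_Finalize();
    return 0;
}
```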
FDR InfiniBand Delivers Highest Application Performance
GPU Direct
GPUDirect RDMA
[Diagrams: transmit and receive data paths through CPU, chipset, GPU, GPU memory, system memory, and the InfiniBand adapter, contrasting GPUDirect 1.0, where transfers are staged through system memory, with GPUDirect RDMA, where the InfiniBand adapter reads and writes GPU memory directly]
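With a CUDA-aware MPI built for GPUDirect RDMA (MVAPICH2-GDR in the results that follow), applications simply pass device pointers to MPI calls and the library moves the data along the direct path. A minimal sketch, assuming two ranks each driving one GPU (illustrative only; requires a CUDA-aware MPI and the CUDA runtime):

```c
/*
 * Minimal sketch of a CUDA-aware MPI transfer (illustrative only).
 * Requires an MPI built with CUDA support, e.g. MVAPICH2-GDR.
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    const size_t n = 1 << 20;   /* 1M floats */
    float *dbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaSetDevice(0);           /* one GPU per rank assumed */
    cudaMalloc((void **)&dbuf, n * sizeof(float));

    /* Device pointers go straight into MPI calls; with GPUDirect RDMA
       the HCA reads/writes GPU memory with no host staging copy. */
    if (rank == 0)
        MPI_Send(dbuf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(dbuf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```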
Preliminary Performance of MVAPICH2 with GPUDirect RDMA (Source: Prof. DK Panda)
[Charts, message sizes 1 byte to 4KB, MVAPICH2-1.9 versus MVAPICH2-1.9-GDR:
• GPU-GPU internode MPI latency (lower is better): small-message latency drops from 19.78us to 6.12us, a 69% reduction
• GPU-GPU internode MPI bandwidth (higher is better): 3X increase in small-message throughput]
Preliminary Performance of MVAPICH2 with GPUDirect RDMA
[Chart: execution time of the HSG (Heisenberg Spin Glass) application on 2 GPU nodes]
Source: Prof. DK Panda
Thank You