Preparing to program Aurora at Exascale - Early experiences and future directions

www.anl.gov
Argonne Leadership Computing Facility
IWOCL, Apr. 28, 2020
Preparing to Program Aurora at Exascale
Hal Finkel, et al.
Scientific Supercomputing
Computing for large, tightly-coupled problems: lots of computational capability paired with lots of high-performance memory, and high computational density paired with a high-throughput, low-latency network.
What is (traditional) supercomputing?
https://www.alcf.anl.gov/files/alcfscibro2015.pdf
Many Scientific Domains
http://crd.lbl.gov/assets/pubs_presos/CDS/ATG/WassermanSOTON.pdf
Common Algorithm Classes in HPC
Upcoming Hardware
http://www.nextplatform.com/2015/11/30/inside-future-knights-landing-xeon-phi-systems/
https://forum.beyond3d.com/threads/nvidia-pascal-speculation-thread.55552/page-4
“Many Core” CPUs
GPUs
All of our upcoming systems use GPUs!
Toward The Future of Supercomputing
(https://science.osti.gov/-/media/ascr/ascac/pdf/meetings/201909/20190923_ASCAC-Helland-Barbara-Helland.pdf)
Upcoming Systems
Aurora: A High-level View
 Intel-Cray machine arriving at Argonne in 2021
 Sustained performance > 1 exaflop
 Intel Xeon processors and Intel Xe GPUs
    2 Xeons (Sapphire Rapids)
    6 GPUs (Ponte Vecchio [PVC])
 Greater than 10 PB of total memory
 Cray Slingshot fabric and Shasta platform
 Filesystem
    Distributed Asynchronous Object Store (DAOS)
       ≥ 230 PB of storage capacity
       Bandwidth of > 25 TB/s
    Lustre
       150 PB of storage capacity
       Bandwidth of ~1 TB/s
Aurora Compute Node
 2 Intel Xeon (Sapphire Rapids) processors
 6 Xe-architecture-based GPUs (Ponte Vecchio)
    All-to-all connection
    Low latency and high bandwidth
 8 Slingshot fabric endpoints
 Unified Memory Architecture across CPUs and GPUs
Unified memory and GPU ↔ GPU connectivity have important implications for the programming model!
Programming Models (for Aurora)
Three Pillars
Simulation: Directives, Parallel Runtimes, Solver Libraries, HPC Languages
Data: Big Data Stack, Statistical Libraries, Productivity Languages, Databases
Learning: DL Frameworks, Linear Algebra Libraries, Statistical Libraries, Productivity Languages
Common stack underneath all three pillars: Math Libraries, C++ Standard Library, libc; I/O, Messaging; Scheduler; Linux Kernel, POSIX; Compilers, Performance Tools, Debuggers; Containers, Visualization
MPI on Aurora
• Intel MPI & Cray MPI
• MPI 3.0 standard compliant
• The MPI library will be thread safe
  • Allows applications to use MPI from individual threads (see the sketch below the software stack)
  • Efficient MPI_THREAD_MULTIPLE (locking optimizations)
• Asynchronous progress in all types of nonblocking communication
  • Nonblocking send-receive and collectives
  • One-sided operations
• Hardware- and topology-optimized collective implementations
• Supports the MPI tools interface
  • Control variables
Software stack (the same for both MPI implementations): MPICH → CH4 → OFI (libfabric) → Slingshot provider → hardware
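As a concrete, hedged sketch of calling MPI from individual threads with MPI_THREAD_MULTIPLE (the ring exchange, tag scheme, and the assumption that every rank runs the same number of OpenMP threads are illustrative, not from the slides):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
  /* Request full thread support so each thread may call MPI directly. */
  int provided = 0;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  if (provided < MPI_THREAD_MULTIPLE) {
    fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  #pragma omp parallel
  {
    /* Each thread exchanges one message with the next rank in a ring,
       using its thread id as the tag to keep the exchanges separate.
       Assumes every rank uses the same number of threads. */
    int tid = omp_get_thread_num();
    int peer = (rank + 1) % size;
    int sendbuf = rank * 100 + tid, recvbuf = -1;
    MPI_Sendrecv(&sendbuf, 1, MPI_INT, peer, tid,
                 &recvbuf, 1, MPI_INT, MPI_ANY_SOURCE, tid,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Finalize();
  return 0;
}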
Intel Fortran for Aurora
 Fortran 2008
 OpenMP 5
A significant amount of the code run on present-day machines is written in Fortran. Most new code development seems to have shifted to other languages (mainly C++).
oneAPI
Industry specification from Intel (https://www.oneapi.com/spec/):
 Language and libraries to target programming across diverse architectures (DPC++, APIs, low-level interface)
Intel oneAPI products and toolkits (https://software.intel.com/ONEAPI):
 Implementations of the oneAPI specification, plus analysis and debug tools to help programming
Intel MKL – Math Kernel Library
Highly tuned algorithms (a small BLAS sketch follows):
 FFT
 Linear algebra (BLAS, LAPACK)
 Sparse solvers
 Statistical functions
 Vector math
 Random number generators
Optimized for every Intel platform
oneAPI MKL (oneMKL)
 https://software.intel.com/en-us/oneapi/mkl
 The oneAPI beta includes DPC++ support
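To make the BLAS bullet concrete, here is a minimal, hedged sketch using MKL's classic CBLAS interface for a double-precision GEMM; the matrix sizes are illustrative, and the oneMKL DPC++ interface (not shown) exposes equivalent routines on SYCL queues:

#include <mkl.h>      // CBLAS interface shipped with Intel MKL
#include <vector>

int main() {
  const MKL_INT m = 256, n = 256, k = 256;
  std::vector<double> A(m * k, 1.0), B(k * n, 2.0), C(m * n, 0.0);

  // C = 1.0 * A * B + 0.0 * C, row-major storage.
  cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
              m, n, k,
              1.0, A.data(), k,   // lda = k for row-major A (m x k)
              B.data(), n,        // ldb = n for row-major B (k x n)
              0.0, C.data(), n);  // ldc = n for row-major C (m x n)
  return 0;
}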
AI and Analytics
Libraries to support AI and Analytics:
 oneAPI Deep Neural Network Library (oneDNN)
    High-performance primitives to accelerate deep learning frameworks
    Powers TensorFlow, PyTorch, MXNet, Intel Caffe, and more
    Running on Gen9 today (via OpenCL)
 oneAPI Data Analytics Library (oneDAL)
    Classical machine learning algorithms
    Easy-to-use one-line daal4py Python interfaces
    Powers scikit-learn and Apache Spark MLlib
Heterogeneous System Programming Models
Applications will be using a variety of programming models for exascale:
 CUDA
 OpenCL
 HIP
 OpenACC
 OpenMP
 DPC++/SYCL
 Kokkos
 Raja
Not all systems will support all models.
Libraries may help you abstract some programming models.
OpenMP 5
OpenMP 5 constructs will provide a directives-based programming model for Intel GPUs.
Available for C, C++, and Fortran.
A portable model expected to be supported on a variety of platforms (Aurora, Frontier, Perlmutter, …).
Optimized for Aurora.
For Aurora, OpenACC codes could be converted into OpenMP (a before/after sketch follows):
 ALCF staff will assist with conversion, training, and best practices
 Automated translation possible through the clacc conversion tool (for C/C++)
https://www.openmp.org/
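As a hedged illustration of what such a conversion can look like (hand-written, not clacc output; the function names and the simple scaling loop are illustrative):

/* OpenACC version */
void scale_acc(const double *x, double *y, int n) {
  #pragma acc parallel loop copyin(x[0:n]) copyout(y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] = 2.0 * x[i];
}

/* One possible OpenMP 5 offload equivalent */
void scale_omp(const double *x, double *y, int n) {
  #pragma omp target teams distribute parallel for simd map(to: x[0:n]) map(from: y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] = 2.0 * x[i];
}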
OpenMP 4.5/5: for Aurora
The OpenMP 4.5/5 specification has significant updates to allow for improved support of accelerator devices.

Offloading code to run on an accelerator:
#pragma omp target [clause[[,] clause],…]  structured-block
#pragma omp declare target  declarations-definition-seq
#pragma omp declare variant*(variant-func-id) clause new-line  function definition or declaration

Distributing iterations of the loop to threads:
#pragma omp teams [clause[[,] clause],…]  structured-block
#pragma omp distribute [clause[[,] clause],…]  for-loops
#pragma omp loop* [clause[[,] clause],…]  for-loops

Controlling data transfer between devices:
map([map-type:] list), where map-type := alloc | tofrom | from | to | …
#pragma omp target data [clause[[,] clause],…]  structured-block
#pragma omp target update [clause[[,] clause],…]

* denotes OpenMP 5

Environment variables:
• Control the default device through OMP_DEFAULT_DEVICE
• Control offload with OMP_TARGET_OFFLOAD

Runtime support routines (see the sketch below):
• void omp_set_default_device(int dev_num)
• int omp_get_default_device(void)
• int omp_get_num_devices(void)
• int omp_get_num_teams(void)
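A minimal, hedged sketch of how these environment variables and runtime routines combine with a target region (the device choice and array size are illustrative, not from the slides):

#include <omp.h>
#include <stdio.h>

int main(void) {
  int ndev = omp_get_num_devices();
  printf("visible devices: %d (default device: %d)\n",
         ndev, omp_get_default_device());

  if (ndev > 0)
    omp_set_default_device(0);  /* pick the first offload device */

  double x[1024];
  /* Runs on the default device when OMP_TARGET_OFFLOAD permits offload. */
  #pragma omp target teams distribute parallel for map(from: x[0:1024])
  for (int i = 0; i < 1024; ++i)
    x[i] = 2.0 * i;

  printf("x[10] = %f\n", x[10]);
  return 0;
}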
DPC++ (Data Parallel C++) and SYCL
SYCL
 Khronos standard specification
 SYCL is a C++ based abstraction layer (standard C++11)
 Builds on OpenCL concepts (but single-source)
 SYCL is designed to be as close to standard C++ as possible
Current implementations of SYCL:
 ComputeCpp™ (www.codeplay.com)
 Intel SYCL (github.com/intel/llvm)
 triSYCL (github.com/triSYCL/triSYCL)
 hipSYCL (github.com/illuhad/hipSYCL)
 Runs on today's CPUs and NVIDIA, AMD, and Intel GPUs
DPC++
 Part of the Intel oneAPI specification
 Intel extension of SYCL to support new innovative features
 Incorporates the SYCL 1.2.1 specification and Unified Shared Memory
 Adds language or runtime extensions as needed to meet user needs
Intel DPC++ = SYCL 1.2.1 (or later) + C++11 (or later) + extensions
Extensions and descriptions (a USM sketch follows):
 Unified Shared Memory (USM): defines pointer-based memory accesses and management interfaces
 In-order queues: defines simple in-order semantics for queues, to simplify common coding patterns
 Reduction: provides a reduction abstraction for the ND-range form of parallel_for
 Optional lambda name: removes the requirement to manually name lambdas that define kernels
 Subgroups: defines a grouping of work-items within a work-group
 Data flow pipes: enables efficient First-In, First-Out (FIFO) communication (FPGA-only)
https://spec.oneapi.com/oneAPI/Elements/dpcpp/dpcpp_root.html#extensions-table
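A minimal, hedged Unified Shared Memory sketch (assumes a DPC++/oneAPI compiler; the array size and kernel are illustrative):

#include <CL/sycl.hpp>
#include <cstdio>
using namespace sycl;

int main() {
  queue q{default_selector{}};

  const size_t N = 1024;
  // Shared allocation: the same pointer is valid on host and device.
  int *data = malloc_shared<int>(N, q);

  // Unnamed kernel lambda (relies on the optional-lambda-name extension).
  q.parallel_for(range<1>{N}, [=](id<1> i) {
    data[i] = static_cast<int>(i[0]);
  }).wait();

  std::printf("data[42] = %d\n", data[42]);
  free(data, q);
  return 0;
}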
OpenMP 5
Host ↔ device: the target construct transfers data and execution control.
extern void init(float*, float*, int);
extern void output(float*, int);
void vec_mult(float*p, float*v1, float*v2, int N)
{
  int i;
  init(v1, v2, N);
  #pragma omp target teams distribute parallel for simd \
      map(to: v1[0:N], v2[0:N]) map(from: p[0:N])
  for (i=0; i<N; i++)
  {
      p[i] = v1[i]*v2[i];
  }
  output(p, N);
}
Annotations: the target and teams constructs create teams of threads on the target device; distribute parallel for simd distributes iterations to the threads, where each thread uses SIMD parallelism; the map clauses control data transfer.
void foo( int *A ) {
default_selector selector; // Selectors determine which device to dispatch to.
{
queue myQueue(selector); // Create queue to submit work to, based on selector
// Wrap data in a sycl::buffer
buffer<cl_int, 1> bufferA(A, 1024);
 
myQueue.submit([&](handler& cgh) {
//Create an accessor for the sycl buffer.
auto writeResult = bufferA.get_access<access::mode::discard_write>(cgh);
 
// Kernel
cgh.parallel_for<class hello_world>(range<1>{1024}, [=](id<1> idx) {
writeResult[idx] = idx[0];
}); // End of the kernel function
}); // End of the queue commands
} // End of scope, wait for the queued work to stop.
...
}
Annotations: get a device (via the selector); wrap the host pointer in a SYCL buffer; create a data accessor; define the kernel; letting the queue go out of scope waits for the queued work to finish.
SYCL Examples
Host ↔ device: the queue transfers data and execution control.
Performance Portability
A performance-portable application...
1) Is Portable
2) Runs on diverse architectures with reasonable performance
The Development Workflow?
Science Problem → Choose Algorithms → Implement and Test Algorithms → Optimize Algorithms (using knowledge of system architecture and tools) → Run high-performance code!
Real Workflow...
Science Problem → Choose Algorithms for the Target Architectures → Implement and Test Algorithms → Optimize Algorithms (using knowledge of system architecture and tools) → Run high-performance code!
Trade-offs between:
● Basis functions
● Resolution
● Lagrangian vs. Eulerian representations
● Renormalization and regularization schemes
● Solver techniques
● Evolved vs. computed degrees of freedom
● And more…
These trade-offs cannot be made by a compiler!
Does this mean that performance portability is impossible?
No, but it does mean that performance-portable
applications tend to be highly parameterizable.
Performance Portability is Possible!
http://llvm-hpc2-workshop.github.io/slides/Tian.pdf
In 2015, many codes used OpenMP directly to express parallelism. A minority of applications used abstraction libraries (TBB and Thrust on this chart).
On the Usage of Abstract Models
But this is changing…
● We're seeing even greater adoption of OpenMP, but…
● Many applications are not using OpenMP directly; abstraction libraries are gaining in popularity:
  ● Well-established libraries such as TBB and Thrust
  ● RAJA (https://github.com/LLNL/RAJA)
  ● Kokkos (https://github.com/kokkos)
These libraries make use of C++ lambdas and can use OpenMP and/or other compiler directives under the hood, but on GPUs the backend is more likely DPC++/HIP/CUDA (see the sketch below).
On the Usage of Abstract Models
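As a hedged illustration of the lambda-based style these libraries share, a minimal Kokkos parallel_for (the view name, kernel label, and size are illustrative):

#include <Kokkos_Core.hpp>

int main(int argc, char *argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int N = 1 << 20;
    // A 1D view allocated in the default execution space's memory.
    Kokkos::View<double *> y("y", N);

    // The lambda body stays the same; Kokkos dispatches it to the
    // configured backend (OpenMP, CUDA, HIP, SYCL, ...).
    Kokkos::parallel_for("fill_y", N, KOKKOS_LAMBDA(const int i) {
      y(i) = 2.0 * i;
    });
    Kokkos::fence();
  }
  Kokkos::finalize();
  return 0;
}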
And starting with C++17, the standard library has parallel algorithms
too...
// For example:
std::sort(std::execution::par_unseq, vec.begin(), vec.end()); // parallel and vectorized
On the Usage of Abstract Models
Why can't programmers just write the code optimally?
● Because what is optimal is different on different architectures.
● Because programmers use abstraction layers and may not be able to write the optimal code directly:
in library1:
void foo() {
std::for_each(std::execution::par_unseq, vec1.begin(), vec1.end(), ...);
}
in library2:
void bar() {
std::for_each(std::execution::par_unseq, vec2.begin(), vec2.end(), ...);
}
foo(); bar();
Compiler Optimizations for Parallel Code...
void foo(double * restrict a, double * restrict b, etc.) {
#pragma omp parallel for
for (i = 0; i < n; ++i) {
a[i] = e[i]*(b[i]*c[i] + d[i]) + f[i];
m[i] = q[i]*(n[i]*o[i] + p[i]) + r[i];
}
}
void foo(double * restrict a, double * restrict b, etc.) {
#pragma omp parallel for
for (i = 0; i < n; ++i) {
a[i] = e[i]*(b[i]*c[i] + d[i]) + f[i];
}
#pragma omp parallel for
for (i = 0; i < n; ++i) {
m[i] = q[i]*(n[i]*o[i] + p[i]) + r[i];
}
}
Split the loop
Or should we fuse instead?
Compiler Optimizations for Parallel Code...
void foo(double * restrict a, double * restrict b, etc.) {
#pragma omp parallel for
for (i = 0; i < n; ++i) {
a[i] = e[i]*(b[i]*c[i] + d[i]) + f[i];
}
#pragma omp parallel for
for (i = 0; i < n; ++i) {
m[i] = q[i]*(n[i]*o[i] + p[i]) + r[i];
}
}
void foo(double * restrict a, double * restrict b, etc.) {
#pragma omp parallel
{
#pragma omp for
for (i = 0; i < n; ++i) {
a[i] = e[i]*(b[i]*c[i] + d[i]) + f[i];
}
#pragma omp for
for (i = 0; i < n; ++i) {
m[i] = q[i]*(n[i]*o[i] + p[i]) + r[i];
}
}
}
(we might want to fuse
the parallel regions)
Compiler Optimizations for Parallel Code...
Rodinia - hotspot3D
./3D 512 8 100 ../data/hotspot3D/power_512x8 ../data/hotspot3D/temp_512x8
Intel Core i9, 10 cores, 20 threads, 51 runs, with and without:
● aa => alias attribute propagation
● ap => argument privatization
Compiler Understanding Parallelism: It Can Help
(Work by Johannes Doerfert; see our IWOMP 2018 paper.)
The compiler understands parallelism well enough to get better pointer-aliasing results.
It is really hard for compilers to change memory layouts and generally determine
what memory is needed where. The Kokkos C++ library has memory placement and layout policies:
View<const double ***, Layout, Space , MemoryTraits<RandomAccess>> name (...);
https://trilinos.org/oldsite/events/trilinos_user_group_2013/presentations/2013-11-TUG-Kokkos-Tutorial.pdf
Constant random-access data might be put into
texture memory on a GPU, for example.
Using the right memory layout
and placement helps a lot!
Memory Layout and Placement
As you might imagine, nothing is perfect yet...
So Where Does This Leave Us?
Language:
  OpenMP: simple directives have yielded to complicated directives.
  DPC++: modern C++; simple cases will become simpler over time.
  Kokkos / RAJA: modern C++.
Default Execution Model:
  OpenMP: fork-join.
  DPC++: work queue (probably better for expressing scalable parallelism).
  Kokkos / RAJA: fork-join.
Compiler Optimization Potential:
  OpenMP: high.
  DPC++: low (dynamic work queue obscures structure).
  Kokkos / RAJA: medium (greatly depends on the underlying backend).
As you might imagine, nothing is perfect yet...
So Where Does This Leave Us?
Integrate With Highly-Parameterized Code:
  OpenMP: low / medium.  DPC++: high.  Kokkos / RAJA: high.
Helps With Data Layout:
  OpenMP: no.  DPC++: no (not yet).  Kokkos / RAJA: yes.
Good Accelerator-to-Accelerator Transfer / Dispatch:
  OpenMP: no (not yet).  DPC++: no (not yet).  Kokkos / RAJA: no (not yet).
Conclusions
● Future supercomputers will continue to advance scientific progress in a variety of domains.
● Applications will rely on high-performance libraries as well as parallel-programming models.
● DPC++/SYCL will be a critical programming model on future HPC platforms.
● We will continue to understand the extent to which compiler optimizations assist the development of portably-performant applications vs. the ability to explicitly parameterize and dynamically compose the implementations of algorithms.
● Parallel programming models will continue to evolve: support for data layouts and less-host-centric models will be explored.
Acknowledgements
Argonne Leadership Computing Facility and Computational Science Division Staff
This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering and early testbed platforms, in support of the nation's exascale computing imperative.
This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
Thank You