Parallel Programming using Message Passing Interface (MPI)
metu-ceng
ts@TayfunSen.com
25 April 2008
Outline
•   What is MPI?
•   MPI Implementations
•   OpenMPI
•   MPI
•   References
•   Q&A


What is MPI?
    • A standard with many implementations
      (LAM/MPI and MPICH, evolving into
      OpenMPI and MVAPICH)
    • A message-passing API
    • A library for programming clusters
    • Needs to be high-performing, scalable,
      portable ...


MPI Implementations
    • Is it up for the challenge? MPI does not have
      many alternatives (what about OpenMP,
      map-reduce, etc.?).
    • Many implementations are out there.
    • The programming interface is the same across
      implementations, but the underlying implementations
      differ in what they support in terms of connectivity,
      fault tolerance, etc.
    • On ceng-hpc, both MVAPICH and OpenMPI are
      installed.

OpenMPI
• We'll use OpenMPI for this presentation
• It's open source, MPI-2 compliant, portable,
  has fault tolerance, and combines the best
  practices of a number of other MPI
  implementations.
• To install it, for example on
  Debian/Ubuntu, type:
    # apt-get install openmpi-bin libopenmpi-dev openmpi-doc
MPI – General Information
• Functions start with MPI_* to distinguish them
  from application code
• MPI defines its own data types to abstract
  machine-dependent representations (MPI_CHAR,
  MPI_INT, MPI_BYTE, etc.)



MPI - API and other stuff
• Housekeeping (initialization,
  termination, header file)
• Two types of communication: point-to-point
  and collective communication
• Communicators




Housekeeping
• You include the header mpi.h
• Initialize using MPI_Init(&argc, &argv)
  and finalize MPI using MPI_Finalize()
• Demo time, “hello world!” using MPI (a sketch follows below)




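A minimal sketch of such a “hello world” program (the actual demo source is not included in the slides, so this is an assumed reconstruction). Compile with mpicc hello.c -o hello and launch with, e.g., mpirun -np 4 ./hello:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* set up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id in MPI_COMM_WORLD */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("hello world from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI */
    return 0;
}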
Point-to-point Communication
• Related definitions – source, destination,
  communicator, tag, buffer, data type, count
• man MPI_Send, MPI_Recv
int MPI_Send(void *buf, int count, MPI_Datatype
 datatype, int dest, int tag, MPI_Comm comm)

• Blocking send – the call does not return until
  the message has been handed over to MPI and
  the send buffer can be reused
P2P Communication (cont.)
•     int MPI_Recv(void *buf, int count, MPI_Datatype
      datatype, int source, int tag, MPI_Comm comm,
      MPI_Status *status)

• Source, tag and communicator have to match
  for the message to be received
• Demo time – simple send (see the sketch below)
• One last thing: you can use wildcards in
  place of source and tag:
  MPI_ANY_SOURCE and MPI_ANY_TAG

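A hedged sketch of the “simple send” demo (the tag value 42, the payload 123 and the two-process layout are assumptions, not the original demo code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 123;
        MPI_Send(&value, 1, MPI_INT, 1, 42, MPI_COMM_WORLD);           /* to rank 1, tag 42 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 42, MPI_COMM_WORLD, &status);  /* from rank 0, tag 42 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}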
P2P Communication (cont.)
• The receiver does not actually know in advance
  how much data it will receive; it posts a receive
  for the largest count it expects.
• To be sure of how much was actually received,
  one can use:
•     int MPI_Get_count(MPI_Status *status, MPI_Datatype
      dtype, int *count);

• Demo time – change simple send to
  check the received message size (a sketch follows below).
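A sketch of such a check, assuming the MPI setup from the earlier examples and an illustrative maximum of 100 ints:

int buf[100], received;
MPI_Status status;

/* accept up to 100 ints from any source, any tag */
MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);

/* ask the status object how many MPI_INTs actually arrived, and from whom */
MPI_Get_count(&status, MPI_INT, &received);
printf("received %d ints from rank %d (tag %d)\n",
       received, status.MPI_SOURCE, status.MPI_TAG);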
P2P Communication (cont.)
• For a receive operation, communication is complete when
  the message has been copied into the local buffer.
• For a send operation, communication is complete
  when the message has been handed over to MPI for
  sending (so that the send buffer can be reused).
• Blocking operations return only once the
  communication has completed in this sense.
• Beware – there are some intricacies;
  check [2] for more information.

P2P Communication (cont.)
• For blocking communications, deadlock is a
  possibility:
if( myrank == 0 ) {
    /* Receive, then send a message */
    MPI_Recv( b, 100, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &status );
    MPI_Send( a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD );
}
else if( myrank == 1 ) {
    /* Receive, then send a message */
    MPI_Recv( b, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status );
    MPI_Send( a, 100, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD );
}

• How to remove the deadlock? (one possible fix is sketched below)
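The slides leave the answer open. One common fix, sketched here as a suggestion rather than the presenter's own solution, is to let MPI pair the two operations with MPI_Sendrecv so neither rank sits blocked in MPI_Recv first (another option is simply to swap the send/receive order on one of the ranks):

int other   = (myrank == 0) ? 1 : 0;
int sendtag = (myrank == 0) ? 17 : 19;   /* tags chosen to match the example above */
int recvtag = (myrank == 0) ? 19 : 17;

/* send and receive are paired internally, so no rank blocks waiting for the other */
MPI_Sendrecv( a, 100, MPI_DOUBLE, other, sendtag,
              b, 100, MPI_DOUBLE, other, recvtag,
              MPI_COMM_WORLD, &status );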
P2P Communication (cont.)
• When non-blocking communication is used, the
  program continues its execution immediately
• A program can use a blocking send while the
  receiver uses a non-blocking receive,
  or vice versa.
• Very similar function calls
int MPI_Isend(void *buf, int count, MPI_Datatype dtype, int dest,
   int tag, MPI_Comm comm, MPI_Request *request);

• The request handle can be used later,
  e.g. with MPI_Wait, MPI_Test ...
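A hedged sketch of how the request handles get used (variable names and the two-rank layout are assumptions; this is not the actual non_blocking demo): post the receive and send, overlap some work, then wait on both requests:

MPI_Request reqs[2];
MPI_Status  stats[2];
int other = (myrank == 0) ? 1 : 0;
int send_val = myrank, recv_val = -1;

MPI_Irecv(&recv_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(&send_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

/* ... do useful computation here while the transfers progress ... */

MPI_Waitall(2, reqs, stats);   /* both transfers are complete after this returns */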
P2P Communication (cont.)
• Demo time – non_blocking
• There are other modes of sending
  (but not receiving!); check out the
  documentation for the synchronous,
  buffered and ready-mode sends in
  addition to the standard one we have
  seen here.


P2P Communication (cont.)
• Keep in mind that each send/receive is costly
  – try to piggyback
• You can send different data types at the same
  time – e.g. integers, floats, characters,
  doubles ... – using MPI_Pack. This function packs
  them into an intermediate buffer which you then
  send (sketched below).
•     int MPI_Pack(void *inbuf, int incount, MPI_Datatype
      datatype, void *outbuf, int outsize, int *position,
      MPI_Comm comm)
•     MPI_Send(buffer, count, MPI_PACKED, dest, tag,
      MPI_COMM_WORLD);

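A sketch of packing two values into one message; the buffer size, destination rank and tag are illustrative assumptions. On the receiving side, MPI_Unpack walks the same buffer in the same order:

char   buffer[256];
int    position = 0;
int    dest = 1, tag = 0;     /* illustrative destination rank and tag */
int    i = 7;
double x = 3.14;

/* pack an int and a double back to back into 'buffer' */
MPI_Pack(&i, 1, MPI_INT,    buffer, sizeof(buffer), &position, MPI_COMM_WORLD);
MPI_Pack(&x, 1, MPI_DOUBLE, buffer, sizeof(buffer), &position, MPI_COMM_WORLD);

/* 'position' now holds the number of packed bytes to send */
MPI_Send(buffer, position, MPI_PACKED, dest, tag, MPI_COMM_WORLD);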
P2P Communication (cont.)
• You can also send your own structs
  (user-defined types). See the
  documentation; a sketch follows below.




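One way to do this, sketched here as an assumption since the slides defer to the documentation, is MPI_Type_create_struct (offsetof comes from <stddef.h>; the particle struct is purely illustrative):

struct particle { int id; double pos[3]; };

MPI_Datatype particle_type;
int          blocklens[2] = { 1, 3 };
MPI_Aint     displs[2]    = { offsetof(struct particle, id),
                              offsetof(struct particle, pos) };
MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

MPI_Type_create_struct(2, blocklens, displs, types, &particle_type);
MPI_Type_commit(&particle_type);
/* particle_type can now be used as the datatype in MPI_Send/MPI_Recv */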
Collective Communication
• Works like point-to-point, except the
  operation involves all processors in the
  communicator
• MPI_Barrier(comm) blocks until every
  processor has called it – it synchronizes
  everyone.
• The broadcast operation MPI_Bcast copies
  a data value from one processor to the
  others.
• Demo time - bcast_example (sketched below)
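A hedged sketch in the spirit of bcast_example (the value 42 and the rank/size setup from the hello-world sketch are assumptions):

int value = 0;
if (rank == 0)
    value = 42;                            /* only the root has the data initially */

MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
printf("rank %d now has value %d\n", rank, value);   /* prints 42 on every rank */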
Collective Communication
• MPI_Reduce collects data from the other
  processors, applies a reduction operation
  to it, and returns a single value on the
  root
• Demo time – reduce_op example (sketched below)
• There are MPI-defined reduce
  operations, but you can also define your
  own
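A hedged sketch in the spirit of the reduce_op example, summing one assumed value per rank onto rank 0:

int local = rank + 1, total = 0;

/* combine every rank's 'local' with MPI_SUM; the result lands on rank 0 */
MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
if (rank == 0)
    printf("sum over all ranks: %d\n", total);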
Collective Communication - MPI_Gather
• Gather and scatter operations – they do
  what their names imply
• Gather – as if every process sends its send
  buffer and the root process receives them
  all (sketched below)
• Demo time - gather_example



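A hedged sketch in the spirit of gather_example; the per-rank value and the fixed 64-entry receive buffer are assumptions:

int myval = rank * rank;   /* each rank contributes one value */
int all[64];               /* receive buffer, only meaningful on the root (assumes <= 64 ranks) */

MPI_Gather(&myval, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);
if (rank == 0)
    printf("root gathered %d values, last one is %d\n", size, all[size - 1]);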
Collective Communication - MPI_Scatter
• Similar to MPI_Gather, but here data
  is sent from the root to the other processors
• Like gather, you could accomplish it by
  having the root call MPI_Send
  repeatedly and the others call
  MPI_Recv
• Demo time – scatter_example (sketched below)


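A hedged sketch in the spirit of scatter_example; the 64-entry source buffer and the 10*i values are assumptions:

int chunk, all[64];                        /* source buffer, filled on the root (assumes <= 64 ranks) */
if (rank == 0)
    for (int i = 0; i < size; i++)
        all[i] = 10 * i;                   /* one value destined for each rank */

MPI_Scatter(all, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);
printf("rank %d got %d\n", rank, chunk);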
Collective Communication – More functionality
• Many more functions to lift the hard work
  from you:
• MPI_Allreduce, MPI_Gatherv,
  MPI_Scan, MPI_Reduce_Scatter ...
• Check out the API documentation
• Manual pages are your best friend.



Communicators
• Communicators group processors
• The basic communicator
  MPI_COMM_WORLD is defined for all
  processors
• You can create your own
  communicators to group processors,
  so you can send messages to only
  a subset of all processors (sketched below).
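A hedged sketch (not from the slides) of creating such a sub-communicator with MPI_Comm_split, putting the lower and upper halves of the ranks into separate groups:

MPI_Comm half;
int color = (rank < size / 2) ? 0 : 1;     /* which sub-group this rank joins */

MPI_Comm_split(MPI_COMM_WORLD, color, rank, &half);
/* collectives and point-to-point sends on 'half' now involve only ranks with the same color */
MPI_Comm_free(&half);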
More Advanced Stuff
• Parallel I/O – using a single node for
  reading from disk is slow; you
  can have each node use its local disk.
• One-sided communications – remote
  memory access
• Both are MPI-2 capabilities. Check
  your MPI implementation to see how
  much of them is implemented.
References
[1] Wikipedia articles in general, including but not limited to:
http://en.wikipedia.org/wiki/Message_Passing_Interface
[2] An excellent guide at NCSA (National Center for
   Supercomputing Applications):
http://webct.ncsa.uiuc.edu:8900/public/MPI/
[3] OpenMPI Official Web site:
http://www.open-mpi.org/




The End

Thanks for your time. Any questions?
