Collective Operations
Objectives
We would like to:
- Discuss Collective Operations
Notes
- Communicators
- We have seen one communicator, MPI_COMM_WORLD.
- This is probably sufficient for most simple computations.
- But if we want different groups of processes working on different tasks, we need additional communicators.
- We can build a new communicator with MPI_Comm_split:
-
int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)
- comm - the existing communicator.
- color - an integer we wish to split on; each process passes its own value.
- All processes that pass the same color value end up in the same new communicator.
- key - determines the rank order within the new communicator.
- newcomm - the new communicator.
- So suppose we had 16 processes that we wanted to organize by rows (say, a 4 x 4 grid); a condensed sketch is given below.
- I have never done this, so I took some code from https://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/
- The code is in makeComm.cpp
- There are other ways to build a communicator, but this will do for now.
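- A condensed sketch of the row split (my own version of the tutorial idea; the variable names are mine):
-
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // With 16 processes, color = rank / 4 puts ranks 0-3 in row 0,
    // ranks 4-7 in row 1, and so on.  Using world_rank as the key
    // keeps the original rank order inside each row.
    int color = world_rank / 4;
    MPI_Comm row_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &row_comm);

    int row_rank, row_size;
    MPI_Comm_rank(row_comm, &row_rank);
    MPI_Comm_size(row_comm, &row_size);

    printf("world rank %d -> row %d, row rank %d of %d\n",
           world_rank, color, row_rank, row_size);

    MPI_Comm_free(&row_comm);
    MPI_Finalize();
    return 0;
}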
- Collectives
- To do our Monte Carlo pi simulation in MPI we need, at a minimum, a few collectives.
- We will use four of them:
- Broadcast - send a single value from the root to every process (a minimal sketch appears after this list).
- Scatter - send one entry of an array to each process.
- Reduction - combine a single value from every process onto the root with a given operator.
- Gather - bring one entry of an array back from each process.
- To start, either a gather or a reduction will do.
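- Broadcast is the one collective whose signature is not shown below, so here is a minimal sketch (my own example, not from the course code):
-
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Only the root (rank 0) sets n; MPI_Bcast copies it to every process.
    int n = 0;
    if (rank == 0) n = 100;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d has n = %d\n", rank, n);

    MPI_Finalize();
    return 0;
}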
-
int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
               void *recvbuf, int recvcount, MPI_Datatype recvtype,
               int root, MPI_Comm comm)
- reference
- sendbuf - the data to send.
- sendcount - the number of elements to send.
- sendtype - the type of the data being sent.
- recvbuf - where the data will be put (only significant at the root).
- recvcount - how many elements we will receive from each process (not the total).
- recvtype - the type we will receive.
- root - the rank where the data will be collected.
- comm - the communicator, i.e. the group of processes involved.
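- A minimal gather sketch (my own example, not the course code): each process contributes one int and the root collects them into an array.
-
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int myval = rank * rank;                 // each process sends one value
    std::vector<int> all;
    if (rank == 0) all.resize(size);         // recvbuf only matters at the root

    MPI_Gather(&myval, 1, MPI_INT,
               all.data(), 1, MPI_INT,       // recvcount is per process, not total
               0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("from rank %d: %d\n", i, all[i]);

    MPI_Finalize();
    return 0;
}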
-
int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                  void *recvbuf, int recvcount, MPI_Datatype recvtype,
                  MPI_Comm comm)
- reference.
- As before, but with no root argument.
- Everyone gets the data.
- See gather.cpp
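- The same idea with MPI_Allgather (again my own sketch): no root argument, and every process ends up with the full array.
-
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int myval = rank * rank;
    std::vector<int> all(size);              // every process needs the full buffer
    MPI_Allgather(&myval, 1, MPI_INT, all.data(), 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d sees the last entry as %d\n", rank, all[size - 1]);

    MPI_Finalize();
    return 0;
}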
- Scatter
- reference.
-
int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                void *recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm)
- See scatter.cpp
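- A minimal scatter sketch (my own example, not the course code): the root hands one entry of an array to each process.
-
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // sendbuf is only significant at the root, which provides one entry per process.
    std::vector<int> data;
    if (rank == 0) {
        data.resize(size);
        for (int i = 0; i < size; i++) data[i] = 10 * i;
    }

    int mine = 0;
    MPI_Scatter(data.data(), 1, MPI_INT,     // sendcount is per process
                &mine, 1, MPI_INT,
                0, MPI_COMM_WORLD);

    printf("rank %d received %d\n", rank, mine);

    MPI_Finalize();
    return 0;
}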
- Reduce
-
int MPI_Reduce(const void *sendbuf, void *recvbuf, int count,
               MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
-
int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
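- A reduce sketch in the spirit of the Monte Carlo pi simulation mentioned above (my own example, not the course code): each process counts its own hits inside the unit quarter circle, and MPI_Reduce adds the counts onto the root with MPI_SUM. Swapping in MPI_Allreduce (and dropping the root argument) would give every process the total.
-
#include <mpi.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process throws its own darts with its own seed.
    const long trials = 1000000;
    srand(rank + 1);
    long hits = 0;
    for (long i = 0; i < trials; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0) hits++;
    }

    // Sum every process's hit count onto rank 0.
    long total = 0;
    MPI_Reduce(&hits, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is roughly %f\n", 4.0 * total / (trials * size));

    MPI_Finalize();
    return 0;
}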