int array[100];
int root=0;
...
MPI_Bcast( array, 100, MPI_INT, root, comm);
As in many of our example code fragments, we assume that some of the variables (such as comm in the above) have been assigned appropriate values.
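For completeness, a fragment such as the one above might be embedded in a program along the following lines; the use of MPI_COMM_WORLD and the initialization of array at the root are illustrative assumptions rather than part of the example.

#include <mpi.h>

int main( int argc, char *argv[] )
{
    MPI_Comm comm;
    int array[100];
    int root = 0, rank, i;

    MPI_Init( &argc, &argv );
    comm = MPI_COMM_WORLD;            /* illustrative choice of communicator */
    MPI_Comm_rank( comm, &rank );

    if (rank == root)                 /* only the root's buffer contents matter */
        for (i = 0; i < 100; ++i)
            array[i] = i;

    MPI_Bcast( array, 100, MPI_INT, root, comm );  /* all processes now hold array */

    MPI_Finalize();
    return 0;
}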
5.5 Gather
MPI_GATHER( sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)
IN    sendbuf      starting address of send buffer (choice)
IN    sendcount    number of elements in send buffer (non-negative integer)
IN    sendtype     data type of send buffer elements (handle)
OUT   recvbuf      address of receive buffer (choice, significant only at root)
IN    recvcount    number of elements for any single receive (non-negative integer, significant only at root)
IN    recvtype     data type of recv buffer elements (significant only at root) (handle)
IN    root         rank of receiving process (integer)
IN    comm         communicator (handle)
int MPI_Gather(void* sendbuf, int sendcount, MPI_Datatype sendtype,
void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR
{ void MPI::Comm::Gather(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, int recvcount, const MPI::Datatype& recvtype, int root) const = 0 (binding deprecated, see Section 15.2) }
If comm is an intracommunicator, each process (root process included) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order. The outcome is as if each of the n processes in the group (including the root process) had executed a call to

MPI_Send(sendbuf, sendcount, sendtype, root, ...),
and the root had executed n calls to

MPI_Recv(recvbuf + i·recvcount·extent(recvtype), recvcount, recvtype, i, ...),

where extent(recvtype) is the type extent obtained from a call to MPI_Type_get_extent().

An alternative description is that the n messages sent by the processes in the group are concatenated in rank order, and the resulting message is received by the root as if by a call to MPI_RECV(recvbuf, recvcount·n, recvtype, ...).

The receive buffer is ignored for all non-root processes.

General, derived datatypes are allowed for both sendtype and recvtype. The type signature of sendcount, sendtype on each process must be equal to the type signature of recvcount, recvtype at the root. This implies that the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are still allowed.

All arguments to the function are significant on process root, while on other processes, only arguments sendbuf, sendcount, sendtype, root, and comm are significant. The arguments root and comm must have identical values on all processes.

The specification of counts and types should not cause any location on the root to be written more than once. Such a call is erroneous.

Note that the recvcount argument at the root indicates the number of items it receives from each process, not the total number of items it receives.

The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE as the value of sendbuf at the root. In such a case, sendcount and sendtype are ignored, and the contribution of the root to the gathered vector is assumed to be already in the correct place in the receive buffer.

If comm is an intercommunicator, then the call involves all processes in the intercommunicator, but with one group (group A) defining the root process. All processes in the other group (group B) pass the same value in argument root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other processes in group A pass the value MPI_PROC_NULL in root. Data is gathered from all processes in group B to the root. The send buffer arguments of the processes in group B must be consistent with the receive buffer argument of the root.
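The intercommunicator variant is not shown in the examples later in this section, which all use intracommunicators. The following fragment is a rough sketch of its argument conventions; how the intercommunicator intercomm is obtained, the flag in_group_A, and all counts are assumptions made only for illustration.

/* Sketch of MPI_GATHER over an intercommunicator; intercomm, in_group_A and
 * the counts are illustrative assumptions, not taken from the text above.  */
MPI_Comm intercomm;        /* assumed to be a valid intercommunicator        */
int in_group_A;            /* assumed nonzero on the processes of group A    */
int myrank, remote_size, root_in_A = 0;
int sendarray[100], *rbuf = NULL;

MPI_Comm_rank( intercomm, &myrank );

if (in_group_A) {
    if (myrank == root_in_A) {
        /* the gathering root passes MPI_ROOT */
        MPI_Comm_remote_size( intercomm, &remote_size );
        rbuf = (int *)malloc(remote_size*100*sizeof(int));
        MPI_Gather(NULL, 0, MPI_INT, rbuf, 100, MPI_INT, MPI_ROOT, intercomm);
    } else {
        /* the remaining processes of group A pass MPI_PROC_NULL */
        MPI_Gather(NULL, 0, MPI_INT, NULL, 0, MPI_INT, MPI_PROC_NULL, intercomm);
    }
} else {
    /* every process of group B passes the root's rank within group A */
    MPI_Gather(sendarray, 100, MPI_INT, NULL, 0, MPI_INT, root_in_A, intercomm);
}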
MPI_GATHERV( sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)
IN    sendbuf      starting address of send buffer (choice)
IN    sendcount    number of elements in send buffer (non-negative integer)
IN    sendtype     data type of send buffer elements (handle)
OUT   recvbuf      address of receive buffer (choice, significant only at root)
IN    recvcounts   non-negative integer array (of length group size) containing the number of elements that are received from each process (significant only at root)
IN    displs       integer array (of length group size). Entry i specifies the displacement relative to recvbuf at which to place the incoming data from process i (significant only at root)
IN    recvtype     data type of recv buffer elements (significant only at root) (handle)
IN    root         rank of receiving process (integer)
IN    comm         communicator (handle)
int MPI_Gatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, int root, MPI_Comm comm)
MPI_GATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, ROOT, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, ROOT, COMM, IERROR
{ void MPI::Comm::Gatherv(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, const int recvcounts[], const int displs[], const MPI::Datatype& recvtype, int root) const = 0 (binding deprecated, see Section 15.2) }
MPI_GATHERV extends the functionality of MPI_GATHER by allowing a varying count of data from each process, since recvcounts is now an array. It also allows more flexibility as to where the data is placed on the root, by providing the new argument, displs.
If comm is an intracommunicator, the outcome is as if each process, including the root process, sends a message to the root,
MPI_Send(sendbuf, sendcount, sendtype, root, ...),
and the root executes n receives,
MPI_Recv(recvbuf + displs[j]·extent(recvtype), recvcounts[j], recvtype, j, ...).
The data received from process j is placed into recvbuf of the root process beginning at offset displs[j] elements (in terms of the recvtype).

The receive buffer is ignored for all non-root processes.

The type signature implied by sendcount, sendtype on process i must be equal to the type signature implied by recvcounts[i], recvtype at the root. This implies that the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are still allowed, as illustrated in Example 5.6.

All arguments to the function are significant on process root, while on other processes, only arguments sendbuf, sendcount, sendtype, root, and comm are significant. The arguments root and comm must have identical values on all processes.

The specification of counts, types, and displacements should not cause any location on the root to be written more than once. Such a call is erroneous.

The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE as the value of sendbuf at the root. In such a case, sendcount and sendtype are ignored, and the contribution of the root to the gathered vector is assumed to be already in the correct place in the receive buffer.

If comm is an intercommunicator, then the call involves all processes in the intercommunicator, but with one group (group A) defining the root process. All processes in the other group (group B) pass the same value in argument root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other processes in group A pass the value MPI_PROC_NULL in root. Data is gathered from all processes in group B to the root. The send buffer arguments of the processes in group B must be consistent with the receive buffer argument of the root.
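None of the examples that follow exercise the in-place option described above. A minimal sketch for MPI_GATHERV, assuming each process contributes 100 ints and the root's own block already sits at offset displs[root] of rbuf, is shown below; the counts and the choice of MPI_COMM_WORLD are illustrative assumptions, not part of the standard text.

/* Sketch of MPI_GATHERV with MPI_IN_PLACE at the root (illustrative sizes). */
MPI_Comm comm;
int gsize, myrank, root = 0, i;
int sendarray[100], *rbuf = NULL, *rcounts, *displs;

comm = MPI_COMM_WORLD;                 /* illustrative choice of communicator */
MPI_Comm_size( comm, &gsize );
MPI_Comm_rank( comm, &myrank );
for (i=0; i<100; ++i) sendarray[i] = myrank;

rcounts = (int *)malloc(gsize*sizeof(int));
displs  = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) { rcounts[i] = 100; displs[i] = i*100; }

if (myrank == root) {
    rbuf = (int *)malloc(gsize*100*sizeof(int));
    /* the root's contribution is placed directly into its block of rbuf */
    for (i=0; i<100; ++i) rbuf[displs[root]+i] = sendarray[i];
    /* sendbuf, sendcount and sendtype are ignored at the root */
    MPI_Gatherv(MPI_IN_PLACE, 0, MPI_INT,
                rbuf, rcounts, displs, MPI_INT, root, comm);
} else {
    /* recvbuf, recvcounts, displs and recvtype are significant only at the root */
    MPI_Gatherv(sendarray, 100, MPI_INT,
                NULL, NULL, NULL, MPI_INT, root, comm);
}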
5.5.1 Examples using MPI_GATHER, MPI_GATHERV
The examples in this section use intracommunicators.
Example 5.2 Gather 100 ints from every process in group to root. See Figure 5.4.
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf;
...
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
Figure 5.4: The root process gathers 100 ints from each process in the group.

Example 5.3 Previous example modified: only the root allocates memory for the receive buffer.

MPI_Comm comm;
int gsize,sendarray[100];
int root, myrank, *rbuf;
...
MPI_Comm_rank( comm, &myrank);
if ( myrank == root) {
    MPI_Comm_size( comm, &gsize);
    rbuf = (int *)malloc(gsize*100*sizeof(int));
}
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
Example 5.4 Do the same as the previous example, but use a derived datatype. Note that the type cannot be the entire set of gsize*100 ints since type matching is defined pairwise between the root and each process in the gather.
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf;
MPI_Datatype rtype;
...
MPI_Comm_size( comm, &gsize);
MPI_Type_contiguous( 100, MPI_INT, &rtype );
MPI_Type_commit( &rtype );
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 1, rtype, root, comm);
Example 5.5 Now have each process send 100 ints to root, but place each set (of 100) stride ints apart at receiving end. Use MPI_GATHERV and the displs argument to achieve this effect. Assume stride ≥ 100. See Figure 5.5.
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf, stride;
int *displs,i,*rcounts;
...
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
    displs[i] = i*stride;
    rcounts[i] = 100;
}
MPI_Gatherv( sendarray, 100, MPI_INT, rbuf, rcounts, displs, MPI_INT,
             root, comm);

Figure 5.5: The root process gathers 100 ints from each process in the group; each set is placed stride ints apart.
Note that the program is erroneous if stride < 100, since the blocks of 100 ints received from successive processes would then overlap in rbuf.
Example 5.6 Same as Example 5.5 on the receiving side, but send the 100 ints from the 0th column of a 100×150 int array, in C. See Figure 5.6.
MPI_Comm comm;
int gsize,sendarray[100][150];
int root, *rbuf, stride;
MPI_Datatype stype;
int *displs,i,*rcounts;
...
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
    displs[i] = i*stride;
    rcounts[i] = 100;
}
/* Create datatype for 1 column of array */
MPI_Type_vector( 100, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
MPI_Gatherv( sendarray, 1, stype, rbuf, rcounts, displs, MPI_INT, root, comm);
Figure 5.6: The root process gathers column 0 of a 100×150 C array, and each set is placed stride ints apart.

Example 5.7 Process i sends (100-i) ints from the i-th column of a 100×150 int array, in C. It is received into a buffer with stride, as in the previous two examples. See Figure 5.7.
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank;
MPI_Datatype stype;
int *displs,i,*rcounts;
...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
    displs[i] = i*stride;
    rcounts[i] = 100-i;     /* note change from previous example */
}
/* Create datatype for the column we are sending */
MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
/* sptr is the address of start of "myrank" column */
sptr = &sendarray[0][myrank];
MPI_Gatherv( sptr, 1, stype, rbuf, rcounts, displs, MPI_INT, root, comm);
Note that a different amount of data is received from each process.
Figure 5.7: The root process gathers 100-i ints from column i of a 100×150 C array, and each set is placed stride ints apart.

Example 5.8 Same as Example 5.7, but done in a different way at the sending end. We create a datatype that causes the correct striding at the sending end so that we read a column of a C array. A similar thing was done in Example 4.16, Section 4.1.14.

MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank, disp[2], blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts;
...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
    displs[i] = i*stride;
    rcounts[i] = 100-i;
}
/* Create datatype for one int, with extent of entire row */
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_create_struct( 2, blocklen, disp, type, &stype );
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv( sptr, 100-myrank, stype, rbuf, rcounts, displs, MPI_INT,
             root, comm);
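A datatype with the extent of an entire row can also be built with MPI_TYPE_CREATE_RESIZED instead of the MPI_UB marker used above. The following fragment reuses the variable names of Example 5.8; the substitution is an illustration, not part of the original example.

/* Alternative construction of a one-int datatype whose extent is a whole
 * 150-int row, using MPI_Type_create_resized instead of the MPI_UB marker.
 * Variable names follow Example 5.8; this fragment is illustrative only.   */
MPI_Datatype rowint;
MPI_Type_create_resized( MPI_INT, 0, 150*sizeof(int), &rowint );
MPI_Type_commit( &rowint );
sptr = &sendarray[0][myrank];
MPI_Gatherv( sptr, 100-myrank, rowint, rbuf, rcounts, displs, MPI_INT,
             root, comm);
MPI_Type_free( &rowint );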
Example 5.9 Same as Example 5.7 at sending side, but at receiving side we make the stride between received blocks vary from block to block. See Figure 5.8.
Figure 5.8: The root process gathers 100-i ints from column i of a 100×150 C array, and each set is placed stride[i] ints apart (a varying stride).

MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, *stride, myrank, bufsize;
MPI_Datatype stype;
int *displs,i,*rcounts,offset;
...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
stride = (int *)malloc(gsize*sizeof(int));
...
/* stride[i] for i = 0 to gsize-1 is set somehow */
/* set up displs and rcounts vectors first */
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
offset = 0;
for (i=0; i<gsize; ++i) {
    displs[i] = offset;
    offset += stride[i];
    rcounts[i] = 100-i;
}
/* the required buffer size for rbuf is now easily obtained */
bufsize = displs[gsize-1]+rcounts[gsize-1];
rbuf = (int *)malloc(bufsize*sizeof(int));
/* Create datatype for the column we are sending */
MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv( sptr, 1, stype, rbuf, rcounts, displs, MPI_INT, root, comm);
Example 5.10 Process i sends num ints from the i-th column of a 100×150 int array, in C. The complicating factor is that the various values of num are not known to root, so a separate gather must first be run to find these out. The data is placed contiguously at the receiving end.
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, myrank, disp[2], blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts,num;
...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
/* First, gather nums to root */
rcounts = (int *)malloc(gsize*sizeof(int));
MPI_Gather( &num, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm);
/* root now has correct rcounts, using these we set displs[] so
 * that data is placed contiguously (or concatenated) at receive end */
displs = (int *)malloc(gsize*sizeof(int));
displs[0] = 0;
for (i=1; i<gsize; ++i) {
    displs[i] = displs[i-1]+rcounts[i-1];
}
/* And, create receive buffer */
rbuf = (int *)malloc(gsize*(displs[gsize-1]+rcounts[gsize-1])
                     *sizeof(int));
/* Create datatype for one int, with extent of entire row */
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_create_struct( 2, blocklen, disp, type, &stype );
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv( sptr, num, stype, rbuf, rcounts, displs, MPI_INT,
             root, comm);
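The examples above end at the communication call. In a complete program the committed datatypes and the buffers obtained with malloc would eventually be released; a minimal cleanup sketch for Example 5.10 (illustrative, not part of the original text) is:

/* Cleanup for Example 5.10 (illustrative): release the committed datatype
 * and the buffers allocated with malloc.                                   */
MPI_Type_free( &stype );
free( rbuf );
free( displs );
free( rcounts );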