int          i;
SegScanPair  a, answer;
MPI_Op       myOp;
MPI_Datatype type[2] = {MPI_DOUBLE, MPI_INT};
MPI_Aint     disp[2], base;
int          blocklen[2] = { 1, 1 };
MPI_Datatype sspair;

/* explain to MPI how type SegScanPair is defined */
MPI_Get_address( &a, disp);
MPI_Get_address( &a.log, disp+1);
base = disp[0];
for (i=0; i<2; ++i) disp[i] -= base;
MPI_Type_create_struct( 2, blocklen, disp, type, &sspair );
MPI_Type_commit( &sspair );

/* create the segmented-scan user-op */
MPI_Op_create( segScan, 0, &myOp );
...
MPI_Scan( &a, &answer, 1, sspair, myOp, comm );
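This fragment is the tail of the preceding example on user-defined operations; the definitions of SegScanPair and segScan are cut off above. A minimal reconstruction, consistent with the declarations in the fragment (the member name val is assumed; log appears above), might look like this:

typedef struct {
    double val;   /* running sum within a segment; matches MPI_DOUBLE */
    int    log;   /* segment index; matches MPI_INT                   */
} SegScanPair;

/* user-defined, non-commutative reduction for the segmented scan:
   add values only while the segment index is unchanged, otherwise
   restart the running sum at the later operand */
void segScan(void *invec, void *inoutvec, int *len, MPI_Datatype *dptr)
{
    int i;
    SegScanPair *in    = (SegScanPair *)invec;
    SegScanPair *inout = (SegScanPair *)inoutvec;
    for (i = 0; i < *len; ++i) {
        if (in[i].log == inout[i].log)
            inout[i].val += in[i].val;
        /* else: a new segment starts at inout[i]; keep its value */
    }
}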
5.12 Correctness
A correct, portable program must invoke collective communications so that deadlock will not occur, whether collective communications are synchronizing or not. The following examples illustrate dangerous use of collective routines on intracommunicators.
Example 5.23 The following is erroneous.
switch(rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Bcast(buf2, count, type, 1, comm);
    break;
case 1:
    MPI_Bcast(buf2, count, type, 1, comm);
    MPI_Bcast(buf1, count, type, 0, comm);
    break;
}
We assume that the group of comm is {0,1}. The two processes execute the two broadcast operations in reverse order. If the operation is synchronizing, a deadlock will occur.
Collective operations must be executed in the same order at all members of the communication group.
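A correct version simply issues the two broadcasts in the same order on both processes; for example (a sketch reusing the names above):

/* correct: every process calls the collectives in the same order */
MPI_Bcast(buf1, count, type, 0, comm);
MPI_Bcast(buf2, count, type, 1, comm);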
Example 5.24 The following is erroneous.
switch(rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm0);
    MPI_Bcast(buf2, count, type, 2, comm2);
    break;
case 1:
    MPI_Bcast(buf1, count, type, 1, comm1);
    MPI_Bcast(buf2, count, type, 0, comm0);
    break;
case 2:
    MPI_Bcast(buf1, count, type, 2, comm2);
    MPI_Bcast(buf2, count, type, 1, comm1);
    break;
}
Assume that the group of comm0 is {0,1}, of comm1 is {1,2}, and of comm2 is {2,0}. If the broadcast is a synchronizing operation, then there is a cyclic dependency: the broadcast in comm2 completes only after the broadcast in comm0; the broadcast in comm0 completes only after the broadcast in comm1; and the broadcast in comm1 completes only after the broadcast in comm2. Thus, the code will deadlock.

Collective operations must be executed in an order such that no cyclic dependencies occur.
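Here, swapping the two calls on process 1 is enough to break the cycle (a sketch): the broadcast in comm0 then precedes the one in comm1 on every process that uses both, and the dependency graph comm0 -> comm2, comm0 -> comm1, comm2 -> comm1 is acyclic.

switch(rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm0);
    MPI_Bcast(buf2, count, type, 2, comm2);
    break;
case 1:
    MPI_Bcast(buf2, count, type, 0, comm0);  /* comm0 now first ...  */
    MPI_Bcast(buf1, count, type, 1, comm1);  /* ... breaking the cycle */
    break;
case 2:
    MPI_Bcast(buf1, count, type, 2, comm2);
    MPI_Bcast(buf2, count, type, 1, comm1);
    break;
}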
Example 5.25 The following is erroneous.

switch(rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Send(buf2, count, type, 1, tag, comm);
    break;
case 1:
    MPI_Recv(buf2, count, type, 0, tag, comm, status);
    MPI_Bcast(buf1, count, type, 0, comm);
    break;
}
Process zero executes a broadcast, followed by a blocking send operation. Process one first executes a blocking receive that matches the send, followed by a broadcast call that matches the broadcast of process zero. This program may deadlock. The broadcast call on process zero may block until process one executes the matching broadcast call, so that the send is not executed. Process one will definitely block on the receive and so, in this case, never executes the broadcast.

The relative order of execution of collective operations and point-to-point operations should be such that, even if the collective operations and the point-to-point operations are synchronizing, no deadlock will occur.
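One deadlock-free arrangement places the broadcast at the same point on both processes (a sketch using the same names):

switch(rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Send(buf2, count, type, 1, tag, comm);
    break;
case 1:
    MPI_Bcast(buf1, count, type, 0, comm);             /* broadcast first ...   */
    MPI_Recv(buf2, count, type, 0, tag, comm, status); /* ... then the receive */
    break;
}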
Example 5.26 An unsafe, non-deterministic program.
switch(rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Send(buf2, count, type, 1, tag, comm);
    break;
case 1:
    MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, status);
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, status);
    break;
case 2:
    MPI_Send(buf2, count, type, 1, tag, comm);
    MPI_Bcast(buf1, count, type, 0, comm);
    break;
}

[Figure 5.12 (two panels, "First Execution" and "Second Execution", each showing processes 0, 1, and 2): A race condition causes non-deterministic matching of sends and receives. One cannot rely on synchronization from a broadcast to make the program deterministic.]
All three processes participate in a broadcast. Process 0 sends a message to process 1 after the broadcast, and process 2 sends a message to process 1 before the broadcast. Process 1 receives before and after the broadcast, with a wildcard source argument.
Two possible executions of this program, with different matchings of sends and receives, are illustrated in Figure 5.12. Note that the second execution has the peculiar effect that a send executed after the broadcast is received at another node before the broadcast. This example illustrates the fact that one should not rely on collective communication functions to have particular synchronization effects. A program that works correctly only when the first execution occurs (only when the broadcast is synchronizing) is erroneous.
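If deterministic matching is required, process 1 can name its sources explicitly instead of using MPI_ANY_SOURCE; a sketch of the corrected branch:

case 1:
    MPI_Recv(buf2, count, type, 2, tag, comm, status); /* must come from process 2 */
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Recv(buf2, count, type, 0, tag, comm, status); /* must come from process 0 */
    break;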
Finally, in multithreaded implementations, one can have more than one concurrently executing collective communication call at a process. In these situations, it is the user's responsibility to ensure that the same communicator is not used concurrently by two different collective communication calls at the same process.
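For example (a sketch; bufA and bufB are illustrative names), the first fragment below is erroneous because two threads issue collectives concurrently on the same communicator, while giving each thread its own duplicate is correct:

/* erroneous: both threads use comm for concurrent collectives */
/* thread A */  MPI_Bcast(bufA, count, type, 0, comm);
/* thread B */  MPI_Bcast(bufB, count, type, 0, comm);

/* correct: one duplicated communicator per thread */
MPI_Comm commA, commB;
MPI_Comm_dup(comm, &commA);
MPI_Comm_dup(comm, &commB);
/* thread A */  MPI_Bcast(bufA, count, type, 0, commA);
/* thread B */  MPI_Bcast(bufB, count, type, 0, commB);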
Advice to implementors. Assume that broadcast is implemented using point-to-point MPI communication. Suppose the following two rules are followed.

1. All receives specify their source explicitly (no wildcards).

2. Each process sends all messages that pertain to one collective call before sending any message that pertains to a subsequent collective call.

Then, messages belonging to successive broadcasts cannot be confused, as the order of point-to-point messages is preserved.

It is the implementor's responsibility to ensure that point-to-point messages are not confused with collective messages. One way to accomplish this is, whenever a communicator is created, to also create a "hidden communicator" for collective communication. One could achieve a similar effect more cheaply, for example, by using a hidden tag or context bit to indicate whether the communicator is used for point-to-point or collective communication. (End of advice to implementors.)
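The "hidden communicator" idea can be illustrated at the user level: a library that builds its own broadcast out of point-to-point calls can duplicate the communicator, so that its internal messages travel in a separate context and can never match the user's. This is an illustrative sketch, not how an MPI implementation is required to work internally; lib_comm, lib_comm_create, and lib_bcast are hypothetical names.

#include <mpi.h>

typedef struct {
    MPI_Comm user;   /* user's communicator, for their point-to-point traffic */
    MPI_Comm coll;   /* hidden duplicate, for the library's collectives       */
} lib_comm;

int lib_comm_create(MPI_Comm user, lib_comm *lc)
{
    lc->user = user;
    return MPI_Comm_dup(user, &lc->coll);   /* new context, same group */
}

/* a trivial linear broadcast over the hidden communicator */
int lib_bcast(void *buf, int count, MPI_Datatype t, int root, lib_comm *lc)
{
    int rank, size, i;
    MPI_Comm_rank(lc->coll, &rank);
    MPI_Comm_size(lc->coll, &size);
    if (rank == root) {
        for (i = 0; i < size; i++)
            if (i != root)
                MPI_Send(buf, count, t, i, 0, lc->coll);
    } else {
        MPI_Recv(buf, count, t, root, 0, lc->coll, MPI_STATUS_IGNORE);
    }
    return MPI_SUCCESS;
}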