- Contents
- List of Figures
- List of Tables
- Acknowledgments
- Introduction to MPI
- Overview and Goals
- Background of MPI-1.0
- Background of MPI-1.1, MPI-1.2, and MPI-2.0
- Background of MPI-1.3 and MPI-2.1
- Background of MPI-2.2
- Who Should Use This Standard?
- What Platforms Are Targets For Implementation?
- What Is Included In The Standard?
- What Is Not Included In The Standard?
- Organization of this Document
- MPI Terms and Conventions
- Document Notation
- Naming Conventions
- Semantic Terms
- Data Types
- Opaque Objects
- Array Arguments
- State
- Named Constants
- Choice
- Addresses
- Language Binding
- Deprecated Names and Functions
- Fortran Binding Issues
- C Binding Issues
- C++ Binding Issues
- Functions and Macros
- Processes
- Error Handling
- Implementation Issues
- Independence of Basic Runtime Routines
- Interaction with Signals
- Examples
- Point-to-Point Communication
- Introduction
- Blocking Send and Receive Operations
- Blocking Send
- Message Data
- Message Envelope
- Blocking Receive
- Return Status
- Passing MPI_STATUS_IGNORE for Status
- Data Type Matching and Data Conversion
- Type Matching Rules
- Type MPI_CHARACTER
- Data Conversion
- Communication Modes
- Semantics of Point-to-Point Communication
- Buffer Allocation and Usage
- Nonblocking Communication
- Communication Request Objects
- Communication Initiation
- Communication Completion
- Semantics of Nonblocking Communications
- Multiple Completions
- Non-destructive Test of status
- Probe and Cancel
- Persistent Communication Requests
- Send-Receive
- Null Processes
- Datatypes
- Derived Datatypes
- Type Constructors with Explicit Addresses
- Datatype Constructors
- Subarray Datatype Constructor
- Distributed Array Datatype Constructor
- Address and Size Functions
- Lower-Bound and Upper-Bound Markers
- Extent and Bounds of Datatypes
- True Extent of Datatypes
- Commit and Free
- Duplicating a Datatype
- Use of General Datatypes in Communication
- Correct Use of Addresses
- Decoding a Datatype
- Examples
- Pack and Unpack
- Canonical MPI_PACK and MPI_UNPACK
- Collective Communication
- Introduction and Overview
- Communicator Argument
- Applying Collective Operations to Intercommunicators
- Barrier Synchronization
- Broadcast
- Example using MPI_BCAST
- Gather
- Examples using MPI_GATHER, MPI_GATHERV
- Scatter
- Examples using MPI_SCATTER, MPI_SCATTERV
- Example using MPI_ALLGATHER
- All-to-All Scatter/Gather
- Global Reduction Operations
- Reduce
- Signed Characters and Reductions
- MINLOC and MAXLOC
- All-Reduce
- Process-local reduction
- Reduce-Scatter
- MPI_REDUCE_SCATTER_BLOCK
- MPI_REDUCE_SCATTER
- Scan
- Inclusive Scan
- Exclusive Scan
- Example using MPI_SCAN
- Correctness
- Introduction
- Features Needed to Support Libraries
- MPI's Support for Libraries
- Basic Concepts
- Groups
- Contexts
- Intra-Communicators
- Group Management
- Group Accessors
- Group Constructors
- Group Destructors
- Communicator Management
- Communicator Accessors
- Communicator Constructors
- Communicator Destructors
- Motivating Examples
- Current Practice #1
- Current Practice #2
- (Approximate) Current Practice #3
- Example #4
- Library Example #1
- Library Example #2
- Inter-Communication
- Inter-communicator Accessors
- Inter-communicator Operations
- Inter-Communication Examples
- Caching
- Functionality
- Communicators
- Windows
- Datatypes
- Error Class for Invalid Keyval
- Attributes Example
- Naming Objects
- Formalizing the Loosely Synchronous Model
- Basic Statements
- Models of Execution
- Static communicator allocation
- Dynamic communicator allocation
- The General case
- Process Topologies
- Introduction
- Virtual Topologies
- Embedding in MPI
- Overview of the Functions
- Topology Constructors
- Cartesian Constructor
- Cartesian Convenience Function: MPI_DIMS_CREATE
- General (Graph) Constructor
- Distributed (Graph) Constructor
- Topology Inquiry Functions
- Cartesian Shift Coordinates
- Partitioning of Cartesian structures
- Low-Level Topology Functions
- An Application Example
- MPI Environmental Management
- Implementation Information
- Version Inquiries
- Environmental Inquiries
- Tag Values
- Host Rank
- IO Rank
- Clock Synchronization
- Memory Allocation
- Error Handling
- Error Handlers for Communicators
- Error Handlers for Windows
- Error Handlers for Files
- Freeing Errorhandlers and Retrieving Error Strings
- Error Codes and Classes
- Error Classes, Error Codes, and Error Handlers
- Timers and Synchronization
- Startup
- Allowing User Functions at Process Termination
- Determining Whether MPI Has Finished
- Portable MPI Process Startup
- The Info Object
- Process Creation and Management
- Introduction
- The Dynamic Process Model
- Starting Processes
- The Runtime Environment
- Process Manager Interface
- Processes in MPI
- Starting Processes and Establishing Communication
- Reserved Keys
- Spawn Example
- Manager-worker Example, Using MPI_COMM_SPAWN.
- Establishing Communication
- Names, Addresses, Ports, and All That
- Server Routines
- Client Routines
- Name Publishing
- Reserved Key Values
- Client/Server Examples
- Ocean/Atmosphere - Relies on Name Publishing
- Simple Client-Server Example.
- Other Functionality
- Universe Size
- Singleton MPI_INIT
- MPI_APPNUM
- Releasing Connections
- Another Way to Establish MPI Communication
- One-Sided Communications
- Introduction
- Initialization
- Window Creation
- Window Attributes
- Communication Calls
- Examples
- Accumulate Functions
- Synchronization Calls
- Fence
- General Active Target Synchronization
- Lock
- Assertions
- Examples
- Error Handling
- Error Handlers
- Error Classes
- Semantics and Correctness
- Atomicity
- Progress
- Registers and Compiler Optimizations
- External Interfaces
- Introduction
- Generalized Requests
- Examples
- Associating Information with Status
- MPI and Threads
- General
- Initialization
- Introduction
- File Manipulation
- Opening a File
- Closing a File
- Deleting a File
- Resizing a File
- Preallocating Space for a File
- Querying the Size of a File
- Querying File Parameters
- File Info
- Reserved File Hints
- File Views
- Data Access
- Data Access Routines
- Positioning
- Synchronism
- Coordination
- Data Access Conventions
- Data Access with Individual File Pointers
- Data Access with Shared File Pointers
- Noncollective Operations
- Collective Operations
- Seek
- Split Collective Data Access Routines
- File Interoperability
- Datatypes for File Interoperability
- Extent Callback
- Datarep Conversion Functions
- Matching Data Representations
- Consistency and Semantics
- File Consistency
- Random Access vs. Sequential Files
- Progress
- Collective File Operations
- Type Matching
- Logical vs. Physical File Layout
- File Size
- Examples
- Asynchronous I/O
- I/O Error Handling
- I/O Error Classes
- Examples
- Subarray Filetype Constructor
- Requirements
- Discussion
- Logic of the Design
- Examples
- MPI Library Implementation
- Systems with Weak Symbols
- Systems Without Weak Symbols
- Complications
- Multiple Counting
- Linker Oddities
- Multiple Levels of Interception
- Deprecated Functions
- Deprecated since MPI-2.0
- Deprecated since MPI-2.2
- Language Bindings
- Overview
- Design
- C++ Classes for MPI
- Class Member Functions for MPI
- Semantics
- C++ Datatypes
- Communicators
- Exceptions
- Mixed-Language Operability
- Problems With Fortran Bindings for MPI
- Problems Due to Strong Typing
- Problems Due to Data Copying and Sequence Association
- Special Constants
- Fortran 90 Derived Types
- A Problem with Register Optimization
- Basic Fortran Support
- Extended Fortran Support
- The mpi Module
- No Type Mismatch Problems for Subroutines with Choice Arguments
- Additional Support for Fortran Numeric Intrinsic Types
- Language Interoperability
- Introduction
- Assumptions
- Initialization
- Transfer of Handles
- Status
- MPI Opaque Objects
- Datatypes
- Callback Functions
- Error Handlers
- Reduce Operations
- Addresses
- Attributes
- Extra State
- Constants
- Interlanguage Communication
- Language Bindings Summary
- Groups, Contexts, Communicators, and Caching Fortran Bindings
- External Interfaces C++ Bindings
- Change-Log
- Bibliography
- Examples Index
- MPI Declarations Index
- MPI Function Index
MPI_MODE_NOPUT: the local window will not be updated by put or accumulate calls after the post call, until the ensuing (wait) synchronization. This may avoid the need for cache synchronization at the wait call.

MPI_WIN_FENCE:

MPI_MODE_NOSTORE: the local window was not updated by local stores (or local get or receive calls) since last synchronization.

MPI_MODE_NOPUT: the local window will not be updated by put or accumulate calls after the fence call, until the ensuing (fence) synchronization.

MPI_MODE_NOPRECEDE: the fence does not complete any sequence of locally issued RMA calls. If this assertion is given by any process in the window group, then it must be given by all processes in the group.

MPI_MODE_NOSUCCEED: the fence does not start any sequence of locally issued RMA calls. If the assertion is given by any process in the window group, then it must be given by all processes in the group.

MPI_WIN_LOCK:

MPI_MODE_NOCHECK: no other process holds, or will attempt to acquire, a conflicting lock while the caller holds the window lock. This is useful when mutual exclusion is achieved by other means, but the coherence operations that may be attached to the lock and unlock calls are still required.

Advice to users. Note that the nostore and noprecede flags provide information on what happened before the call; the noput and nosucceed flags provide information on what will happen after the call. (End of advice to users.)
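The examples in Section 11.5 illustrate the fence and post-start-complete-wait assertions; MPI_MODE_NOCHECK with MPI_WIN_LOCK is not shown there. The following is a minimal sketch, not taken from the standard, of how a program might use it: each even rank writes one double into its odd-ranked neighbor's window, so exactly one origin ever locks each target and no conflicting lock can occur. The window memory comes from MPI_ALLOC_MEM because implementations may restrict passive-target RMA to such memory.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    double *local;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Alloc_mem(sizeof(double), MPI_INFO_NULL, &local);
    *local = -1.0;
    MPI_Win_create(local, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank % 2 == 0 && rank + 1 < nprocs) {
        double val = (double)rank;
        /* Mutual exclusion is guaranteed by the access pattern, so
           the lock-acquisition protocol may be elided with NOCHECK;
           the coherence effects of lock/unlock still apply. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank + 1, MPI_MODE_NOCHECK, win);
        MPI_Put(&val, 1, MPI_DOUBLE, rank + 1, 0, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(rank + 1, win);  /* put complete at origin and target */
    }

    MPI_Barrier(MPI_COMM_WORLD);  /* all unlocks precede the reads below */
    if (rank % 2 == 1) {
        /* The owner synchronizes its own window copy before reading. */
        MPI_Win_lock(MPI_LOCK_SHARED, rank, 0, win);
        MPI_Win_unlock(rank, win);
        printf("rank %d received %.1f\n", rank, *local);
    }

    MPI_Win_free(&win);
    MPI_Free_mem(local);
    MPI_Finalize();
    return 0;
}
```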
11.4.5 Miscellaneous Clarifications
Once an RMA routine completes, it is safe to free any opaque objects passed as arguments to that routine. For example, the datatype argument of an MPI_PUT call can be freed as soon as the call returns, even though the communication may not be complete.

As in message-passing, datatypes must be committed before they can be used in RMA communication.
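A short sketch of both points, under assumed names (put_column, nrows, ncols, col, and win are illustrative, not from the standard): a derived datatype is committed before the put, and freed as soon as MPI_Put returns, even though the transfer completes only at the closing fence. The sketch assumes the routine is called collectively by the window's group and that win exposes a row-major nrows-by-ncols double array with disp_unit equal to sizeof(double).

```c
#include <mpi.h>

/* Scatter a contiguous buffer of nrows doubles into column `col`
   of the target's exposed nrows x ncols array. */
void put_column(double *colbuf, int nrows, int ncols,
                int target, int col, MPI_Win win)
{
    MPI_Datatype coltype;

    /* Datatypes must be committed before they are used in RMA. */
    MPI_Type_vector(nrows, 1, ncols, MPI_DOUBLE, &coltype);
    MPI_Type_commit(&coltype);

    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
    MPI_Put(colbuf, nrows, MPI_DOUBLE, target,
            (MPI_Aint)col, 1, coltype, win);

    /* Safe as soon as MPI_Put returns: the implementation retains
       whatever it needs until the transfer completes at the fence. */
    MPI_Type_free(&coltype);

    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);
}
```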
11.5 Examples

Example 11.6 The following example shows a generic loosely synchronous, iterative code, using fence synchronization. The window at each process consists of array A, which contains the origin and target buffers of the put calls.

```c
...
while(!converged(A)){
    update(A);
    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
    for(i=0; i < toneighbors; i++)
        MPI_Put(&frombuf[i], 1, fromtype[i], toneighbor[i],
                todisp[i], 1, totype[i], win);
    MPI_Win_fence((MPI_MODE_NOSTORE | MPI_MODE_NOSUCCEED), win);
}
```

The same code could be written with get, rather than put. Note that, during the communication phase, each window is concurrently read (as origin buffer of puts) and written (as target buffer of puts). This is OK, provided that there is no overlap between the target buffer of a put and another communication buffer.
Example 11.7 Same generic example, with more computation/communication overlap. We assume that the update phase is broken into two subphases: the first, where the "boundary," which is involved in communication, is updated, and the second, where the "core," which neither uses nor provides communicated data, is updated.

```c
...
while(!converged(A)){
    update_boundary(A);
    MPI_Win_fence((MPI_MODE_NOPUT | MPI_MODE_NOPRECEDE), win);
    for(i=0; i < fromneighbors; i++)
        MPI_Get(&tobuf[i], 1, totype[i], fromneighbor[i],
                fromdisp[i], 1, fromtype[i], win);
    update_core(A);
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);
}
```

The get communication can be concurrent with the core update, since they do not access the same locations, and the local update of the origin buffer by the get call can be concurrent with the local update of the core by the update_core call. In order to get similar overlap with put communication we would need to use separate windows for the core and for the boundary. This is required because we do not allow local stores to be concurrent with puts on the same, or on overlapping, windows.
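To make the put variant concrete, here is a hedged sketch of that two-window arrangement, following the pattern of Examples 11.6 and 11.7 (win_boundary and the update routines are assumed names, not from the standard): only the boundary of A is attached to the window on which puts are issued, so the local stores performed by update_core are never concurrent with puts on the window they target.

```c
/* Sketch: overlap put communication with core computation by exposing
   only the boundary through win_boundary; the core lives outside this
   window (e.g., in a second window or in unexposed memory), so storing
   to it while puts are in flight on win_boundary is legal.  As in
   Example 11.6, target buffers of puts must not overlap other
   communication buffers. */
while (!converged(A)) {
    update_boundary(A);
    MPI_Win_fence(MPI_MODE_NOPRECEDE, win_boundary);
    for (i = 0; i < toneighbors; i++)
        MPI_Put(&frombuf[i], 1, fromtype[i], toneighbor[i],
                todisp[i], 1, totype[i], win_boundary);
    update_core(A);  /* touches core memory only: not in win_boundary */
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win_boundary);
}
```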
Example 11.8 Same code as in Example 11.6, rewritten using post-start-complete-wait.

```c
...
while(!converged(A)){
    update(A);
    MPI_Win_post(fromgroup, 0, win);
    MPI_Win_start(togroup, 0, win);
    for(i=0; i < toneighbors; i++)
        MPI_Put(&frombuf[i], 1, fromtype[i], toneighbor[i],
                todisp[i], 1, totype[i], win);
    MPI_Win_complete(win);
    MPI_Win_wait(win);
}
```

Example 11.9 Same example, with split phases, as in Example 11.7.

```c
...
while(!converged(A)){
    update_boundary(A);
    MPI_Win_post(togroup, MPI_MODE_NOPUT, win);
    MPI_Win_start(fromgroup, 0, win);
    for(i=0; i < fromneighbors; i++)
        MPI_Get(&tobuf[i], 1, totype[i], fromneighbor[i],
                fromdisp[i], 1, fromtype[i], win);
    update_core(A);
    MPI_Win_complete(win);
    MPI_Win_wait(win);
}
```
Example 11.10 A checkerboard, or double buffer, communication pattern that allows more computation/communication overlap. Array A0 is updated using values of array A1, and vice versa. We assume that communication is symmetric: if process A gets data from process B, then process B gets data from process A. Window win_i consists of array A_i.

```c
...
if (!converged(A0,A1))
    MPI_Win_post(neighbors, (MPI_MODE_NOCHECK | MPI_MODE_NOPUT), win0);
MPI_Barrier(comm0);
/* the barrier is needed because the start call inside the
   loop uses the nocheck option */
while(!converged(A0, A1)){
    /* communication on A0 and computation on A1 */
    update2(A1, A0); /* local update of A1 that depends on A0 (and A1) */
    MPI_Win_start(neighbors, MPI_MODE_NOCHECK, win0);
    for(i=0; i < neighbors; i++)
        MPI_Get(&tobuf0[i], 1, totype0[i], neighbor[i],
                fromdisp0[i], 1, fromtype0[i], win0);
    update1(A1); /* local update of A1 that is
                    concurrent with communication that updates A0 */
    MPI_Win_post(neighbors, (MPI_MODE_NOCHECK | MPI_MODE_NOPUT), win1);
    MPI_Win_complete(win0);
    MPI_Win_wait(win0);

    /* communication on A1 and computation on A0 */
    update2(A0, A1); /* local update of A0 that depends on A1 (and A0) */
    MPI_Win_start(neighbors, MPI_MODE_NOCHECK, win1);
    for(i=0; i < neighbors; i++)
        MPI_Get(&tobuf1[i], 1, totype1[i], neighbor[i],
                fromdisp1[i], 1, fromtype1[i], win1);
    update1(A0); /* local update of A0 that depends on A0 only,
                    concurrent with communication that updates A1 */
    if (!converged(A0,A1))
        MPI_Win_post(neighbors, (MPI_MODE_NOCHECK | MPI_MODE_NOPUT), win0);
    MPI_Win_complete(win1);
    MPI_Win_wait(win1);
}
```
A process posts the local window associated with win0 before it completes RMA accesses