Chapter 5
Collective Communication
5.1 Introduction and Overview
Collective communication is defined as communication that involves a group or groups of processes. The functions of this type provided by MPI are the following (a brief usage sketch in C appears after the list):
MPI_BARRIER: Barrier synchronization across all members of a group (Section 5.3).
MPI_BCAST: Broadcast from one member to all members of a group (Section 5.4). This is shown as "broadcast" in Figure 5.1.
MPI_GATHER, MPI_GATHERV: Gather data from all members of a group to one member (Section 5.5). This is shown as "gather" in Figure 5.1.
MPI_SCATTER, MPI_SCATTERV: Scatter data from one member to all members of a group (Section 5.6). This is shown as "scatter" in Figure 5.1.
MPI_ALLGATHER, MPI_ALLGATHERV: A variation on Gather where all members of a group receive the result (Section 5.7). This is shown as "allgather" in Figure 5.1.
MPI_ALLTOALL, MPI_ALLTOALLV, MPI_ALLTOALLW: Scatter/Gather data from all members to all members of a group (also called complete exchange) (Section 5.8). This is shown as "complete exchange" in Figure 5.1.
MPI_ALLREDUCE, MPI_REDUCE: Global reduction operations such as sum, max, min, or user-defined functions, where the result is returned to all members of a group and a variation where the result is returned to only one member (Section 5.9).
MPI_REDUCE_SCATTER: A combined reduction and scatter operation (Section 5.10).
MPI_SCAN, MPI_EXSCAN: Scan across all members of a group (also called prefix) (Section 5.11).
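The following minimal C sketch (illustrative only, not normative text; the use of MPI_COMM_WORLD, the integer payload, and the choice of MPI_SUM are assumptions made for the example) shows two of these routines in a complete program: a broadcast of a value known initially only at one process, followed by a reduction whose result is returned to a single member.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, n = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;                 /* value known only at the root */

    /* Every member of the group calls MPI_BCAST; afterwards all
       processes have the value of n. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Global reduction: each process contributes its rank; the sum
       is returned only at the root (MPI_REDUCE). */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("n = %d, sum of ranks = %d\n", n, sum);

    MPI_Finalize();
    return 0;
}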
One of the key arguments in a call to a collective routine is a communicator that defines the group or groups of participating processes and provides a context for the operation. This is discussed further in Section 5.2. The syntax and semantics of the collective operations are defined to be consistent with the syntax and semantics of the point-to-point operations. Thus, general datatypes are allowed and must match between sending and receiving processes as specified in Chapter 4.
[Figure 5.1: Collective move functions illustrated for a group of six processes. In each case, each row of boxes represents data locations in one process. Thus, in the broadcast, initially just the first process contains the data A0, but after the broadcast all processes contain it. The panels depict broadcast, scatter/gather, allgather, and complete exchange, with rows labeled by process and columns by data item (A0 through F5).]
Several collective routines such as broadcast and gather have a single originating or receiving process. Such a process is called the root. Some arguments in the collective functions are specified as "significant only at root," and are ignored for all participants except the root. The reader is referred to Chapter 4 for information concerning communication buffers, general datatypes and type matching rules, and to Chapter 6 for information on how to define groups and create communicators.
The type-matching conditions for the collective operations are more strict than the corresponding conditions between sender and receiver in point-to-point. Namely, for collective operations, the amount of data sent must exactly match the amount of data specified by the receiver. Different type maps (the layout in memory, see Section 4.1) between sender and receiver are still allowed.
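A minimal C sketch of both rules (the communicator argument and the buffer handling are assumptions made for the example): each process sends exactly one MPI_INT, and the root specifies exactly one MPI_INT to be received from each process, so the amounts match exactly; the receive buffer is significant only at the root.

#include <mpi.h>
#include <stdlib.h>

void gather_ranks(MPI_Comm comm)
{
    int rank, size, *recvbuf = NULL;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* recvbuf is significant only at the root; the other processes
       may pass NULL. */
    if (rank == 0)
        recvbuf = malloc(size * sizeof(int));

    /* Each process sends exactly one MPI_INT and the root receives
       exactly one MPI_INT from each process: the amount of data sent
       exactly matches the amount specified by the receiver. */
    MPI_Gather(&rank, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, comm);

    if (rank == 0)
        free(recvbuf);
}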
Collective routine calls can (but are not required to) return as soon as their participation in the collective communication is complete. The completion of a call indicates that the caller is now free to modify locations in the communication buffer. It does not indicate that other processes in the group have completed or even started the operation (unless otherwise implied by the description of the operation). Thus, a collective communication call may, or may not, have the effect of synchronizing all calling processes. This statement excludes, of course, the barrier function.
Collective communication calls may use the same communicators as point-to-point communication; MPI guarantees that messages generated on behalf of collective communication calls will not be confused with messages generated by point-to-point communication. A more detailed discussion of correct use of collective routines is found in Section 5.12.
Rationale. The equal-data restriction (on type matching) was made so as to avoid the complexity of providing a facility analogous to the status argument of MPI_RECV for discovering the amount of data sent. Some of the collective routines would require an array of status values.
The statements about synchronization are made so as to allow a variety of implementations of the collective functions.
The collective operations do not accept a message tag argument. If future revisions of MPI define nonblocking collective functions, then tags (or a similar mechanism) might need to be added so as to allow the disambiguation of multiple, pending, collective operations. (End of rationale.)
Advice to users. It is dangerous to rely on synchronization side-effects of the collective operations for program correctness. For example, even though a particular implementation may provide a broadcast routine with a side-effect of synchronization, the standard does not require this, and a program that relies on this will not be portable.
On the other hand, a correct, portable program must allow for the fact that a collective call may be synchronizing. Though one cannot rely on any synchronization side-effect, one must program so as to allow it. These issues are discussed further in Section 5.12. (End of advice to users.)
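A minimal sketch of the resulting portable pattern, assuming a program that genuinely requires all processes to have passed a given point (the function name and arguments are illustrative): synchronization is requested explicitly rather than inferred from the collective call.

#include <mpi.h>

void broadcast_then_sync(int *buf, MPI_Comm comm)
{
    /* MPI_BCAST may or may not synchronize the group; a portable
       program must not assume either behavior. */
    MPI_Bcast(buf, 1, MPI_INT, 0, comm);

    /* If the algorithm requires that every process has reached this
       point, request synchronization explicitly. */
    MPI_Barrier(comm);
}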
Advice to implementors. While vendors may write optimized collective routines matched to their architectures, a complete library of the collective communication routines can be written entirely using the MPI point-to-point communication functions and a few auxiliary functions. If implementing on top of point-to-point, a hidden, special communicator might be created for the collective operation so as to avoid interference with any on-going point-to-point communication at the time of the collective call. This is discussed further in Section 5.12. (End of advice to implementors.)
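As a sketch of this advice (one possible layering, not a required implementation technique; the linear fan-out and the per-call duplication are simplifications assumed for the example), a broadcast can be built from point-to-point calls over a duplicated communicator obtained with MPI_COMM_DUP, so that its messages cannot be confused with the user's pending point-to-point traffic:

#include <mpi.h>

/* Naive linear broadcast built from point-to-point calls. */
int simple_bcast(void *buf, int count, MPI_Datatype type,
                 int root, MPI_Comm comm)
{
    MPI_Comm hidden;   /* private communicator; avoids interference
                          with user point-to-point traffic */
    int rank, size, i;

    MPI_Comm_dup(comm, &hidden);
    MPI_Comm_rank(hidden, &rank);
    MPI_Comm_size(hidden, &size);

    if (rank == root) {
        for (i = 0; i < size; i++)
            if (i != root)
                MPI_Send(buf, count, type, i, 0, hidden);
    } else {
        MPI_Recv(buf, count, type, root, 0, hidden,
                 MPI_STATUS_IGNORE);
    }

    MPI_Comm_free(&hidden);
    return MPI_SUCCESS;
}

A production implementation would typically cache the hidden communicator when the user communicator is created, and would use a tree-structured fan-out rather than a linear one.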