- •Contents
- •List of Figures
- •List of Tables
- •Acknowledgments
- •Introduction to MPI
- •Overview and Goals
- •Background of MPI-1.0
- •Background of MPI-1.1, MPI-1.2, and MPI-2.0
- •Background of MPI-1.3 and MPI-2.1
- •Background of MPI-2.2
- •Who Should Use This Standard?
- •What Platforms Are Targets For Implementation?
- •What Is Included In The Standard?
- •What Is Not Included In The Standard?
- •Organization of this Document
- •MPI Terms and Conventions
- •Document Notation
- •Naming Conventions
- •Semantic Terms
- •Data Types
- •Opaque Objects
- •Array Arguments
- •State
- •Named Constants
- •Choice
- •Addresses
- •Language Binding
- •Deprecated Names and Functions
- •Fortran Binding Issues
- •C Binding Issues
- •C++ Binding Issues
- •Functions and Macros
- •Processes
- •Error Handling
- •Implementation Issues
- •Independence of Basic Runtime Routines
- •Interaction with Signals
- •Examples
- •Point-to-Point Communication
- •Introduction
- •Blocking Send and Receive Operations
- •Blocking Send
- •Message Data
- •Message Envelope
- •Blocking Receive
- •Return Status
- •Passing MPI_STATUS_IGNORE for Status
- •Data Type Matching and Data Conversion
- •Type Matching Rules
- •Type MPI_CHARACTER
- •Data Conversion
- •Communication Modes
- •Semantics of Point-to-Point Communication
- •Buffer Allocation and Usage
- •Nonblocking Communication
- •Communication Request Objects
- •Communication Initiation
- •Communication Completion
- •Semantics of Nonblocking Communications
- •Multiple Completions
- •Non-destructive Test of status
- •Probe and Cancel
- •Persistent Communication Requests
- •Send-Receive
- •Null Processes
- •Datatypes
- •Derived Datatypes
- •Type Constructors with Explicit Addresses
- •Datatype Constructors
- •Subarray Datatype Constructor
- •Distributed Array Datatype Constructor
- •Address and Size Functions
- •Lower-Bound and Upper-Bound Markers
- •Extent and Bounds of Datatypes
- •True Extent of Datatypes
- •Commit and Free
- •Duplicating a Datatype
- •Use of General Datatypes in Communication
- •Correct Use of Addresses
- •Decoding a Datatype
- •Examples
- •Pack and Unpack
- •Canonical MPI_PACK and MPI_UNPACK
- •Collective Communication
- •Introduction and Overview
- •Communicator Argument
- •Applying Collective Operations to Intercommunicators
- •Barrier Synchronization
- •Broadcast
- •Example using MPI_BCAST
- •Gather
- •Examples using MPI_GATHER, MPI_GATHERV
- •Scatter
- •Examples using MPI_SCATTER, MPI_SCATTERV
- •Example using MPI_ALLGATHER
- •All-to-All Scatter/Gather
- •Global Reduction Operations
- •Reduce
- •Signed Characters and Reductions
- •MINLOC and MAXLOC
- •All-Reduce
- •Process-local reduction
- •Reduce-Scatter
- •MPI_REDUCE_SCATTER_BLOCK
- •MPI_REDUCE_SCATTER
- •Scan
- •Inclusive Scan
- •Exclusive Scan
- •Example using MPI_SCAN
- •Correctness
- •Introduction
- •Features Needed to Support Libraries
- •MPI's Support for Libraries
- •Basic Concepts
- •Groups
- •Contexts
- •Intra-Communicators
- •Group Management
- •Group Accessors
- •Group Constructors
- •Group Destructors
- •Communicator Management
- •Communicator Accessors
- •Communicator Constructors
- •Communicator Destructors
- •Motivating Examples
- •Current Practice #1
- •Current Practice #2
- •(Approximate) Current Practice #3
- •Example #4
- •Library Example #1
- •Library Example #2
- •Inter-Communication
- •Inter-communicator Accessors
- •Inter-communicator Operations
- •Inter-Communication Examples
- •Caching
- •Functionality
- •Communicators
- •Windows
- •Datatypes
- •Error Class for Invalid Keyval
- •Attributes Example
- •Naming Objects
- •Formalizing the Loosely Synchronous Model
- •Basic Statements
- •Models of Execution
- •Static communicator allocation
- •Dynamic communicator allocation
- •The General case
- •Process Topologies
- •Introduction
- •Virtual Topologies
- •Embedding in MPI
- •Overview of the Functions
- •Topology Constructors
- •Cartesian Constructor
- •Cartesian Convenience Function: MPI_DIMS_CREATE
- •General (Graph) Constructor
- •Distributed (Graph) Constructor
- •Topology Inquiry Functions
- •Cartesian Shift Coordinates
- •Partitioning of Cartesian structures
- •Low-Level Topology Functions
- •An Application Example
- •MPI Environmental Management
- •Implementation Information
- •Version Inquiries
- •Environmental Inquiries
- •Tag Values
- •Host Rank
- •IO Rank
- •Clock Synchronization
- •Memory Allocation
- •Error Handling
- •Error Handlers for Communicators
- •Error Handlers for Windows
- •Error Handlers for Files
- •Freeing Errorhandlers and Retrieving Error Strings
- •Error Codes and Classes
- •Error Classes, Error Codes, and Error Handlers
- •Timers and Synchronization
- •Startup
- •Allowing User Functions at Process Termination
- •Determining Whether MPI Has Finished
- •Portable MPI Process Startup
- •The Info Object
- •Process Creation and Management
- •Introduction
- •The Dynamic Process Model
- •Starting Processes
- •The Runtime Environment
- •Process Manager Interface
- •Processes in MPI
- •Starting Processes and Establishing Communication
- •Reserved Keys
- •Spawn Example
- •Manager-worker Example, Using MPI_COMM_SPAWN.
- •Establishing Communication
- •Names, Addresses, Ports, and All That
- •Server Routines
- •Client Routines
- •Name Publishing
- •Reserved Key Values
- •Client/Server Examples
- •Ocean/Atmosphere - Relies on Name Publishing
- •Simple Client-Server Example.
- •Other Functionality
- •Universe Size
- •Singleton MPI_INIT
- •MPI_APPNUM
- •Releasing Connections
- •Another Way to Establish MPI Communication
- •One-Sided Communications
- •Introduction
- •Initialization
- •Window Creation
- •Window Attributes
- •Communication Calls
- •Examples
- •Accumulate Functions
- •Synchronization Calls
- •Fence
- •General Active Target Synchronization
- •Lock
- •Assertions
- •Examples
- •Error Handling
- •Error Handlers
- •Error Classes
- •Semantics and Correctness
- •Atomicity
- •Progress
- •Registers and Compiler Optimizations
- •External Interfaces
- •Introduction
- •Generalized Requests
- •Examples
- •Associating Information with Status
- •MPI and Threads
- •General
- •Initialization
- •Introduction
- •File Manipulation
- •Opening a File
- •Closing a File
- •Deleting a File
- •Resizing a File
- •Preallocating Space for a File
- •Querying the Size of a File
- •Querying File Parameters
- •File Info
- •Reserved File Hints
- •File Views
- •Data Access
- •Data Access Routines
- •Positioning
- •Synchronism
- •Coordination
- •Data Access Conventions
- •Data Access with Individual File Pointers
- •Data Access with Shared File Pointers
- •Noncollective Operations
- •Collective Operations
- •Seek
- •Split Collective Data Access Routines
- •File Interoperability
- •Datatypes for File Interoperability
- •Extent Callback
- •Datarep Conversion Functions
- •Matching Data Representations
- •Consistency and Semantics
- •File Consistency
- •Random Access vs. Sequential Files
- •Progress
- •Collective File Operations
- •Type Matching
- •Logical vs. Physical File Layout
- •File Size
- •Examples
- •Asynchronous I/O
- •I/O Error Handling
- •I/O Error Classes
- •Examples
- •Subarray Filetype Constructor
- •Requirements
- •Discussion
- •Logic of the Design
- •Examples
- •MPI Library Implementation
- •Systems with Weak Symbols
- •Systems Without Weak Symbols
- •Complications
- •Multiple Counting
- •Linker Oddities
- •Multiple Levels of Interception
- •Deprecated Functions
- •Deprecated since MPI-2.0
- •Deprecated since MPI-2.2
- •Language Bindings
- •Overview
- •Design
- •C++ Classes for MPI
- •Class Member Functions for MPI
- •Semantics
- •C++ Datatypes
- •Communicators
- •Exceptions
- •Mixed-Language Operability
- •Problems With Fortran Bindings for MPI
- •Problems Due to Strong Typing
- •Problems Due to Data Copying and Sequence Association
- •Special Constants
- •Fortran 90 Derived Types
- •A Problem with Register Optimization
- •Basic Fortran Support
- •Extended Fortran Support
- •The mpi Module
- •No Type Mismatch Problems for Subroutines with Choice Arguments
- •Additional Support for Fortran Numeric Intrinsic Types
- •Language Interoperability
- •Introduction
- •Assumptions
- •Initialization
- •Transfer of Handles
- •Status
- •MPI Opaque Objects
- •Datatypes
- •Callback Functions
- •Error Handlers
- •Reduce Operations
- •Addresses
- •Attributes
- •Extra State
- •Constants
- •Interlanguage Communication
- •Language Bindings Summary
- •Groups, Contexts, Communicators, and Caching Fortran Bindings
- •External Interfaces C++ Bindings
- •Change-Log
- •Bibliography
- •Examples Index
- •MPI Declarations Index
- •MPI Function Index
CHAPTER 16. LANGUAGE BINDINGS
16.3.5 Status
The following two procedures are provided in C to convert from a Fortran status (which is an array of integers) to a C status (which is a structure), and vice versa. The conversion occurs on all the information in status, including that which is hidden. That is, no status information is lost in the conversion.
int MPI_Status_f2c(MPI_Fint *f_status, MPI_Status *c_status)

If f_status is a valid Fortran status, but not the Fortran value of MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE, then MPI_Status_f2c returns in c_status a valid C status with the same content. If f_status is the Fortran value of MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE, or if f_status is not a valid Fortran status, then the call is erroneous.

The C status has the same source, tag and error code values as the Fortran status, and returns the same answers when queried for count, elements, and cancellation. The conversion function may be called with a Fortran status argument that has an undefined error field, in which case the value of the error field in the C status argument is undefined.

Two global variables of type MPI_Fint*, MPI_F_STATUS_IGNORE and MPI_F_STATUSES_IGNORE, are declared in mpi.h. They can be used to test, in C, whether f_status is the Fortran value of MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE, respectively. These are global variables, not C constant expressions, and cannot be used in places where C requires constant expressions. Their value is defined only between the calls to MPI_INIT and MPI_FINALIZE and should not be changed by user code.
To do the conversion in the other direction, we have the following:

int MPI_Status_c2f(MPI_Status *c_status, MPI_Fint *f_status)

This call converts a C status into a Fortran status, and has a behavior similar to MPI_Status_f2c. That is, the value of c_status must not be either MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE.
Advice to users. There is not a separate conversion function for arrays of statuses, since one can simply loop through the array, converting each status. (End of advice to users.)
Rationale. The handling of MPI_STATUS_IGNORE is required in order to layer libraries with only a C wrapper: if the Fortran call has passed MPI_STATUS_IGNORE, then the C wrapper must handle this correctly. Note that this constant need not have the same value in Fortran and C. If MPI_Status_f2c were to handle MPI_STATUS_IGNORE, then the type of its result would have to be MPI_Status**, which was considered an inferior solution. (End of rationale.)
16.3.6 MPI Opaque Objects
Unless said otherwise, opaque objects are "the same" in all languages: they carry the same information, and have the same meaning in both languages. The mechanism described in the previous section can be used to pass references to MPI objects from language to language. An object created in one language can be accessed, modified or freed in another language.

We examine below, in more detail, issues that arise for each type of MPI object.
Datatypes
Datatypes encode the same information in all languages. E.g., a datatype accessor like MPI_TYPE_GET_EXTENT will return the same information in all languages. If a datatype defined in one language is used for a communication call in another language, then the message sent will be identical to the message that would be sent from the first language: the same communication buffer is accessed, and the same representation conversion is performed, if needed. All predefined datatypes can be used in datatype constructors in any language. If a datatype is committed, it can be used for communication in any language.
The function MPI_GET_ADDRESS returns the same value in all languages. Note that we do not require that the constant MPI_BOTTOM have the same value in all languages (see 16.3.9, page 509).
Example 16.16

! FORTRAN CODE
REAL R(5)
INTEGER TYPE, IERR, AOBLEN(1), AOTYPE(1)
INTEGER (KIND=MPI_ADDRESS_KIND) AODISP(1)

! create an absolute datatype for array R
AOBLEN(1) = 5
CALL MPI_GET_ADDRESS( R, AODISP(1), IERR)
AOTYPE(1) = MPI_REAL
CALL MPI_TYPE_CREATE_STRUCT(1, AOBLEN, AODISP, AOTYPE, TYPE, IERR)
CALL C_ROUTINE(TYPE)

/* C code */
void C_ROUTINE(MPI_Fint *ftype)
{
   int count = 5;
   int lens[2] = {1,1};
   MPI_Aint displs[2];
   MPI_Datatype types[2], newtype;

   /* create an absolute datatype for buffer that consists */
   /* of count, followed by R(5)                           */

   MPI_Get_address(&count, &displs[0]);
   displs[1] = 0;
   types[0] = MPI_INT;
   types[1] = MPI_Type_f2c(*ftype);

   MPI_Type_create_struct(2, lens, displs, types, &newtype);
   MPI_Type_commit(&newtype);

   MPI_Send(MPI_BOTTOM, 1, newtype, 1, 0, MPI_COMM_WORLD);

   /* the message sent contains an int count of 5, followed */
   /* by the 5 REAL entries of the Fortran array R.         */
}
Advice to implementors. The following implementation can be used: MPI addresses, as returned by MPI_GET_ADDRESS, will have the same value in all languages. One obvious choice is that MPI addresses be identical to regular addresses. The address is stored in the datatype, when datatypes with absolute addresses are constructed. When a send or receive operation is performed, then addresses stored in a datatype are interpreted as displacements that are all augmented by a base address. This base address is (the address of) buf, or zero, if buf = MPI_BOTTOM. Thus, if MPI_BOTTOM is zero then a send or receive call with buf = MPI_BOTTOM is implemented exactly as a call with a regular buffer argument: in both cases the base address is buf. On the other hand, if MPI_BOTTOM is not zero, then the implementation has to be slightly different. A test is performed to check whether buf = MPI_BOTTOM. If true, then the base address is zero, otherwise it is buf. In particular, if MPI_BOTTOM does not have the same value in Fortran and C/C++, then an additional test for buf = MPI_BOTTOM is needed in at least one of the languages.
It may be desirable to use a value other than zero for MPI_BOTTOM even in C/C++, so as to distinguish it from a NULL pointer. If MPI_BOTTOM = c then one can still avoid the test buf = MPI_BOTTOM, by using the displacement from MPI_BOTTOM, i.e., the regular address - c, as the MPI address returned by MPI_GET_ADDRESS and stored in absolute datatypes. (End of advice to implementors.)
Callback Functions
MPI calls may associate callback functions with MPI objects: error handlers are associated with communicators and files, attribute copy and delete functions are associated with attribute keys, reduce operations are associated with operation objects, etc. In a multilanguage environment, a function passed in an MPI call in one language may be invoked by an MPI call in another language. MPI implementations must make sure that such invocation will use the calling convention of the language the function is bound to.
Advice to implementors. Callback functions need to have a language tag. This tag is set when the callback function is passed in by the library function (which is presumably different for each language), and is used to generate the right calling sequence when the callback function is invoked. (End of advice to implementors.)
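One possible shape for such a tagged record is sketched below. Every name here is invented for illustration; real implementations differ, and the Fortran calling sequence shown (all arguments by reference, trailing error argument) is only the conventional pattern.

```c
#include <stddef.h>

/* Hypothetical tagged-callback record: the implementation remembers
   which language registered the function and selects the matching
   calling sequence at invocation time. */
typedef enum { CB_LANG_C, CB_LANG_FORTRAN } cb_lang;

typedef void (*c_handler_fn)(int code, void *extra);
typedef void (*f_handler_fn)(int *code, int *extra, int *ierr);

typedef struct {
    cb_lang lang;            /* set when the callback is registered */
    union {
        c_handler_fn c;
        f_handler_fn f;
    } u;
} cb_record;

static void cb_invoke(const cb_record *cb, int code, int extra)
{
    if (cb->lang == CB_LANG_C) {
        cb->u.c(code, &extra);          /* C convention */
    } else {
        int ierr = 0;
        cb->u.f(&code, &extra, &ierr);  /* Fortran convention: by reference */
    }
}
```

The tag is written once, by the (language-specific) registration routine, and consulted on every invocation; the callback itself never needs to know which language will call it.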
Error Handlers
Advice to implementors. Error handlers have, in C and C++, a "stdargs" argument list. It might be useful to provide to the handler information on the language environment where the error occurred. (End of advice to implementors.)
Reduce Operations
Advice to users. Reduce operations receive as one of their arguments the datatype of the operands. Thus, one can define "polymorphic" reduce operations that work for C, C++, and Fortran datatypes. (End of advice to users.)
Addresses
Some of the datatype accessors and constructors have arguments of type MPI_Aint (in C) or MPI::Aint in C++, to hold addresses. The corresponding arguments, in Fortran, have