- •Contents
- •List of Figures
- •List of Tables
- •Acknowledgments
- •Introduction to MPI
- •Overview and Goals
- •Background of MPI-1.0
- •Background of MPI-1.1, MPI-1.2, and MPI-2.0
- •Background of MPI-1.3 and MPI-2.1
- •Background of MPI-2.2
- •Who Should Use This Standard?
- •What Platforms Are Targets For Implementation?
- •What Is Included In The Standard?
- •What Is Not Included In The Standard?
- •Organization of this Document
- •MPI Terms and Conventions
- •Document Notation
- •Naming Conventions
- •Semantic Terms
- •Data Types
- •Opaque Objects
- •Array Arguments
- •State
- •Named Constants
- •Choice
- •Addresses
- •Language Binding
- •Deprecated Names and Functions
- •Fortran Binding Issues
- •C Binding Issues
- •C++ Binding Issues
- •Functions and Macros
- •Processes
- •Error Handling
- •Implementation Issues
- •Independence of Basic Runtime Routines
- •Interaction with Signals
- •Examples
- •Point-to-Point Communication
- •Introduction
- •Blocking Send and Receive Operations
- •Blocking Send
- •Message Data
- •Message Envelope
- •Blocking Receive
- •Return Status
- •Passing MPI_STATUS_IGNORE for Status
- •Data Type Matching and Data Conversion
- •Type Matching Rules
- •Type MPI_CHARACTER
- •Data Conversion
- •Communication Modes
- •Semantics of Point-to-Point Communication
- •Buffer Allocation and Usage
- •Nonblocking Communication
- •Communication Request Objects
- •Communication Initiation
- •Communication Completion
- •Semantics of Nonblocking Communications
- •Multiple Completions
- •Non-destructive Test of status
- •Probe and Cancel
- •Persistent Communication Requests
- •Send-Receive
- •Null Processes
- •Datatypes
- •Derived Datatypes
- •Type Constructors with Explicit Addresses
- •Datatype Constructors
- •Subarray Datatype Constructor
- •Distributed Array Datatype Constructor
- •Address and Size Functions
- •Lower-Bound and Upper-Bound Markers
- •Extent and Bounds of Datatypes
- •True Extent of Datatypes
- •Commit and Free
- •Duplicating a Datatype
- •Use of General Datatypes in Communication
- •Correct Use of Addresses
- •Decoding a Datatype
- •Examples
- •Pack and Unpack
- •Canonical MPI_PACK and MPI_UNPACK
- •Collective Communication
- •Introduction and Overview
- •Communicator Argument
- •Applying Collective Operations to Intercommunicators
- •Barrier Synchronization
- •Broadcast
- •Example using MPI_BCAST
- •Gather
- •Examples using MPI_GATHER, MPI_GATHERV
- •Scatter
- •Examples using MPI_SCATTER, MPI_SCATTERV
- •Example using MPI_ALLGATHER
- •All-to-All Scatter/Gather
- •Global Reduction Operations
- •Reduce
- •Signed Characters and Reductions
- •MINLOC and MAXLOC
- •All-Reduce
- •Process-local reduction
- •Reduce-Scatter
- •MPI_REDUCE_SCATTER_BLOCK
- •MPI_REDUCE_SCATTER
- •Scan
- •Inclusive Scan
- •Exclusive Scan
- •Example using MPI_SCAN
- •Correctness
- •Introduction
- •Features Needed to Support Libraries
- •MPI's Support for Libraries
- •Basic Concepts
- •Groups
- •Contexts
- •Intra-Communicators
- •Group Management
- •Group Accessors
- •Group Constructors
- •Group Destructors
- •Communicator Management
- •Communicator Accessors
- •Communicator Constructors
- •Communicator Destructors
- •Motivating Examples
- •Current Practice #1
- •Current Practice #2
- •(Approximate) Current Practice #3
- •Example #4
- •Library Example #1
- •Library Example #2
- •Inter-Communication
- •Inter-communicator Accessors
- •Inter-communicator Operations
- •Inter-Communication Examples
- •Caching
- •Functionality
- •Communicators
- •Windows
- •Datatypes
- •Error Class for Invalid Keyval
- •Attributes Example
- •Naming Objects
- •Formalizing the Loosely Synchronous Model
- •Basic Statements
- •Models of Execution
- •Static communicator allocation
- •Dynamic communicator allocation
- •The General case
- •Process Topologies
- •Introduction
- •Virtual Topologies
- •Embedding in MPI
- •Overview of the Functions
- •Topology Constructors
- •Cartesian Constructor
- •Cartesian Convenience Function: MPI_DIMS_CREATE
- •General (Graph) Constructor
- •Distributed (Graph) Constructor
- •Topology Inquiry Functions
- •Cartesian Shift Coordinates
- •Partitioning of Cartesian structures
- •Low-Level Topology Functions
- •An Application Example
- •MPI Environmental Management
- •Implementation Information
- •Version Inquiries
- •Environmental Inquiries
- •Tag Values
- •Host Rank
- •IO Rank
- •Clock Synchronization
- •Memory Allocation
- •Error Handling
- •Error Handlers for Communicators
- •Error Handlers for Windows
- •Error Handlers for Files
- •Freeing Errorhandlers and Retrieving Error Strings
- •Error Codes and Classes
- •Error Classes, Error Codes, and Error Handlers
- •Timers and Synchronization
- •Startup
- •Allowing User Functions at Process Termination
- •Determining Whether MPI Has Finished
- •Portable MPI Process Startup
- •The Info Object
- •Process Creation and Management
- •Introduction
- •The Dynamic Process Model
- •Starting Processes
- •The Runtime Environment
- •Process Manager Interface
- •Processes in MPI
- •Starting Processes and Establishing Communication
- •Reserved Keys
- •Spawn Example
- •Manager-worker Example, Using MPI_COMM_SPAWN.
- •Establishing Communication
- •Names, Addresses, Ports, and All That
- •Server Routines
- •Client Routines
- •Name Publishing
- •Reserved Key Values
- •Client/Server Examples
- •Ocean/Atmosphere - Relies on Name Publishing
- •Simple Client-Server Example.
- •Other Functionality
- •Universe Size
- •Singleton MPI_INIT
- •MPI_APPNUM
- •Releasing Connections
- •Another Way to Establish MPI Communication
- •One-Sided Communications
- •Introduction
- •Initialization
- •Window Creation
- •Window Attributes
- •Communication Calls
- •Examples
- •Accumulate Functions
- •Synchronization Calls
- •Fence
- •General Active Target Synchronization
- •Lock
- •Assertions
- •Examples
- •Error Handling
- •Error Handlers
- •Error Classes
- •Semantics and Correctness
- •Atomicity
- •Progress
- •Registers and Compiler Optimizations
- •External Interfaces
- •Introduction
- •Generalized Requests
- •Examples
- •Associating Information with Status
- •MPI and Threads
- •General
- •Initialization
- •Introduction
- •File Manipulation
- •Opening a File
- •Closing a File
- •Deleting a File
- •Resizing a File
- •Preallocating Space for a File
- •Querying the Size of a File
- •Querying File Parameters
- •File Info
- •Reserved File Hints
- •File Views
- •Data Access
- •Data Access Routines
- •Positioning
- •Synchronism
- •Coordination
- •Data Access Conventions
- •Data Access with Individual File Pointers
- •Data Access with Shared File Pointers
- •Noncollective Operations
- •Collective Operations
- •Seek
- •Split Collective Data Access Routines
- •File Interoperability
- •Datatypes for File Interoperability
- •Extent Callback
- •Datarep Conversion Functions
- •Matching Data Representations
- •Consistency and Semantics
- •File Consistency
- •Random Access vs. Sequential Files
- •Progress
- •Collective File Operations
- •Type Matching
- •Logical vs. Physical File Layout
- •File Size
- •Examples
- •Asynchronous I/O
- •I/O Error Handling
- •I/O Error Classes
- •Examples
- •Subarray Filetype Constructor
- •Requirements
- •Discussion
- •Logic of the Design
- •Examples
- •MPI Library Implementation
- •Systems with Weak Symbols
- •Systems Without Weak Symbols
- •Complications
- •Multiple Counting
- •Linker Oddities
- •Multiple Levels of Interception
- •Deprecated Functions
- •Deprecated since MPI-2.0
- •Deprecated since MPI-2.2
- •Language Bindings
- •Overview
- •Design
- •C++ Classes for MPI
- •Class Member Functions for MPI
- •Semantics
- •C++ Datatypes
- •Communicators
- •Exceptions
- •Mixed-Language Operability
- •Problems With Fortran Bindings for MPI
- •Problems Due to Strong Typing
- •Problems Due to Data Copying and Sequence Association
- •Special Constants
- •Fortran 90 Derived Types
- •A Problem with Register Optimization
- •Basic Fortran Support
- •Extended Fortran Support
- •The mpi Module
- •No Type Mismatch Problems for Subroutines with Choice Arguments
- •Additional Support for Fortran Numeric Intrinsic Types
- •Language Interoperability
- •Introduction
- •Assumptions
- •Initialization
- •Transfer of Handles
- •Status
- •MPI Opaque Objects
- •Datatypes
- •Callback Functions
- •Error Handlers
- •Reduce Operations
- •Addresses
- •Attributes
- •Extra State
- •Constants
- •Interlanguage Communication
- •Language Bindings Summary
- •Groups, Contexts, Communicators, and Caching Fortran Bindings
- •External Interfaces C++ Bindings
- •Change-Log
- •Bibliography
- •Examples Index
- •MPI Declarations Index
- •MPI Function Index
CHAPTER 16. LANGUAGE BINDINGS
Graphcomm& Graphcomm::Clone() const
Distgraphcomm& Distgraphcomm::Clone() const
Rationale. Clone() provides the "virtual dup" functionality that is expected by C++ programmers and library writers. Since Clone() returns a new object by reference, users are responsible for eventually deleting the object. A new name is introduced rather than changing the functionality of Dup(). (End of rationale.)
Advice to implementors. Within their class declarations, prototypes for Clone() and Dup() would look like the following:
namespace MPI {
  class Comm {
    virtual Comm& Clone() const = 0;
  };
  class Intracomm : public Comm {
    Intracomm Dup() const { ... };
    virtual Intracomm& Clone() const { ... };
  };
  class Intercomm : public Comm {
    Intercomm Dup() const { ... };
    virtual Intercomm& Clone() const { ... };
  };
  // Cartcomm, Graphcomm,
  // and Distgraphcomm are similarly defined
};
(End of advice to implementors.)
16.1.8 Exceptions
The C++ language interface for MPI includes the predefined error handler MPI::ERRORS_THROW_EXCEPTIONS for use with the Set_errhandler() member functions. MPI::ERRORS_THROW_EXCEPTIONS can only be set or retrieved by C++ functions. If a non-C++ program causes an error that invokes the MPI::ERRORS_THROW_EXCEPTIONS error handler, the exception will pass up the calling stack until C++ code can catch it. If there is no C++ code to catch it, the behavior is undefined. In a multi-threaded environment, or if a nonblocking MPI call throws an exception while making progress in the background, the behavior is implementation dependent.

The error handler MPI::ERRORS_THROW_EXCEPTIONS causes an MPI::Exception to be thrown for any MPI result code other than MPI::SUCCESS. The public interface to the MPI::Exception class is defined as follows:
namespace MPI {
  class Exception {
  public:
    Exception(int error_code);
    int Get_error_code() const;
    int Get_error_class() const;
    const char *Get_error_string() const;
  };
};
Advice to implementors. The exception will be thrown within the body of MPI::ERRORS_THROW_EXCEPTIONS. It is expected that control will be returned to the user when the exception is thrown. Some MPI functions specify certain return information in their parameters in the case of an error when MPI_ERRORS_RETURN is specified. The same type of return information must be provided when exceptions are thrown.

For example, MPI_WAITALL puts an error code for each request in the corresponding entry in the status array and returns MPI_ERR_IN_STATUS. When using MPI::ERRORS_THROW_EXCEPTIONS, it is expected that the error codes in the status array will be set appropriately before the exception is thrown.
(End of advice to implementors.)
16.1.9 Mixed-Language Operability
The C++ language interface provides functions listed below for mixed-language operability. These functions provide for a seamless transition between C and C++. For the case where the C++ class corresponding to <CLASS> has derived classes, functions are also provided for converting between the derived classes and the C MPI_<CLASS>.
MPI::<CLASS>& MPI::<CLASS>::operator=(const MPI_<CLASS>& data)
MPI::<CLASS>(const MPI_<CLASS>& data)
MPI::<CLASS>::operator MPI_<CLASS>() const
These functions are discussed in Section 16.3.4.
16.1.10 Profiling
This section specifies the requirements of a C++ profiling interface to MPI.
Advice to implementors. Since the main goal of profiling is to intercept function calls from user code, it is the implementor's decision how to layer the underlying implementation to allow function calls to be intercepted and profiled. If an implementation of the MPI C++ bindings is layered on top of MPI bindings in another language (such as C), or if the C++ bindings are layered on top of a profiling interface in another language, no extra profiling interface is necessary because the underlying MPI implementation already meets the MPI profiling interface requirements.

Native C++ MPI implementations that do not have access to other profiling interfaces must implement an interface that meets the requirements outlined in this section.
High-quality implementations can implement the interface outlined in this section in order to promote portable C++ profiling libraries. Implementors may wish to provide an option whether to build the C++ profiling interface or not; C++ implementations that are already layered on top of bindings in another language or another profiling interface will have to insert a third layer to implement the C++ profiling interface.
(End of advice to implementors.)
To meet the requirements of the C++ MPI profiling interface, an implementation of the MPI functions must:
1. Provide a mechanism through which all of the MPI defined functions may be accessed with a name shift. Thus all of the MPI functions (which normally start with the prefix "MPI::") should also be accessible with the prefix "PMPI::".
2. Ensure that those MPI functions which are not replaced may still be linked into an executable image without causing name clashes.
3. Document the implementation of different language bindings of the MPI interface if they are layered on top of each other, so that profiler developers know whether they must implement the profile interface for each binding, or can economize by implementing it only for the lowest-level routines.
4. Where the implementation of different language bindings is done through a layered approach (e.g., the C++ binding is a set of "wrapper" functions which call the C implementation), ensure that these wrapper functions are separable from the rest of the library.
This is necessary to allow a separate profiling library to be correctly implemented, since (at least with Unix linker semantics) the profiling library must contain these wrapper functions if it is to perform as expected. This requirement allows the author of the profiling library to extract these functions from the original MPI library and add them into the profiling library without bringing along any other unnecessary code.
5. Provide a no-op routine MPI::Pcontrol in the MPI library.
Advice to implementors. There are (at least) two apparent options for implementing the C++ profiling interface: inheritance or caching. An inheritance-based approach may not be attractive because it may require a virtual inheritance implementation of the communicator classes. Thus, it is most likely that implementors will cache PMPI objects on their corresponding MPI objects. The caching scheme is outlined below.

The "real" entry points to each routine can be provided within a namespace PMPI. The non-profiling version can then be provided within a namespace MPI.

Caching instances of PMPI objects in the MPI handles provides the "has a" relationship that is necessary to implement the profiling scheme. Each instance of an MPI object simply "wraps up" an instance of a PMPI object. MPI objects can then perform profiling actions before invoking the corresponding function in their internal PMPI object.

The key to making the profiling work by simply re-linking programs is to have a header file that declares all the MPI functions. The functions must be defined elsewhere, and compiled into a library. MPI constants should be declared extern in the MPI namespace. For example, the following is an excerpt from a sample mpi.h file:
Example 16.6 Sample mpi.h file.
namespace PMPI {
  class Comm {
  public:
    int Get_size() const;
  };
  // etc.
};

namespace MPI {
  class Comm {
  public:
    int Get_size() const;
  private:
    PMPI::Comm pmpi_comm;
  };
};
Note that all constructors, the assignment operator, and the destructor in the MPI class will need to initialize/destroy the internal PMPI object as appropriate.
The definitions of the functions must be in separate object files; the PMPI class member functions and the non-profiling versions of the MPI class member functions can be compiled into libmpi.a, while the profiling versions can be compiled into libpmpi.a. Note that the PMPI class member functions and the MPI constants must be in different object files than the non-profiling MPI class member functions in the libmpi.a library to prevent multiple definitions of MPI class member function names when linking both libmpi.a and libpmpi.a. For example:
Example 16.7 pmpi.cc, to be compiled into libmpi.a.
int PMPI::Comm::Get_size() const
{
// Implementation of MPI_COMM_SIZE
}
Example 16.8 constants.cc, to be compiled into libmpi.a.
const MPI::Intracomm MPI::COMM_WORLD;
Example 16.9 mpi_no_profile.cc, to be compiled into libmpi.a.
int MPI::Comm::Get_size() const
{
return pmpi_comm.Get_size();
}
Example 16.10 mpi_profile.cc, to be compiled into libpmpi.a.
int MPI::Comm::Get_size() const
{
    // Do profiling stuff
    int ret = pmpi_comm.Get_size();
    // More profiling stuff
    return ret;
}
(End of advice to implementors.)
16.2 Fortran Support
16.2.1 Overview
The Fortran MPI-2 language bindings have been designed to be compatible with the Fortran 90 standard (and later). These bindings are in most cases compatible with Fortran 77, implicit-style interfaces.
Rationale. Fortran 90 contains numerous features designed to make it a more "modern" language than Fortran 77. It seems natural that MPI should be able to take advantage of these new features with a set of bindings tailored to Fortran 90. MPI does not (yet) use many of these features because of a number of technical difficulties. (End of rationale.)
MPI defines two levels of Fortran support, described in Sections 16.2.3 and 16.2.4. In the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its successors, unless qualified.
1. Basic Fortran Support An implementation with this level of Fortran support provides the original Fortran bindings specified in MPI-1, with small additional requirements specified in Section 16.2.3.
2. Extended Fortran Support An implementation with this level of Fortran support provides Basic Fortran Support plus additional features that specifically support Fortran 90, as described in Section 16.2.4.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended Fortran Support unless the target compiler does not support modules or KIND-parameterized types.