16.3. LANGUAGE INTEROPERABILITY |
497 |
      call MPI_FILE_WRITE(fh, x, 100, xtype, status, ierror)
      call MPI_FILE_CLOSE(fh, ierror)
   endif
   call MPI_BARRIER(MPI_COMM_WORLD, ierror)
   if (myrank .eq. 1) then
      call MPI_FILE_OPEN(MPI_COMM_SELF, 'foo', MPI_MODE_RDONLY, &
                         MPI_INFO_NULL, fh, ierror)
      call MPI_FILE_SET_VIEW(fh, 0, xtype, xtype, 'external32', &
                             MPI_INFO_NULL, ierror)
      call MPI_FILE_READ(fh, x, 100, xtype, status, ierror)
      call MPI_FILE_CLOSE(fh, ierror)
   endif

If processes 0 and 1 are on different machines, this code may not work as expected if the size is different on the two machines. (End of advice to users.)
16.3 Language Interoperability

16.3.1 Introduction

It is not uncommon for library developers to use one language to develop an applications library that may be called by an application program written in a different language. MPI currently supports ISO (previously ANSI) C, C++, and Fortran bindings. It should be possible for applications in any of the supported languages to call MPI-related functions in another language.

Moreover, MPI allows the development of client-server code, with MPI communication used between a parallel client and a parallel server. It should be possible to code the server in one language and the clients in another language. To do so, communications should be possible between applications written in different languages.

There are several issues that need to be addressed in order to achieve interoperability.

Initialization We need to specify how the MPI environment is initialized for all languages.

Interlanguage passing of MPI opaque objects We need to specify how MPI object handles are passed between languages. We also need to specify what happens when an MPI object is accessed in one language, to retrieve information (e.g., attributes) set in another language.

Interlanguage communication We need to specify how messages sent in one language can be received in another language.

It is highly desirable that the solution for interlanguage interoperability be extendable to new languages, should MPI bindings be defined for such languages.
16.3.2 Assumptions

We assume that conventions exist for programs written in one language to call routines written in another language. These conventions specify how to link routines in different languages into one program, how to call functions in a different language, how to pass arguments between languages, and the correspondence between basic data types in different languages. In general, these conventions will be implementation dependent. Furthermore, not every basic datatype may have a matching type in other languages. For example, C/C++ character strings may not be compatible with Fortran CHARACTER variables. However, we assume that a Fortran INTEGER, as well as a (sequence associated) Fortran array of INTEGERs, can be passed to a C or C++ program. We also assume that Fortran, C, and C++ have address-sized integers. This does not mean that the default-size integers are the same size as default-sized pointers, but only that there is some way to hold (and pass) a C address in a Fortran integer. It is also assumed that INTEGER(KIND=MPI_OFFSET_KIND) can be passed from Fortran to C as MPI_Offset.
16.3.3 Initialization

A call to MPI_INIT or MPI_INIT_THREAD, from any language, initializes MPI for execution in all languages.

Advice to users. Certain implementations use the (inout) argc, argv arguments of the C/C++ version of MPI_INIT in order to propagate values for argc and argv to all executing processes. Use of the Fortran version of MPI_INIT to initialize MPI may result in a loss of this ability. (End of advice to users.)

The function MPI_INITIALIZED returns the same answer in all languages.

The function MPI_FINALIZE finalizes the MPI environments for all languages.

The function MPI_FINALIZED returns the same answer in all languages.

The function MPI_ABORT kills processes, irrespective of the language used by the caller or by the processes killed.

The MPI environment is initialized in the same manner for all languages by MPI_INIT. E.g., MPI_COMM_WORLD carries the same information regardless of language: same processes, same environmental attributes, same error handlers.

Information can be added to info objects in one language and retrieved in another.

Advice to users. The use of several languages in one MPI program may require the use of special options at compile and/or link time. (End of advice to users.)

Advice to implementors. Implementations may selectively link language-specific MPI libraries only to codes that need them, so as not to increase the size of binaries for codes that use only one language. The MPI initialization code need perform initialization for a language only if that language library is loaded. (End of advice to implementors.)
16.3.4 Transfer of Handles

Handles are passed between Fortran and C or C++ by using an explicit C wrapper to convert Fortran handles to C handles. There is no direct access to C or C++ handles in Fortran. Handles are passed between C and C++ using overloaded C++ operators called from C++ code. There is no direct access to C++ objects from C.
The type definition MPI_Fint is provided in C/C++ for an integer of the size that matches a Fortran INTEGER; usually, MPI_Fint will be equivalent to int.

The following functions are provided in C to convert from a Fortran communicator handle (which is an integer) to a C communicator handle, and vice versa. See also Section 2.6.5 on page 21.
MPI_Comm MPI_Comm_f2c(MPI_Fint comm)
If comm is a valid Fortran handle to a communicator, then MPI_Comm_f2c returns a valid C handle to that same communicator; if comm = MPI_COMM_NULL (Fortran value), then MPI_Comm_f2c returns a null C handle; if comm is an invalid Fortran handle, then MPI_Comm_f2c returns an invalid C handle.
MPI_Fint MPI_Comm_c2f(MPI_Comm comm)
The function MPI_Comm_c2f translates a C communicator handle into a Fortran handle to the same communicator; it maps a null handle into a null handle and an invalid handle into an invalid handle.
Similar functions are provided for the other types of opaque objects.
MPI_Datatype MPI_Type_f2c(MPI_Fint datatype)
MPI_Fint MPI_Type_c2f(MPI_Datatype datatype)
MPI_Group MPI_Group_f2c(MPI_Fint group)
MPI_Fint MPI_Group_c2f(MPI_Group group)
MPI_Request MPI_Request_f2c(MPI_Fint request)
MPI_Fint MPI_Request_c2f(MPI_Request request)
MPI_File MPI_File_f2c(MPI_Fint file)
MPI_Fint MPI_File_c2f(MPI_File file)
MPI_Win MPI_Win_f2c(MPI_Fint win)
MPI_Fint MPI_Win_c2f(MPI_Win win)
MPI_Op MPI_Op_f2c(MPI_Fint op)
MPI_Fint MPI_Op_c2f(MPI_Op op)
MPI_Info MPI_Info_f2c(MPI_Fint info)
MPI_Fint MPI_Info_c2f(MPI_Info info)
MPI_Errhandler MPI_Errhandler_f2c(MPI_Fint errhandler)
MPI_Fint MPI_Errhandler_c2f(MPI_Errhandler errhandler)
Example 16.13 The example below illustrates how the Fortran MPI function MPI_TYPE_COMMIT can be implemented by wrapping the C MPI function MPI_Type_commit with a C wrapper to do handle conversions. In this example a Fortran-C interface is assumed where a Fortran function is all upper case when referred to from C and arguments are passed by addresses.
! FORTRAN PROCEDURE
SUBROUTINE MPI_TYPE_COMMIT( DATATYPE, IERR)
   INTEGER DATATYPE, IERR
   CALL MPI_X_TYPE_COMMIT(DATATYPE, IERR)
   RETURN
END
/* C wrapper */
void MPI_X_TYPE_COMMIT( MPI_Fint *f_handle, MPI_Fint *ierr)
{
    MPI_Datatype datatype;

    datatype = MPI_Type_f2c( *f_handle);
    *ierr = (MPI_Fint)MPI_Type_commit( &datatype);
    *f_handle = MPI_Type_c2f(datatype);
    return;
}
The same approach can be used for all other MPI functions. The call to MPI_xxx_f2c (resp. MPI_xxx_c2f) can be omitted when the handle is an OUT (resp. IN) argument, rather than INOUT.
Rationale. The design here provides a convenient solution for the prevalent case, where a C wrapper is used to allow Fortran code to call a C library, or C code to call a Fortran library. The use of C wrappers is much more likely than the use of Fortran wrappers, because it is much more likely that a variable of type INTEGER can be passed to C, than a C handle can be passed to Fortran.

Returning the converted value as a function value rather than through the argument list allows the generation of efficient inlined code when these functions are simple (e.g., the identity). The conversion function in the wrapper does not catch an invalid handle argument. Instead, an invalid handle is passed below to the library function, which, presumably, checks its input arguments. (End of rationale.)
C and C++ The C++ language interface provides the functions listed below for mixed-language interoperability. The token <CLASS> is used below to indicate any valid MPI opaque handle name (e.g., Group), except where noted. For the case where the C++ class corresponding to <CLASS> has derived classes, functions are also provided for converting between the derived classes and the C MPI_<CLASS>.

The following function allows assignment from a C MPI handle to a C++ MPI handle.

MPI::<CLASS>& MPI::<CLASS>::operator=(const MPI_<CLASS>& data)

The constructor below creates a C++ MPI object from a C MPI handle. This allows the automatic promotion of a C MPI handle to a C++ MPI handle.

MPI::<CLASS>::<CLASS>(const MPI_<CLASS>& data)

Example 16.14 In order for a C program to use a C++ library, the C++ library must export a C interface that provides appropriate conversions before invoking the underlying
C++ library call. This example shows a C interface function that invokes a C++ library call with a C communicator; the communicator is automatically promoted to a C++ handle when the underlying C++ function is invoked.
// C++ library function prototype
void cpp_lib_call(MPI::Comm cpp_comm);

// Exported C function prototype
extern "C" {
    void c_interface(MPI_Comm c_comm);
}

void c_interface(MPI_Comm c_comm)
{
    // the MPI_Comm (c_comm) is automatically promoted to MPI::Comm
    cpp_lib_call(c_comm);
}
The following function allows conversion from C++ objects to C MPI handles. In this case, the casting operator is overloaded to provide the functionality.
MPI::<CLASS>::operator MPI_<CLASS>() const
Example 16.15 A C library routine is called from a C++ program. The C library routine is prototyped to take an MPI_Comm as an argument.
// C function prototype
extern "C" {
    void c_lib_call(MPI_Comm c_comm);
}

void cpp_function()
{
    // Create a C++ communicator, and initialize it with a dup of
    // MPI::COMM_WORLD
    MPI::Intracomm cpp_comm(MPI::COMM_WORLD.Dup());
    c_lib_call(cpp_comm);
}
Rationale. Providing conversion from C to C++ via constructors and from C++ to C via casting allows the compiler to make automatic conversions. Calling C from C++ becomes trivial, as does the provision of a C or Fortran interface to a C++ library. (End of rationale.)
Advice to users. Note that the casting and promotion operators return new handles by value. Using these new handles as INOUT parameters will affect the internal MPI object, but will not affect the original handle from which it was cast. (End of advice to users.)

It is important to note that all C++ objects with corresponding C handles can be used interchangeably by an application. For example, an application can cache an attribute on MPI_COMM_WORLD and later retrieve it from MPI::COMM_WORLD.