- •Contents
- •List of Figures
- •List of Tables
- •Acknowledgments
- •Introduction to MPI
- •Overview and Goals
- •Background of MPI-1.0
- •Background of MPI-1.1, MPI-1.2, and MPI-2.0
- •Background of MPI-1.3 and MPI-2.1
- •Background of MPI-2.2
- •Who Should Use This Standard?
- •What Platforms Are Targets For Implementation?
- •What Is Included In The Standard?
- •What Is Not Included In The Standard?
- •Organization of this Document
- •MPI Terms and Conventions
- •Document Notation
- •Naming Conventions
- •Semantic Terms
- •Data Types
- •Opaque Objects
- •Array Arguments
- •State
- •Named Constants
- •Choice
- •Addresses
- •Language Binding
- •Deprecated Names and Functions
- •Fortran Binding Issues
- •C Binding Issues
- •C++ Binding Issues
- •Functions and Macros
- •Processes
- •Error Handling
- •Implementation Issues
- •Independence of Basic Runtime Routines
- •Interaction with Signals
- •Examples
- •Point-to-Point Communication
- •Introduction
- •Blocking Send and Receive Operations
- •Blocking Send
- •Message Data
- •Message Envelope
- •Blocking Receive
- •Return Status
- •Passing MPI_STATUS_IGNORE for Status
- •Data Type Matching and Data Conversion
- •Type Matching Rules
- •Type MPI_CHARACTER
- •Data Conversion
- •Communication Modes
- •Semantics of Point-to-Point Communication
- •Buffer Allocation and Usage
- •Nonblocking Communication
- •Communication Request Objects
- •Communication Initiation
- •Communication Completion
- •Semantics of Nonblocking Communications
- •Multiple Completions
- •Non-destructive Test of status
- •Probe and Cancel
- •Persistent Communication Requests
- •Send-Receive
- •Null Processes
- •Datatypes
- •Derived Datatypes
- •Type Constructors with Explicit Addresses
- •Datatype Constructors
- •Subarray Datatype Constructor
- •Distributed Array Datatype Constructor
- •Address and Size Functions
- •Lower-Bound and Upper-Bound Markers
- •Extent and Bounds of Datatypes
- •True Extent of Datatypes
- •Commit and Free
- •Duplicating a Datatype
- •Use of General Datatypes in Communication
- •Correct Use of Addresses
- •Decoding a Datatype
- •Examples
- •Pack and Unpack
- •Canonical MPI_PACK and MPI_UNPACK
- •Collective Communication
- •Introduction and Overview
- •Communicator Argument
- •Applying Collective Operations to Intercommunicators
- •Barrier Synchronization
- •Broadcast
- •Example using MPI_BCAST
- •Gather
- •Examples using MPI_GATHER, MPI_GATHERV
- •Scatter
- •Examples using MPI_SCATTER, MPI_SCATTERV
- •Example using MPI_ALLGATHER
- •All-to-All Scatter/Gather
- •Global Reduction Operations
- •Reduce
- •Signed Characters and Reductions
- •MINLOC and MAXLOC
- •All-Reduce
- •Process-local reduction
- •Reduce-Scatter
- •MPI_REDUCE_SCATTER_BLOCK
- •MPI_REDUCE_SCATTER
- •Scan
- •Inclusive Scan
- •Exclusive Scan
- •Example using MPI_SCAN
- •Correctness
- •Introduction
- •Features Needed to Support Libraries
- •MPI's Support for Libraries
- •Basic Concepts
- •Groups
- •Contexts
- •Intra-Communicators
- •Group Management
- •Group Accessors
- •Group Constructors
- •Group Destructors
- •Communicator Management
- •Communicator Accessors
- •Communicator Constructors
- •Communicator Destructors
- •Motivating Examples
- •Current Practice #1
- •Current Practice #2
- •(Approximate) Current Practice #3
- •Example #4
- •Library Example #1
- •Library Example #2
- •Inter-Communication
- •Inter-communicator Accessors
- •Inter-communicator Operations
- •Inter-Communication Examples
- •Caching
- •Functionality
- •Communicators
- •Windows
- •Datatypes
- •Error Class for Invalid Keyval
- •Attributes Example
- •Naming Objects
- •Formalizing the Loosely Synchronous Model
- •Basic Statements
- •Models of Execution
- •Static communicator allocation
- •Dynamic communicator allocation
- •The General case
- •Process Topologies
- •Introduction
- •Virtual Topologies
- •Embedding in MPI
- •Overview of the Functions
- •Topology Constructors
- •Cartesian Constructor
- •Cartesian Convenience Function: MPI_DIMS_CREATE
- •General (Graph) Constructor
- •Distributed (Graph) Constructor
- •Topology Inquiry Functions
- •Cartesian Shift Coordinates
- •Partitioning of Cartesian structures
- •Low-Level Topology Functions
- •An Application Example
- •MPI Environmental Management
- •Implementation Information
- •Version Inquiries
- •Environmental Inquiries
- •Tag Values
- •Host Rank
- •IO Rank
- •Clock Synchronization
- •Memory Allocation
- •Error Handling
- •Error Handlers for Communicators
- •Error Handlers for Windows
- •Error Handlers for Files
- •Freeing Errorhandlers and Retrieving Error Strings
- •Error Codes and Classes
- •Error Classes, Error Codes, and Error Handlers
- •Timers and Synchronization
- •Startup
- •Allowing User Functions at Process Termination
- •Determining Whether MPI Has Finished
- •Portable MPI Process Startup
- •The Info Object
- •Process Creation and Management
- •Introduction
- •The Dynamic Process Model
- •Starting Processes
- •The Runtime Environment
- •Process Manager Interface
- •Processes in MPI
- •Starting Processes and Establishing Communication
- •Reserved Keys
- •Spawn Example
- •Manager-worker Example, Using MPI_COMM_SPAWN.
- •Establishing Communication
- •Names, Addresses, Ports, and All That
- •Server Routines
- •Client Routines
- •Name Publishing
- •Reserved Key Values
- •Client/Server Examples
- •Ocean/Atmosphere - Relies on Name Publishing
- •Simple Client-Server Example.
- •Other Functionality
- •Universe Size
- •Singleton MPI_INIT
- •MPI_APPNUM
- •Releasing Connections
- •Another Way to Establish MPI Communication
- •One-Sided Communications
- •Introduction
- •Initialization
- •Window Creation
- •Window Attributes
- •Communication Calls
- •Examples
- •Accumulate Functions
- •Synchronization Calls
- •Fence
- •General Active Target Synchronization
- •Lock
- •Assertions
- •Examples
- •Error Handling
- •Error Handlers
- •Error Classes
- •Semantics and Correctness
- •Atomicity
- •Progress
- •Registers and Compiler Optimizations
- •External Interfaces
- •Introduction
- •Generalized Requests
- •Examples
- •Associating Information with Status
- •MPI and Threads
- •General
- •Initialization
- •Introduction
- •File Manipulation
- •Opening a File
- •Closing a File
- •Deleting a File
- •Resizing a File
- •Preallocating Space for a File
- •Querying the Size of a File
- •Querying File Parameters
- •File Info
- •Reserved File Hints
- •File Views
- •Data Access
- •Data Access Routines
- •Positioning
- •Synchronism
- •Coordination
- •Data Access Conventions
- •Data Access with Individual File Pointers
- •Data Access with Shared File Pointers
- •Noncollective Operations
- •Collective Operations
- •Seek
- •Split Collective Data Access Routines
- •File Interoperability
- •Datatypes for File Interoperability
- •Extent Callback
- •Datarep Conversion Functions
- •Matching Data Representations
- •Consistency and Semantics
- •File Consistency
- •Random Access vs. Sequential Files
- •Progress
- •Collective File Operations
- •Type Matching
- •Logical vs. Physical File Layout
- •File Size
- •Examples
- •Asynchronous I/O
- •I/O Error Handling
- •I/O Error Classes
- •Examples
- •Subarray Filetype Constructor
- •Requirements
- •Discussion
- •Logic of the Design
- •Examples
- •MPI Library Implementation
- •Systems with Weak Symbols
- •Systems Without Weak Symbols
- •Complications
- •Multiple Counting
- •Linker Oddities
- •Multiple Levels of Interception
- •Deprecated Functions
- •Deprecated since MPI-2.0
- •Deprecated since MPI-2.2
- •Language Bindings
- •Overview
- •Design
- •C++ Classes for MPI
- •Class Member Functions for MPI
- •Semantics
- •C++ Datatypes
- •Communicators
- •Exceptions
- •Mixed-Language Operability
- •Problems With Fortran Bindings for MPI
- •Problems Due to Strong Typing
- •Problems Due to Data Copying and Sequence Association
- •Special Constants
- •Fortran 90 Derived Types
- •A Problem with Register Optimization
- •Basic Fortran Support
- •Extended Fortran Support
- •The mpi Module
- •No Type Mismatch Problems for Subroutines with Choice Arguments
- •Additional Support for Fortran Numeric Intrinsic Types
- •Language Interoperability
- •Introduction
- •Assumptions
- •Initialization
- •Transfer of Handles
- •Status
- •MPI Opaque Objects
- •Datatypes
- •Callback Functions
- •Error Handlers
- •Reduce Operations
- •Addresses
- •Attributes
- •Extra State
- •Constants
- •Interlanguage Communication
- •Language Bindings Summary
- •Groups, Contexts, Communicators, and Caching Fortran Bindings
- •External Interfaces C++ Bindings
- •Change-Log
- •Bibliography
- •Examples Index
- •MPI Declarations Index
- •MPI Function Index
13.4. DATA ACCESS
int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset, int whence)

MPI_FILE_SEEK_SHARED(FH, OFFSET, WHENCE, IERROR)
    INTEGER FH, WHENCE, IERROR
    INTEGER(KIND=MPI_OFFSET_KIND) OFFSET

{void MPI::File::Seek_shared(MPI::Offset offset, int whence) (binding deprecated, see Section 15.2) }
MPI_FILE_SEEK_SHARED updates the shared file pointer according to whence, which has the following possible values:
• MPI_SEEK_SET: the pointer is set to offset
• MPI_SEEK_CUR: the pointer is set to the current pointer position plus offset
• MPI_SEEK_END: the pointer is set to the end of file plus offset
MPI_FILE_SEEK_SHARED is collective; all the processes in the communicator group associated with the file handle fh must call MPI_FILE_SEEK_SHARED with the same values for offset and whence.
The offset can be negative, which allows seeking backwards. It is erroneous to seek to a negative position in the view.
MPI_FILE_GET_POSITION_SHARED(fh, offset)
  IN   fh       file handle (handle)
  OUT  offset   offset of shared pointer (integer)
int MPI_File_get_position_shared(MPI_File fh, MPI_Offset *offset)

MPI_FILE_GET_POSITION_SHARED(FH, OFFSET, IERROR)
    INTEGER FH, IERROR
    INTEGER(KIND=MPI_OFFSET_KIND) OFFSET

{MPI::Offset MPI::File::Get_position_shared() const (binding deprecated, see Section 15.2) }

MPI_FILE_GET_POSITION_SHARED returns, in offset, the current position of the shared file pointer in etype units relative to the current view.
Advice to users. The offset can be used in a future call to MPI_FILE_SEEK_SHARED using whence = MPI_SEEK_SET to return to the current position. To set the displacement to the current file pointer position, first convert offset into an absolute byte position using MPI_FILE_GET_BYTE_OFFSET, then call MPI_FILE_SET_VIEW with the resulting displacement. (End of advice to users.)
13.4.5 Split Collective Data Access Routines
MPI provides a restricted form of "nonblocking collective" I/O operations for all data accesses using split collective data access routines. These routines are referred to as "split" collective routines because a single collective operation is split in two: a begin routine and
an end routine. The begin routine begins the operation, much like a nonblocking data access (e.g., MPI_FILE_IREAD). The end routine completes the operation, much like the matching test or wait (e.g., MPI_WAIT). As with nonblocking data access operations, the user must not use the buffer passed to a begin routine while the routine is outstanding; the operation must be completed with an end routine before it is safe to free buffers, etc.
Split collective data access operations on a file handle fh are subject to the semantic rules given below.
• On any MPI process, each file handle may have at most one active split collective operation at any time.
• Begin calls are collective over the group of processes that participated in the collective open and follow the ordering rules for collective calls.
• End calls are collective over the group of processes that participated in the collective open and follow the ordering rules for collective calls. Each end call matches the preceding begin call for the same collective operation. When an "end" call is made, exactly one unmatched "begin" call for the same operation must precede it.
• An implementation is free to implement any split collective data access routine using the corresponding blocking collective routine when either the begin call (e.g., MPI_FILE_READ_ALL_BEGIN) or the end call (e.g., MPI_FILE_READ_ALL_END) is issued. The begin and end calls are provided to allow the user and MPI implementation to optimize the collective operation.
• Split collective operations do not match the corresponding regular collective operation. For example, in a single collective read operation, an MPI_FILE_READ_ALL on one process does not match an MPI_FILE_READ_ALL_BEGIN/MPI_FILE_READ_ALL_END pair on another process.
• Split collective routines must specify a buffer in both the begin and end routines. By specifying the buffer that receives data in the end routine, we can avoid many (though not all) of the problems described in "A Problem with Register Optimization," Section 16.2.2, page 485.
• No collective I/O operations are permitted on a file handle concurrently with a split collective access on that file handle (i.e., between the begin and end of the access). That is,

    MPI_File_read_all_begin(fh, ...);
    ...
    MPI_File_read_all(fh, ...);
    ...
    MPI_File_read_all_end(fh, ...);

is erroneous.
• In a multithreaded implementation, any split collective begin and end operation called by a process must be called from the same thread. This restriction is made to simplify the implementation in the multithreaded case. (Note that we have already disallowed having two threads begin a split collective operation on the same file handle since only one split collective operation can be active on a file handle at any time.)
The arguments for these routines have the same meaning as for the equivalent collective versions (e.g., the argument definitions for MPI_FILE_READ_ALL_BEGIN and MPI_FILE_READ_ALL_END are equivalent to the arguments for MPI_FILE_READ_ALL). The begin routine (e.g., MPI_FILE_READ_ALL_BEGIN) begins a split collective operation that, when completed with the matching end routine (i.e., MPI_FILE_READ_ALL_END), produces the result as defined for the equivalent collective routine (i.e., MPI_FILE_READ_ALL).
For the purpose of consistency semantics (Section 13.6.1, page 437), a matched pair of split collective data access operations (e.g., MPI_FILE_READ_ALL_BEGIN and MPI_FILE_READ_ALL_END) compose a single data access.
MPI_FILE_READ_AT_ALL_BEGIN(fh, offset, buf, count, datatype)
  IN   fh        file handle (handle)
  IN   offset    file offset (integer)
  OUT  buf       initial address of buffer (choice)
  IN   count     number of elements in buffer (integer)
  IN   datatype  datatype of each buffer element (handle)
int MPI_File_read_at_all_begin(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype)

MPI_FILE_READ_AT_ALL_BEGIN(FH, OFFSET, BUF, COUNT, DATATYPE, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, IERROR
    INTEGER(KIND=MPI_OFFSET_KIND) OFFSET

{void MPI::File::Read_at_all_begin(MPI::Offset offset, void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }
MPI_FILE_READ_AT_ALL_END(fh, buf, status)
  IN   fh      file handle (handle)
  OUT  buf     initial address of buffer (choice)
  OUT  status  status object (Status)
int MPI_File_read_at_all_end(MPI_File fh, void *buf, MPI_Status *status)

MPI_FILE_READ_AT_ALL_END(FH, BUF, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR

{void MPI::File::Read_at_all_end(void* buf, MPI::Status& status) (binding deprecated, see Section 15.2) }
{void MPI::File::Read_at_all_end(void* buf) (binding deprecated, see Section 15.2) }

MPI_FILE_WRITE_AT_ALL_BEGIN(fh, offset, buf, count, datatype)
  INOUT  fh        file handle (handle)
  IN     offset    file offset (integer)
  IN     buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)

int MPI_File_write_at_all_begin(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype)

MPI_FILE_WRITE_AT_ALL_BEGIN(FH, OFFSET, BUF, COUNT, DATATYPE, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, IERROR
    INTEGER(KIND=MPI_OFFSET_KIND) OFFSET

{void MPI::File::Write_at_all_begin(MPI::Offset offset, const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }

MPI_FILE_WRITE_AT_ALL_END(fh, buf, status)
  INOUT  fh      file handle (handle)
  IN     buf     initial address of buffer (choice)
  OUT    status  status object (Status)

int MPI_File_write_at_all_end(MPI_File fh, void *buf, MPI_Status *status)

MPI_FILE_WRITE_AT_ALL_END(FH, BUF, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR

{void MPI::File::Write_at_all_end(const void* buf, MPI::Status& status) (binding deprecated, see Section 15.2) }

{void MPI::File::Write_at_all_end(const void* buf) (binding deprecated, see Section 15.2) }
MPI_FILE_READ_ALL_BEGIN(fh, buf, count, datatype)
  INOUT  fh        file handle (handle)
  OUT    buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
int MPI_File_read_all_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)

MPI_FILE_READ_ALL_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, IERROR

{void MPI::File::Read_all_begin(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }
MPI_FILE_READ_ALL_END(fh, buf, status)
  INOUT  fh      file handle (handle)
  OUT    buf     initial address of buffer (choice)
  OUT    status  status object (Status)
int MPI_File_read_all_end(MPI_File fh, void *buf, MPI_Status *status)

MPI_FILE_READ_ALL_END(FH, BUF, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR

{void MPI::File::Read_all_end(void* buf, MPI::Status& status) (binding deprecated, see Section 15.2) }

{void MPI::File::Read_all_end(void* buf) (binding deprecated, see Section 15.2) }
MPI_FILE_WRITE_ALL_BEGIN(fh, buf, count, datatype)
  INOUT  fh        file handle (handle)
  IN     buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)
int MPI_File_write_all_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)

MPI_FILE_WRITE_ALL_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, IERROR

{void MPI::File::Write_all_begin(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }
MPI_FILE_WRITE_ALL_END(fh, buf, status)
  INOUT  fh      file handle (handle)
  IN     buf     initial address of buffer (choice)
  OUT    status  status object (Status)

int MPI_File_write_all_end(MPI_File fh, void *buf, MPI_Status *status)

MPI_FILE_WRITE_ALL_END(FH, BUF, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR

{void MPI::File::Write_all_end(const void* buf, MPI::Status& status) (binding deprecated, see Section 15.2) }

{void MPI::File::Write_all_end(const void* buf) (binding deprecated, see Section 15.2) }
MPI_FILE_READ_ORDERED_BEGIN(fh, buf, count, datatype)
  INOUT  fh        file handle (handle)
  OUT    buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)

int MPI_File_read_ordered_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)

MPI_FILE_READ_ORDERED_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, IERROR

{void MPI::File::Read_ordered_begin(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }
MPI_FILE_READ_ORDERED_END(fh, buf, status)
  INOUT  fh      file handle (handle)
  OUT    buf     initial address of buffer (choice)
  OUT    status  status object (Status)

int MPI_File_read_ordered_end(MPI_File fh, void *buf, MPI_Status *status)

MPI_FILE_READ_ORDERED_END(FH, BUF, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR

{void MPI::File::Read_ordered_end(void* buf, MPI::Status& status) (binding deprecated, see Section 15.2) }

{void MPI::File::Read_ordered_end(void* buf) (binding deprecated, see Section 15.2) }
MPI_FILE_WRITE_ORDERED_BEGIN(fh, buf, count, datatype)
  INOUT  fh        file handle (handle)
  IN     buf       initial address of buffer (choice)
  IN     count     number of elements in buffer (integer)
  IN     datatype  datatype of each buffer element (handle)

int MPI_File_write_ordered_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)

MPI_FILE_WRITE_ORDERED_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
    <type> BUF(*)
    INTEGER FH, COUNT, DATATYPE, IERROR

{void MPI::File::Write_ordered_begin(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section 15.2) }
MPI_FILE_WRITE_ORDERED_END(fh, buf, status)
  INOUT  fh      file handle (handle)
  IN     buf     initial address of buffer (choice)
  OUT    status  status object (Status)

int MPI_File_write_ordered_end(MPI_File fh, void *buf, MPI_Status *status)

MPI_FILE_WRITE_ORDERED_END(FH, BUF, STATUS, IERROR)
    <type> BUF(*)
    INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR