- •Contents
- •List of Figures
- •List of Tables
- •Acknowledgments
- •Introduction to MPI
- •Overview and Goals
- •Background of MPI-1.0
- •Background of MPI-1.1, MPI-1.2, and MPI-2.0
- •Background of MPI-1.3 and MPI-2.1
- •Background of MPI-2.2
- •Who Should Use This Standard?
- •What Platforms Are Targets For Implementation?
- •What Is Included In The Standard?
- •What Is Not Included In The Standard?
- •Organization of this Document
- •MPI Terms and Conventions
- •Document Notation
- •Naming Conventions
- •Semantic Terms
- •Data Types
- •Opaque Objects
- •Array Arguments
- •State
- •Named Constants
- •Choice
- •Addresses
- •Language Binding
- •Deprecated Names and Functions
- •Fortran Binding Issues
- •C Binding Issues
- •C++ Binding Issues
- •Functions and Macros
- •Processes
- •Error Handling
- •Implementation Issues
- •Independence of Basic Runtime Routines
- •Interaction with Signals
- •Examples
- •Point-to-Point Communication
- •Introduction
- •Blocking Send and Receive Operations
- •Blocking Send
- •Message Data
- •Message Envelope
- •Blocking Receive
- •Return Status
- •Passing MPI_STATUS_IGNORE for Status
- •Data Type Matching and Data Conversion
- •Type Matching Rules
- •Type MPI_CHARACTER
- •Data Conversion
- •Communication Modes
- •Semantics of Point-to-Point Communication
- •Buffer Allocation and Usage
- •Nonblocking Communication
- •Communication Request Objects
- •Communication Initiation
- •Communication Completion
- •Semantics of Nonblocking Communications
- •Multiple Completions
- •Non-destructive Test of status
- •Probe and Cancel
- •Persistent Communication Requests
- •Send-Receive
- •Null Processes
- •Datatypes
- •Derived Datatypes
- •Type Constructors with Explicit Addresses
- •Datatype Constructors
- •Subarray Datatype Constructor
- •Distributed Array Datatype Constructor
- •Address and Size Functions
- •Lower-Bound and Upper-Bound Markers
- •Extent and Bounds of Datatypes
- •True Extent of Datatypes
- •Commit and Free
- •Duplicating a Datatype
- •Use of General Datatypes in Communication
- •Correct Use of Addresses
- •Decoding a Datatype
- •Examples
- •Pack and Unpack
- •Canonical MPI_PACK and MPI_UNPACK
- •Collective Communication
- •Introduction and Overview
- •Communicator Argument
- •Applying Collective Operations to Intercommunicators
- •Barrier Synchronization
- •Broadcast
- •Example using MPI_BCAST
- •Gather
- •Examples using MPI_GATHER, MPI_GATHERV
- •Scatter
- •Examples using MPI_SCATTER, MPI_SCATTERV
- •Example using MPI_ALLGATHER
- •All-to-All Scatter/Gather
- •Global Reduction Operations
- •Reduce
- •Signed Characters and Reductions
- •MINLOC and MAXLOC
- •All-Reduce
- •Process-local reduction
- •Reduce-Scatter
- •MPI_REDUCE_SCATTER_BLOCK
- •MPI_REDUCE_SCATTER
- •Scan
- •Inclusive Scan
- •Exclusive Scan
- •Example using MPI_SCAN
- •Correctness
- •Introduction
- •Features Needed to Support Libraries
- •MPI's Support for Libraries
- •Basic Concepts
- •Groups
- •Contexts
- •Intra-Communicators
- •Group Management
- •Group Accessors
- •Group Constructors
- •Group Destructors
- •Communicator Management
- •Communicator Accessors
- •Communicator Constructors
- •Communicator Destructors
- •Motivating Examples
- •Current Practice #1
- •Current Practice #2
- •(Approximate) Current Practice #3
- •Example #4
- •Library Example #1
- •Library Example #2
- •Inter-Communication
- •Inter-communicator Accessors
- •Inter-communicator Operations
- •Inter-Communication Examples
- •Caching
- •Functionality
- •Communicators
- •Windows
- •Datatypes
- •Error Class for Invalid Keyval
- •Attributes Example
- •Naming Objects
- •Formalizing the Loosely Synchronous Model
- •Basic Statements
- •Models of Execution
- •Static communicator allocation
- •Dynamic communicator allocation
- •The General case
- •Process Topologies
- •Introduction
- •Virtual Topologies
- •Embedding in MPI
- •Overview of the Functions
- •Topology Constructors
- •Cartesian Constructor
- •Cartesian Convenience Function: MPI_DIMS_CREATE
- •General (Graph) Constructor
- •Distributed (Graph) Constructor
- •Topology Inquiry Functions
- •Cartesian Shift Coordinates
- •Partitioning of Cartesian structures
- •Low-Level Topology Functions
- •An Application Example
- •MPI Environmental Management
- •Implementation Information
- •Version Inquiries
- •Environmental Inquiries
- •Tag Values
- •Host Rank
- •IO Rank
- •Clock Synchronization
- •Memory Allocation
- •Error Handling
- •Error Handlers for Communicators
- •Error Handlers for Windows
- •Error Handlers for Files
- •Freeing Errorhandlers and Retrieving Error Strings
- •Error Codes and Classes
- •Error Classes, Error Codes, and Error Handlers
- •Timers and Synchronization
- •Startup
- •Allowing User Functions at Process Termination
- •Determining Whether MPI Has Finished
- •Portable MPI Process Startup
- •The Info Object
- •Process Creation and Management
- •Introduction
- •The Dynamic Process Model
- •Starting Processes
- •The Runtime Environment
- •Process Manager Interface
- •Processes in MPI
- •Starting Processes and Establishing Communication
- •Reserved Keys
- •Spawn Example
- •Manager-worker Example, Using MPI_COMM_SPAWN.
- •Establishing Communication
- •Names, Addresses, Ports, and All That
- •Server Routines
- •Client Routines
- •Name Publishing
- •Reserved Key Values
- •Client/Server Examples
- •Ocean/Atmosphere - Relies on Name Publishing
- •Simple Client-Server Example.
- •Other Functionality
- •Universe Size
- •Singleton MPI_INIT
- •MPI_APPNUM
- •Releasing Connections
- •Another Way to Establish MPI Communication
- •One-Sided Communications
- •Introduction
- •Initialization
- •Window Creation
- •Window Attributes
- •Communication Calls
- •Examples
- •Accumulate Functions
- •Synchronization Calls
- •Fence
- •General Active Target Synchronization
- •Lock
- •Assertions
- •Examples
- •Error Handling
- •Error Handlers
- •Error Classes
- •Semantics and Correctness
- •Atomicity
- •Progress
- •Registers and Compiler Optimizations
- •External Interfaces
- •Introduction
- •Generalized Requests
- •Examples
- •Associating Information with Status
- •MPI and Threads
- •General
- •Initialization
- •Introduction
- •File Manipulation
- •Opening a File
- •Closing a File
- •Deleting a File
- •Resizing a File
- •Preallocating Space for a File
- •Querying the Size of a File
- •Querying File Parameters
- •File Info
- •Reserved File Hints
- •File Views
- •Data Access
- •Data Access Routines
- •Positioning
- •Synchronism
- •Coordination
- •Data Access Conventions
- •Data Access with Individual File Pointers
- •Data Access with Shared File Pointers
- •Noncollective Operations
- •Collective Operations
- •Seek
- •Split Collective Data Access Routines
- •File Interoperability
- •Datatypes for File Interoperability
- •Extent Callback
- •Datarep Conversion Functions
- •Matching Data Representations
- •Consistency and Semantics
- •File Consistency
- •Random Access vs. Sequential Files
- •Progress
- •Collective File Operations
- •Type Matching
- •Logical vs. Physical File Layout
- •File Size
- •Examples
- •Asynchronous I/O
- •I/O Error Handling
- •I/O Error Classes
- •Examples
- •Subarray Filetype Constructor
- •Requirements
- •Discussion
- •Logic of the Design
- •Examples
- •MPI Library Implementation
- •Systems with Weak Symbols
- •Systems Without Weak Symbols
- •Complications
- •Multiple Counting
- •Linker Oddities
- •Multiple Levels of Interception
- •Deprecated Functions
- •Deprecated since MPI-2.0
- •Deprecated since MPI-2.2
- •Language Bindings
- •Overview
- •Design
- •C++ Classes for MPI
- •Class Member Functions for MPI
- •Semantics
- •C++ Datatypes
- •Communicators
- •Exceptions
- •Mixed-Language Operability
- •Problems With Fortran Bindings for MPI
- •Problems Due to Strong Typing
- •Problems Due to Data Copying and Sequence Association
- •Special Constants
- •Fortran 90 Derived Types
- •A Problem with Register Optimization
- •Basic Fortran Support
- •Extended Fortran Support
- •The mpi Module
- •No Type Mismatch Problems for Subroutines with Choice Arguments
- •Additional Support for Fortran Numeric Intrinsic Types
- •Language Interoperability
- •Introduction
- •Assumptions
- •Initialization
- •Transfer of Handles
- •Status
- •MPI Opaque Objects
- •Datatypes
- •Callback Functions
- •Error Handlers
- •Reduce Operations
- •Addresses
- •Attributes
- •Extra State
- •Constants
- •Interlanguage Communication
- •Language Bindings Summary
- •Groups, Contexts, Communicators, and Caching Fortran Bindings
- •External Interfaces C++ Bindings
- •Change-Log
- •Bibliography
- •Examples Index
- •MPI Declarations Index
- •MPI Function Index
13.2.4 Resizing a File
MPI_FILE_SET_SIZE(fh, size)
    INOUT   fh        file handle (handle)
    IN      size      size to truncate or expand file (integer)
int MPI_File_set_size(MPI_File fh, MPI_Offset size)
MPI_FILE_SET_SIZE(FH, SIZE, IERROR)
INTEGER FH, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) SIZE
{ void MPI::File::Set_size(MPI::Offset size) (binding deprecated, see Section 15.2) }
MPI_FILE_SET_SIZE resizes the file associated with the file handle fh. size is measured in bytes from the beginning of the file. MPI_FILE_SET_SIZE is collective; all processes in the group must pass identical values for size.
If size is smaller than the current file size, the file is truncated at the position defined by size. The implementation is free to deallocate file blocks located beyond this position.
If size is larger than the current file size, the file size becomes size. Regions of the file that have been previously written are unaffected. The values of data in the new regions in the file (those locations with displacements between old file size and size) are undefined. It is implementation dependent whether the MPI_FILE_SET_SIZE routine allocates file space; use MPI_FILE_PREALLOCATE to force file space to be reserved.
MPI_FILE_SET_SIZE does not affect the individual file pointers or the shared file pointer. If MPI_MODE_SEQUENTIAL mode was specified when the file was opened, it is erroneous to call this routine.
Advice to users. It is possible for the file pointers to point beyond the end of file after a MPI_FILE_SET_SIZE operation truncates a file. This is legal, and equivalent to seeking beyond the current end of file. (End of advice to users.)
All nonblocking requests and split collective operations on fh must be completed before calling MPI_FILE_SET_SIZE. Otherwise, calling MPI_FILE_SET_SIZE is erroneous. As far as consistency semantics are concerned, MPI_FILE_SET_SIZE is a write operation that conflicts with operations that access bytes at displacements between the old and new file sizes (see Section 13.6.1, page 437).
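A minimal sketch of typical use follows; the file name "data.out" and the 1 MiB size are assumptions chosen for illustration, not part of the standard. Every process in the group that opened the file makes the same collective call.

    /* Minimal sketch (hypothetical file name and size): collectively truncate
       or extend a file to exactly 1 MiB.  All ranks pass the same size, and any
       pending nonblocking or split collective operations on fh have completed. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_File_open(MPI_COMM_WORLD, "data.out",
                      MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

        MPI_File_set_size(fh, (MPI_Offset)1048576);   /* collective call */

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }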
13.2.5 Preallocating Space for a File
MPI_FILE_PREALLOCATE(fh, size)
    INOUT   fh        file handle (handle)
    IN      size      size to preallocate file (integer)
int MPI_File_preallocate(MPI_File fh, MPI_Offset size)
MPI_FILE_PREALLOCATE(FH, SIZE, IERROR)
INTEGER FH, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) SIZE
{ void MPI::File::Preallocate(MPI::Offset size) (binding deprecated, see Section 15.2) }
MPI_FILE_PREALLOCATE ensures that storage space is allocated for the first size bytes of the file associated with fh. MPI_FILE_PREALLOCATE is collective; all processes in the group must pass identical values for size. Regions of the file that have previously been written are unaffected. For newly allocated regions of the file, MPI_FILE_PREALLOCATE has the same effect as writing undefined data. If size is larger than the current file size, the file size increases to size. If size is less than or equal to the current file size, the file size is unchanged.
The treatment of file pointers, pending nonblocking accesses, and file consistency is the same as with MPI_FILE_SET_SIZE. If MPI_MODE_SEQUENTIAL mode was specified when the file was opened, it is erroneous to call this routine.
Advice to users. In some implementations, file preallocation may be expensive. (End of advice to users.)
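A short sketch of how preallocation might be used is shown below; the file handle fh and the 64 MiB figure are assumptions for illustration. Because MPI_FILE_SET_SIZE is not required to allocate file space, MPI_FILE_PREALLOCATE is the call that forces the reservation.

    /* Sketch, assuming fh was opened collectively with
       MPI_MODE_CREATE | MPI_MODE_RDWR; the 64 MiB figure is arbitrary. */
    MPI_Offset reserve = (MPI_Offset)64 * 1024 * 1024;

    /* Collective: reserve storage for the first `reserve` bytes of the file. */
    MPI_File_preallocate(fh, reserve);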
13.2.6 Querying the Size of a File
MPI_FILE_GET_SIZE(fh, size)
    IN      fh        file handle (handle)
    OUT     size      size of the file in bytes (integer)
int MPI_File_get_size(MPI_File fh, MPI_Offset *size)
MPI_FILE_GET_SIZE(FH, SIZE, IERROR)
INTEGER FH, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) SIZE
{ MPI::Offset MPI::File::Get_size() const (binding deprecated, see Section 15.2) }
MPI_FILE_GET_SIZE returns, in size, the current size in bytes of the file associated with the file handle fh. As far as consistency semantics are concerned, MPI_FILE_GET_SIZE is a data access operation (see Section 13.6.1, page 437).
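As a small illustration (assuming an open file handle fh and that <stdio.h> has been included), the current size can be queried and printed as follows:

    /* Sketch, assuming fh is an open file handle: query the current file size. */
    MPI_Offset fsize;
    MPI_File_get_size(fh, &fsize);
    printf("file size: %lld bytes\n", (long long)fsize);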
13.2.7 Querying File Parameters
MPI_FILE_GET_GROUP(fh, group)
    IN      fh        file handle (handle)
    OUT     group     group which opened the file (handle)
int MPI_File_get_group(MPI_File fh, MPI_Group *group)
MPI_FILE_GET_GROUP(FH, GROUP, IERROR)
INTEGER FH, GROUP, IERROR
{ MPI::Group MPI::File::Get_group() const (binding deprecated, see Section 15.2) }
MPI_FILE_GET_GROUP returns a duplicate of the group of the communicator used to open the file associated with fh. The group is returned in group. The user is responsible for freeing group.
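A minimal sketch (assuming an open file handle fh) that retrieves the group, inspects its size, and then frees it:

    /* Sketch, assuming fh is an open file handle: obtain the group of processes
       that opened the file, inspect it, and free it when done. */
    MPI_Group grp;
    int nprocs;

    MPI_File_get_group(fh, &grp);
    MPI_Group_size(grp, &nprocs);   /* number of processes that opened the file */
    MPI_Group_free(&grp);           /* the user is responsible for freeing grp */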
MPI_FILE_GET_AMODE(fh, amode)
    IN      fh        file handle (handle)
    OUT     amode     file access mode used to open the file (integer)
int MPI_File_get_amode(MPI_File fh, int *amode)
MPI_FILE_GET_AMODE(FH, AMODE, IERROR)
INTEGER FH, AMODE, IERROR
{ int MPI::File::Get_amode() const (binding deprecated, see Section 15.2) }
MPI_FILE_GET_AMODE returns, in amode, the access mode of the file associated with fh.
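In C, where the MPI_MODE_* constants are distinct bits combined with the bitwise OR operator, the returned amode can be tested directly with bitwise AND, as in this small sketch (fh is assumed to be an open file handle):

    /* Sketch, assuming fh is an open file handle: test individual mode bits. */
    int amode;
    MPI_File_get_amode(fh, &amode);
    if (amode & MPI_MODE_RDONLY) {
        /* the file was opened read-only */
    }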
Example 13.1 In Fortran 77, decoding an amode bit vector will require a routine such as the following:
      SUBROUTINE BIT_QUERY(TEST_BIT, MAX_BIT, AMODE, BIT_FOUND)
!
!     TEST IF THE INPUT TEST_BIT IS SET IN THE INPUT AMODE
!     IF SET, RETURN 1 IN BIT_FOUND, 0 OTHERWISE
!
      INTEGER TEST_BIT, AMODE, BIT_FOUND, CP_AMODE, HIFOUND
      BIT_FOUND = 0
      CP_AMODE = AMODE
  100 CONTINUE
      LBIT = 0
      HIFOUND = 0
      DO 20 L = MAX_BIT, 0, -1
         MATCHER = 2**L