

int MPI_Allgatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype,
                   void* recvbuf, int *recvcounts, int *displs,
                   MPI_Datatype recvtype, MPI_Comm comm)

MPI_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS,
               RECVTYPE, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM,
            IERROR

{void MPI::Comm::Allgatherv(const void* sendbuf, int sendcount,
                            const MPI::Datatype& sendtype, void* recvbuf,
                            const int recvcounts[], const int displs[],
                            const MPI::Datatype& recvtype) const = 0
                            (binding deprecated, see Section 15.2) }

MPI_ALLGATHERV can be thought of as MPI_GATHERV, but where all processes receive the result, instead of just the root. The block of data sent from the j-th process is received by every process and placed in the j-th block of the buffer recvbuf. These blocks need not all be the same size.

The type signature associated with sendcount, sendtype, at process j must be equal to the type signature associated with recvcounts[j], recvtype at any other process.
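
For concreteness, here is a small sketch (not from the standard text; the variable names are illustrative) in which process i contributes i+1 integers. Each process learns the other processes' counts with MPI_ALLGATHER and derives the displacements as a prefix sum, so that the signature rule above is satisfied at every rank:

MPI_Comm comm;
int gsize, myrank, sendcount, i;
int *sendbuf, *recvbuf, *recvcounts, *displs;
...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank);
/* process myrank contributes myrank+1 integers */
sendcount = myrank + 1;
sendbuf = (int *)malloc(sendcount*sizeof(int));
for (i = 0; i < sendcount; i++) sendbuf[i] = myrank;
/* distribute the counts, then build displacements as a prefix sum */
recvcounts = (int *)malloc(gsize*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
MPI_Allgather( &sendcount, 1, MPI_INT, recvcounts, 1, MPI_INT, comm);
displs[0] = 0;
for (i = 1; i < gsize; i++) displs[i] = displs[i-1] + recvcounts[i-1];
recvbuf = (int *)malloc((displs[gsize-1]+recvcounts[gsize-1])*sizeof(int));
MPI_Allgatherv( sendbuf, sendcount, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT, comm);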

If comm is an intracommunicator, the outcome is as if all processes executed calls to

MPI_GATHERV(sendbuf,sendcount,sendtype,recvbuf,recvcounts,displs,
            recvtype,root,comm),

for root = 0 , ..., n-1. The rules for correct usage of MPI_ALLGATHERV are easily found from the corresponding rules for MPI_GATHERV.
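
In C, this equivalence can be pictured as the following loop (a sketch of the semantics only, not of how an implementation must proceed):

int root, n;
MPI_Comm_size( comm, &n);
for (root = 0; root < n; root++)
    MPI_Gatherv( sendbuf, sendcount, sendtype,
                 recvbuf, recvcounts, displs, recvtype, root, comm);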

The "in place" option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes. In such a case, sendcount and sendtype are ignored, and the input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer.
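
A sketch of the in-place form, assuming each process has already placed its own contribution at offset displs[myrank] in recvbuf (the zero count and MPI_DATATYPE_NULL merely emphasize that the send arguments are ignored):

MPI_Allgatherv( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                recvbuf, recvcounts, displs, MPI_INT, comm);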

If comm is an intercommunicator, then each process of one group (group A) contributes sendcount data items; these data are concatenated and the result is stored at each process in the other group (group B). Conversely, the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.
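
As an illustration (again a sketch, with buffer setup elided), an intercommunicator can be built by splitting MPI_COMM_WORLD on rank parity; note that recvcounts and displs then describe the contributions of the remote group:

MPI_Comm intracomm, intercomm;
int wrank, color;
int sendcount, *sendbuf, *recvbuf, *recvcounts, *displs;
...   /* buffer and count setup elided; recvcounts/displs describe the remote group */
MPI_Comm_rank( MPI_COMM_WORLD, &wrank);
color = wrank % 2;
MPI_Comm_split( MPI_COMM_WORLD, color, wrank, &intracomm);
/* leaders are rank 0 of each intracomm; in MPI_COMM_WORLD these are
   world ranks 0 (even group) and 1 (odd group) */
MPI_Intercomm_create( intracomm, 0, MPI_COMM_WORLD, 1-color, 0, &intercomm);
MPI_Allgatherv( sendbuf, sendcount, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT, intercomm);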

5.7.1 Example using MPI_ALLGATHER
The example in this section uses intracommunicators.

Example 5.14 The all-gather version of Example 5.2. Using MPI_ALLGATHER, we will gather 100 ints from every process in the group to every process.

MPI_Comm comm;
int gsize,sendarray[100];
int *rbuf;
...
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Allgather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, comm);
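
On return, every process's rbuf holds the same gsize*100 integers, with the contribution of process j beginning at rbuf[j*100]. The same gather can also be written with the in-place option; a sketch, assuming <string.h> is included and each process first copies its own 100 ints into its block of rbuf:

int myrank;
MPI_Comm_rank( comm, &myrank);
memcpy( rbuf + myrank*100, sendarray, 100*sizeof(int));
MPI_Allgather( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
               rbuf, 100, MPI_INT, comm);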