0.2347455187E-40 0.1010193260E-38 -0.8896380928E+10
-0.3097083589E+30 0.3083417141E-40 0.1102030158E-38
7.3.4 Illustrations of mpi_gather()
The second code, gathmpi.f, collects some of the data loc_n, loc_a, and loc_b,
which are computed in lines 15-17 for each processor. In particular, all the values
of loc_a are sent to and stored in the array a_list on processor 0. This is done by
mpi_gather() in line 23, where the count is equal to one and the root processor is
zero. The results are verified by the print commands in lines 18-20 and 25-29.
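For reference, the Fortran binding of mpi_gather() is sketched below with generic placeholder names (sendbuf, recvbuf, root and so on), not the variables of gathmpi.f: every processor contributes sendcount items of type sendtype from its send buffer, and the root processor receives them, in rank order, into the receive buffer.

      ! generic form of the gather call (placeholder names)
      call mpi_gather(sendbuf, sendcount, sendtype, &
                      recvbuf, recvcount, recvtype, &
                      root, comm, ierr)

In gathmpi.f the send buffer is loc_a, the receive buffer is a_list, both counts are one, the root is processor 0 and the communicator is mpi_comm_world.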
MPI/Fortran 9x Code gathmpi.f
1. program gathmpi
2.! Illustrates mpi_gather.
3. implicit none
 4. include 'mpif.h'
 5. real:: a,b,h,loc_a,loc_b,total
6. real, dimension(0:31):: a_list
7. integer:: my_rank,p,n,source,dest,tag,ierr,loc_n
8. integer:: i,status(mpi_status_size)
9. data a,b,n,dest,tag/0.0,100.0,1024,0,50/
10. call mpi_init(ierr)
11. call mpi_comm_rank(mpi_comm_world,my_rank,ierr)
12. call mpi_comm_size(mpi_comm_world,p,ierr)
13. h = (b-a)/n
14.! Each processor has a unique loc_n, loc_a and loc_b
15. loc_n = n/p
16. loc_a = a+my_rank*loc_n*h
17. loc_b = loc_a + loc_n*h
18. print*,'my_rank =',my_rank, 'loc_a = ',loc_a
19. print*,'my_rank =',my_rank, 'loc_b = ',loc_b
20. print*,'my_rank =',my_rank, 'loc_n = ',loc_n
21.! The loc_a are sent and received into an array, a_list, on
22.! processor 0.
23. call mpi_gather(loc_a,1,mpi_real,a_list,1,mpi_real,0,&
mpi_comm_world,ierr)
24. call mpi_barrier(mpi_comm_world,ierr)
25. if (my_rank.eq.0) then
26. do i = 0,p-1
27. print*, 'a_list(',i,') = ',a_list(i)
28. end do
29. end if
30. call mpi_finalize(ierr)
31. end program gathmpi
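If, for example, the code is executed with p = 4 processors, then loc_n = 1024/4 = 256, h = 100/1024 and loc_a = my_rank*256*(100/1024) = 25*my_rank, so that processor 0 should print a_list(0) = 0.0, a_list(1) = 25.0, a_list(2) = 50.0 and a_list(3) = 75.0.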