passing jobs. If the requested resources (wall clock time or nodes) exceed those
available for the specified class, then Load Leveler will reject the job. The
command file is submitted to Load Leveler with the llsubmit command. The
status of the job in the queue can be monitored with the llq command.
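For example, a command file for such a job might look like the following sketch. The keywords shown are standard LoadLeveler keywords, but the class name, node counts, wall clock limit and the executable line (here run under poe) are only placeholders that must match what is actually available on a particular machine.

# @ job_type         = parallel
# @ class            = short
# @ node             = 2
# @ tasks_per_node   = 4
# @ wall_clock_limit = 00:10:00
# @ output           = trapmpi.out
# @ error            = trapmpi.err
# @ queue
poe trapmpi.exe

If this file were named trapmpi.cmd, then llsubmit trapmpi.cmd would place the job in the queue and llq would report its status.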
6.4.3 Basic MPI
The MPI homepage is http://www-unix.mcs.anl.gov/mpi/index.html. There
is a very nice tutorial called “MPI User Guide in Fortran” by Pacheco and
Ming, which can be found at the above homepage as well as a number of other
references including the text by P. S. Pacheco [21]. Here we will not present
a tutorial, but we will give some very simple examples of MPI code that can
be run on the IBM/SP. The essential subroutines of MPI are include 'mpif.h',
mpi_init(), mpi_comm_rank(), mpi_comm_size(), mpi_send(), mpi_recv(),
mpi_barrier() and mpi_finalize(). Additional information about MPI’s subroutines
can be found in Chapter 7. The following MPI/Fortran code, trapmpi.f,
is a slightly modified version of one given by Pacheco and Ming. This code is an
implementation of the trapezoid rule for numerical approximation of an integral,
which approximates the integral by a summation of areas of trapezoids.
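For n trapezoids of width h = (b - a)/n this rule is

\int_a^b f(x)\,dx \approx h\left[ \tfrac{1}{2}f(a) + f(a+h) + \cdots + f(b-h) + \tfrac{1}{2}f(b) \right],

and each processor will compute this sum over its own subinterval [loc_a, loc_b].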
The line 7 include 'mpif.h' makes the mpi subroutines available. The data
defined in line 13 will be "hard wired" into any processors that will be used.
The lines 16-18 mpi_init(), mpi_comm_rank() and mpi_comm_size() start
mpi, get a processor rank (a number from 0 to p-1), and find out how many
processors (p) there are available for this program. All processors will be able
to execute the code in lines 22-40. The work (numerical integration) is done
in lines 29-40 by grouping the trapezoids; loc_n, loc_a and loc_b depend on
the processor whose identifier is my_rank. Each processor will have its own
copy of loc_a, loc_b, and integral. In the i-loop in lines 31-34 the calculations
are done by each processor but with different data. The partial integrations
are communicated and summed by mpi_reduce() in lines 39-40. Line 41 uses
mpi_barrier() to stop any further computation until all previous work is done. The
call in line 55 to mpi_finalize() terminates the mpi segment of the Fortran code.
MPI/Fortran Code trapmpi.f
1. program trapezoid
2.! This illustrates how the basic mpi commands
3.! can be used to do parallel numerical integration
4.! by partitioning the summation.
5. implicit none
6.! Includes the mpi Fortran library.
7. include 'mpif.h'
8. real:: a,b,h,loc_a,loc_b,integral,total,t1,t2,x
9. real:: timef
10. integer:: my_rank,p,n,source,dest,tag,ierr,loc_n
11. integer:: i,status(mpi_status_size)
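To make the overall structure concrete, here is a condensed, self-contained sketch of the steps described above: start mpi, partition [a,b] into groups of trapezoids, form the local trapezoid sums, and combine them with mpi_reduce(). It is only an illustration and not the author's trapmpi.f; the integrand x*x and the values of a, b and n are assumed for concreteness, and n is taken to be evenly divisible by p.

      program trapsketch
!     Sketch only: the same pattern as trapmpi.f, but condensed.
      implicit none
      include 'mpif.h'
      real:: a,b,h,loc_a,loc_b,integral,total,x
      integer:: my_rank,p,n,loc_n,i,ierr
      data a,b,n/0.0,100.0,1024/
      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world,my_rank,ierr)
      call mpi_comm_size(mpi_comm_world,p,ierr)
!     Each processor integrates over its own piece [loc_a,loc_b].
      h = (b-a)/n
      loc_n = n/p
      loc_a = a + my_rank*loc_n*h
      loc_b = loc_a + loc_n*h
!     Local trapezoid rule for the assumed integrand x*x.
      integral = (loc_a*loc_a + loc_b*loc_b)*0.5
      x = loc_a
      do i = 1,loc_n-1
         x = x + h
         integral = integral + x*x
      end do
      integral = h*integral
!     Sum the partial integrals into total on processor 0.
      call mpi_reduce(integral,total,1,mpi_real,mpi_sum,0,&
                      mpi_comm_world,ierr)
      call mpi_barrier(mpi_comm_world,ierr)
      if (my_rank.eq.0) then
         print*,'approximate integral =',total
      end if
      call mpi_finalize(ierr)
      end program trapsketch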