CMSC 714 – High Performance Computing
Fall 2003 - Programming Assignment
Due Tuesday, September 30, 2003 @ 9:00 AM
The purpose of this programming assignment is to gain experience in parallel programming and MPI. For this assignment you are to write a parallel implementation of a program to simulate the game of life.
The game of life simulates simple cellular automata. The game is played on a rectangular board containing cells. At the start, some of the cells are occupied, the rest are empty. The game consists of constructing successive generations of the board. The rules for constructing the next generation from the previous one are:

- An occupied cell with two or three occupied neighbors (out of its eight adjacent cells) remains occupied in the next generation.
- An occupied cell with fewer than two or more than three occupied neighbors becomes empty (death by isolation or overcrowding).
- An empty cell with exactly three occupied neighbors becomes occupied (birth).

For this project the game board has finite size. The x-axis starts at 0 and ends at X_limit-1 (supplied on the command line). Likewise, the y-axis starts at 0 and ends at Y_limit-1 (supplied on the command line).
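As one possible starting point, the rules above can be sketched as a sequential update step in C. This is only an illustration, not required code: the board size is fixed at 8x8 here for brevity, whereas your program must size the board from X_limit and Y_limit, and cells beyond the finite board edges are treated as empty.

```c
#define XMAX 8
#define YMAX 8

/* Count the live neighbors of (x, y).  Cells outside the finite
   board are treated as empty (the board does not wrap around). */
static int live_neighbors(int board[XMAX][YMAX], int x, int y) {
    int n = 0;
    for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++) {
            if (dx == 0 && dy == 0) continue;   /* skip the cell itself */
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < XMAX && ny >= 0 && ny < YMAX)
                n += board[nx][ny];
        }
    return n;
}

/* Apply the rules above to every cell, writing the next generation
   into a separate array so updates do not interfere with each other. */
static void next_generation(int cur[XMAX][YMAX], int next[XMAX][YMAX]) {
    for (int x = 0; x < XMAX; x++)
        for (int y = 0; y < YMAX; y++) {
            int n = live_neighbors(cur, x, y);
            if (cur[x][y])
                next[x][y] = (n == 2 || n == 3);  /* survival */
            else
                next[x][y] = (n == 3);            /* birth */
        }
}
```

Note the double-buffering: computing the new generation in place would let already-updated cells corrupt the neighbor counts of cells not yet visited.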
INPUT
Your program should read in a file containing the coordinates of the initial cells. Sample files are provided in life.data.1 and life.data.2. You can also find many other sample patterns on the web (use your favorite search engine on "game of life" and/or "Conway").
Your program should take four command line arguments: the name of the data file, the number of generations to iterate, X_limit, and Y_limit. (The number of processes is specified as part of the mpirun command, not as an argument to the program; see COMMAND LINE ARGUMENTS below.)
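A minimal input reader might look like the sketch below. It assumes the data files hold one "x y" coordinate pair per line, mirroring the output format described next; check the actual sample files and adjust if their format differs. The function name and the flat coordinate arrays are illustrative choices, not part of the assignment.

```c
#include <stdio.h>

/* Read "x y" coordinate pairs from fp, one pair per line (an assumed
   format).  Pairs that fall off the finite board are skipped with a
   warning.  Returns the number of cells stored in xs[]/ys[]. */
static int read_cells(FILE *fp, int x_limit, int y_limit,
                      int *xs, int *ys, int max_cells) {
    int x, y, n = 0;
    while (n < max_cells && fscanf(fp, "%d %d", &x, &y) == 2) {
        if (x < 0 || x >= x_limit || y < 0 || y >= y_limit) {
            fprintf(stderr, "warning: cell (%d, %d) is off the board\n", x, y);
            continue;
        }
        xs[n] = x;
        ys[n] = y;
        n++;
    }
    return n;
}
```

Validating coordinates against X_limit and Y_limit at load time keeps a malformed data file from corrupting the board arrays later.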
OUTPUT
Your program should print out one line (containing the x coordinate, a
space, and then the y coordinate) for each occupied cell at the end of the last
iteration.
HINTS
The goal is not to write the most efficient implementation of Life, but
rather to learn parallel programming with MPI.
Figure out how you will decompose the problem for parallel execution.
Remember that MPI (at least the mpich implementation) does not have great
communication performance and so you will want to make message passing
infrequent. Also, you will need to be concerned about load balancing.
Once you have decided how to decompose the problem, write the sequential version first.
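One common decomposition, offered here only as a sketch of the hints above, is to split the board into contiguous bands of rows, one band per process. The helper below (an illustrative function, not required code) spreads any leftover rows across the first few ranks so the load stays balanced. Each process would then keep two extra "ghost" rows holding copies of its neighbors' boundary rows and refresh them once per generation, e.g. with a single MPI_Sendrecv per neighbor (using MPI_PROC_NULL at the board edges), which keeps message passing infrequent as the hint recommends.

```c
/* Split rows 0 .. x_limit-1 as evenly as possible among nprocs
   processes.  The first (x_limit % nprocs) ranks each get one extra
   row.  Writes this rank's first row and row count to the output
   parameters.  Pure arithmetic: the actual ghost-row exchange with
   MPI_Sendrecv would happen once per generation in the main loop. */
static void my_rows(int rank, int nprocs, int x_limit,
                    int *first, int *count) {
    int base  = x_limit / nprocs;   /* rows every rank gets        */
    int extra = x_limit % nprocs;   /* leftover rows to hand out   */
    *count = base + (rank < extra ? 1 : 0);
    *first = rank * base + (rank < extra ? rank : extra);
}
```

With this split, the only per-generation communication is the boundary-row exchange between adjacent ranks, and a final gather of occupied cells at the end of the last iteration.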
WHAT TO TURN IN
You should submit your program and the times to run it on the input
file final.data
(for 1, 2, 4, and 8 processes).
You also must submit a short report about the results (1-2 pages).
Using MPICH
To compile an MPI program, use /usr/local/stow/mpich/bin/mpicc as your C compiler.
To run MPI, you need to set a few environment variables:
setenv MPI_ROOT /usr/local/stow/mpich
setenv MPI_LIB $MPI_ROOT/lib
setenv MPI_INC $MPI_ROOT/include
setenv MPI_BIN $MPI_ROOT/bin
# add MPICH commands to your path (includes mpirun and mpicc)
set path=($MPI_BIN $path)
# add MPICH man pages to your manpath
if ( $?MANPATH ) then
setenv MANPATH $MPI_ROOT/man:$MANPATH
else
setenv MANPATH $MPI_ROOT/man
endif
COMMAND LINE ARGUMENTS
The command line arguments should be:
life <input file> <# of generations> <x limit> <y limit>
The number of processes is specified as part of the mpirun command.
GRADING
The project will be graded as follows:
Item                           | Pct
Correctly runs on 1 processor  | 15%
Correctly runs on 8 processors | 40%
Performance on 1 processor     | 15%
Speedup of parallel version    | 20%
Writeup                        | 10%
In addition, extra credit of 5% is available if you complete and turn in the log for the study.
ADDITIONAL RESOURCES
For additional MPI information, see http://www.mpi-forum.org (MPI API) and http://www-unix.mcs.anl.gov/mpi (for MPICH).
For more information about using the Maryland cluster PBS scheduler, see http://umiacs.umd.edu/labs/LPDC/plc/user-manual.html. This page needs to be updated (path names are not correct for the current Linux environment), which should happen soon.