ParkBench Benchmark Suite
Top Level Readme
Last Modification: Thu May 23 14:51:12 EDT 1996

Welcome to the ParkBench suite of benchmarks. The ParkBench distribution
consists of four distinct packages, all included here. All but five of
the codes are parallel, and they require that either PVM or MPI be
installed and configured properly. MPI and PVM are software systems
that present a uniform message-passing API to the tasks of a parallel
or distributed application. If neither of these is available, or you
are not even sure how to find out, fire up Netscape, browse the Web
and do some reading.

PVM Home Page	http://www.epm.ornl.gov/pvm/pvm_home.html
MPI General	http://www.erc.msstate.edu/mpi/
MPI on networks http://www.mcs.anl.gov/home/lusk/mpich/index.html
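If you are unsure whether either system is already set up on your
machine, a quick shell check along these lines can help. This is only a
sketch: install locations vary by site, and the "mpicc" wrapper is an
assumption about your MPI installation (MPICH provides one; other MPIs
may not).

```shell
# Sketch: look for signs of a configured PVM or MPI installation.
# PVM conventionally exports PVM_ROOT (and PVM_ARCH) in your
# environment; many MPI installations put an mpicc wrapper on PATH.
FOUND=""
[ -n "$PVM_ROOT" ] && FOUND="PVM (\$PVM_ROOT=$PVM_ROOT)"
if command -v mpicc >/dev/null 2>&1; then
    FOUND="$FOUND MPI (mpicc on PATH)"
fi
[ -z "$FOUND" ] && FOUND="nothing -- see the URLs above"
echo "Detected: $FOUND"
```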

______________________________________________________________________

1) How do I start?

Ok, first you need to decide which parts of the package you want to
build.  Compiling the entire distribution takes lots of space, 
especially if you have to build the libraries or are compiling for 
multiple architectures. That being said, if you have more than 100
Megs of disk and lots of cycles to spare, go for it.  For those of you
who are less fortunate, consult the following list and decide which of
the four packages you'd like to build. If you only want certain
executables, go to the appropriate directory and look at the Makefile.
The Makefiles in the subdirectories don't always build the required
libraries, so you might have to go to "lib" and "make" if the link fails.

Directory	What
---------	---------
Low_Level/	5 Serial and 5 Parallel benchmarks to measure the performance
		of some hardware, system and communication operations.
Kernels/	5 Parallel Linear Algebra "Kernels" to measure the 
		performance of some standard matrix operations.
Comp_Apps/	4 Sample Message Passing Application Codes (3 are links to...)
NPB2.1/		The NASA Ames Parallel Benchmark Suite.

The distribution also contains:

lib/		A stripped source tree of various parallel linear algebra 
		library distributions. Chances are, you have optimized
		versions of these routines, which you can specify in step 2.
conf/		Variable and rule definitions for different
		architectures that are included by the makefiles. These
		shouldn't need to be changed.
bin/		Where the executables get built.
include/	Header files included by some of the codes.

______________________________________________________________________

2) Configure the build.

Read and edit the file "make.local.def" to reflect your current setup.
This file contains the pointers and flags necessary for the build
process. It's very short and self-documenting. If you need to change
*anything* not covered by "make.local.def", you should edit the file
"conf/make.def.$PVM_ARCH". Any time either of these files changes,
"conf/make.def" will be *overwritten* with a new version.

Word to the wise: The NPB setup codes require an ANSI C compiler!
(/bin/cc on SunOS 4.x will not work, gcc is always nice)
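For illustration only, settings of roughly this flavor are what such a
file carries. The variable names below are hypothetical; the real names
and their documentation are in "make.local.def" itself.

```make
# Hypothetical sketch -- consult your actual make.local.def for the
# real variable names; these are stand-ins for illustration only
F77      = f77                 # Fortran 77 compiler
CC       = gcc                 # ANSI C compiler (needed by the NPB setup codes)
MPI_LIB  = /usr/local/mpi/lib  # where your MPI libraries live
PVM_ROOT = /usr/local/pvm3     # top of your PVM installation
```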

______________________________________________________________________

3) Make the benchmarks.

"make" or "make help" displays the major possible targets:

Target		What
---------	---------
make conf	This makes the file "conf/make.def", which is included
		by all Makefiles in the distribution. It is generated
		from "make.local.def" and "conf/make.def.$PVM_ARCH".
		It gets rewritten every time either of those two files
		changes, so make your edits there, never here.

make all		Make all executables for both PVM and MPI.
make all.pvm		Make all executables for PVM.
make all.mpi		Make all executables for MPI.

make Low_Level		Make the Low_Level executables for PVM and MPI 
			including the sequential codes. 
make Low_Level.seq	Make the sequential codes only.
make Low_Level.mpi	-
make Low_Level.pvm	-

make Kernels		Make all the Linear Algebra Kernels consisting of some 
			ParkBench and NPB codes.
make Kernels.mpi	-
make Kernels.pvm	-

make Comp_Apps		Make all the Compact Applications, again with overlap
			of some NPB codes.
make Comp_Apps.mpi	-
make Comp_Apps.pvm	-

make NPB		Make all of the NASA Ames Parallel Benchmark codes.
			Please read the instructions in the "NPB2.1/" dir.
make NPB.mpi		-
make NPB.pvm		-

make "program" 		Make both the MPI and PVM executables for an
			individual program. (poly1 poly2 rinf1 tick1 tick2 
			comms1 comms2 comms3 poly3 synch1 LU_solver MATMUL QR
			TRANS TRD FT MG BT LU SP PSTSWM)

make clean		Removes any *~ and *.o files throughout the dist.
make clobber		Removes all created files except the directories
			"lib/$PVM_ARCH" and "bin/$PVM_ARCH"
make Kill_Exec_Libs	Does a "make clobber" and also removes those
			two directories.


A good strategy for first-time users is the following sequence:
1) Edit "make.local.def"
2) make Low_Level
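In shell terms, the sequence looks like this. It is a hypothetical
session sketch; it assumes you are at the top of the ParkBench tree and
that PVM has set PVM_ARCH for you ("SUN4" below is only a stand-in
default for illustration).

```shell
# The two-step strategy above, sketched as a session
ARCH=${PVM_ARCH:-SUN4}   # normally set by your PVM installation
BINDIR="bin/$ARCH"       # where the executables will appear
echo "1) edit make.local.def (and conf/make.def.$ARCH if needed)"
echo "2) run: make Low_Level   # builds into $BINDIR"
```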

______________________________________________________________________

4) Running the Benchmarks.

All executables and their I/O files will be placed in the directory
"ParkBench/bin/$PVM_ARCH". Yes, even the MPI ones. Please see the
specific instructions in README.pvm and README.mpi for how to set
things up.

In the case of the NPB benchmarks, the executables are compiled for
the "S"mall problem size only. This package uses a different
configuration file format than the rest of the ParkBench distribution;
to compile and run the codes for other problem sizes, read the file
"ParkBench/NPB2.1/2.1README". If you are planning on building any of
these on a regular basis, set up an "NPB2.1/config/suite.def" file
that defines a "suite" of NPB benchmarks to build every time for
specific problem sizes.
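As an illustration, a "suite.def" might contain entries of this shape:
one benchmark per line, giving its name, problem class, and process
count. The exact syntax is described in 2.1README; the lines below are
made-up examples, not a recommended suite.

```
# Illustrative suite.def entries: <benchmark> <class> <nprocs>
ft  S  4
mg  A  8
lu  A  16
```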

The Shallow Water Compact Application code, PSTSWM, defaults to the
small problem size and only *one task*. To run this code in parallel
or for different problem sizes, copy the "problem" file from
"ParkBench/Comp_Apps/PSTSWM/input/" and the appropriate
"algorithm.[mpi,pvm]" file from
"ParkBench/Comp_Apps/PSTSWM/input_[pvm,mpi]/" into the bin directory.
If you want to execute the default size problem, just remove the
"problem" file from the bin directory.

There is a new version of the "Run-Rules" for this release of
ParkBench. The "Run-Rules" are the guidelines you should follow if you
intend to report your results. If you plan on *ever* publishing or
otherwise using results obtained with anything in the ParkBench
distribution, read "runrules.draft7" very carefully.

Comments or help:       parkbench-comments@cs.utk.edu
URL:                    http://www.netlib.org/parkbench/
