MPI in OpenFabrics Enterprise Distribution (OFED) v1.0 for Linux
****************************************************************
1. General:
===========
Two MPI stacks are included in this release of OFED:

- Ohio State University (OSU) MVAPICH 0.9.7 (Modified by Mellanox
  Technologies)
- Open MPI 1.1 (alpha pre-release)

Setup, compilation, and run information for OSU MVAPICH and Open MPI is
provided below in sections 2 and 3, respectively.

1.1 Installation Note
---------------------
In Step 2 of the main menu of install.sh, options 2, 3 and 4 can install
one or more MPI stacks. Please refer to docs/OFED_Installation_Guide.txt
to learn about the different options.

The installation script allows each MPI stack to be compiled with one
or more compilers. For each installed MPI stack, users need to set PATH
and/or LD_LIBRARY_PATH to select the desired build.

1.2 MPI Tests
-------------
OFED includes four basic tests that can be run against each MPI stack:
bandwidth (bw), latency (lt), Pallas, and Presta. The tests
are located under: <prefix>/mpi/<compiler>/<mpi stack>/tests/
(where <prefix> is /usr/local/ofed by default).


2. OSU MVAPICH MPI
==================

This package is a modified version of the Ohio State University (OSU)
MVAPICH Rev 0.9.7 MPI software package.
See: http://nowlab.cse.ohio-state.edu/projects/mpi-iba/

This modified version includes additional features, bug fixes, and RPM
packaging. It is the officially supported MPI stack for this release
of OFED.

2.1 Setting up for OSU MVAPICH MPI
----------------------------------
To launch OSU MPI jobs, the MVAPICH installation directory must be
included in PATH and LD_LIBRARY_PATH. To set them, execute one of the
following commands:
  source <prefix>/mpi/<compiler>/<mpi stack>/etc/mvapich.sh
	-- when using sh for launching MPI jobs
 or
  source <prefix>/mpi/<compiler>/<mpi stack>/etc/mvapich.csh
	-- when using csh for launching MPI jobs
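For scripted (non-interactive) use, the same effect can be achieved by setting the variables directly. A minimal bash sketch, assuming the default OFED prefix and a gcc build of MVAPICH (adjust MVAPICH_HOME to your installation):

```shell
# Assumed install location (default OFED prefix, gcc build); adjust as needed.
MVAPICH_HOME=/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0

if [ -r "$MVAPICH_HOME/etc/mvapich.sh" ]; then
    # Preferred: use the environment script shipped with the stack.
    . "$MVAPICH_HOME/etc/mvapich.sh"
else
    # Fallback: set the variables by hand.
    export PATH="$MVAPICH_HOME/bin:$PATH"
    export LD_LIBRARY_PATH="$MVAPICH_HOME/lib:${LD_LIBRARY_PATH:-}"
fi
```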


2.2 Compiling OSU MVAPICH MPI Applications:
-------------------------------------------
***Important note***: 
A valid Fortran compiler must be present in order to build MVAPICH MPI
stack and tests.

The gcc-g77 Fortran compiler is provided by default with all Red Hat
Linux releases.  SuSE distributions prior to SuSE Linux 9.0 do not
provide this compiler as part of the default installation.

The following compilers are supported by OFED's OSU MPI package: gcc,
intel and pathscale.  The install script will prompt you to choose
the compiler(s) with which to build the OSU MVAPICH MPI RPM(s).

For details see:
  http://nowlab.cse.ohio-state.edu/projects/mpi-iba/mvapich_user_guide.html

To review the default configuration of the installation check the default
configuration file: <prefix>/mpi/<compiler>/<mpi stack>/etc/mvapich.conf
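Like Open MPI (section 3.2), MVAPICH ships wrapper compilers (mpicc, mpif77, and so on) under its bin directory that add the required MPI flags. A hedged sketch, where the install prefix is the document's default and my_mpi_application.c is a placeholder file name:

```shell
# Assumed install location (default OFED prefix, gcc build).
MVAPICH_HOME=/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0
MPICC="$MVAPICH_HOME/bin/mpicc"

if [ -x "$MPICC" ]; then
    # Compile a C MPI program with the wrapper compiler.
    "$MPICC" my_mpi_application.c -o my_mpi_application
else
    echo "mpicc not found at $MPICC -- adjust MVAPICH_HOME for your site"
fi
```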

2.3 Running OSU MVAPICH MPI Applications:
----------------------------------------- 
Requirements:
o At least two nodes. For example: mtlm01, mtlm02
o A machine file listing the nodes, one per line. For example: /root/cluster
o Bidirectional rsh or ssh access without a password.
 
Note for OSU: ssh is used unless -rsh is specified. To use rsh, add the
-rsh parameter to the mpirun_rsh command.
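The machine file named in the requirements above is simply a plain-text list of hostnames, one per line. A minimal sketch; the hostnames are the placeholders from the example above, and /tmp/cluster is an arbitrary location:

```shell
# Write a two-node machine file, one hostname per line.
cat > /tmp/cluster <<'EOF'
mtlm01
mtlm02
EOF
```

Pass the resulting file to mpirun_rsh via -hostfile /tmp/cluster.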

*** Running OSU-bw/lt tests (1000 iterations with a 16-byte packet)***

/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/bin/mpirun_rsh -np 2 \
    -hostfile /root/cluster \
    /usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/tests/osutests-1.0/bw 1000 16

/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/bin/mpirun_rsh -np 2 \
    -hostfile /root/cluster \
    /usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/tests/osutests-1.0/lt 1000 16

*** Running Pallas test (Full test) ***

/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/bin/mpirun_rsh -np 2 \
    -hostfile /root/cluster \
    /usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/tests/PMB2.2.1/PMB-MPI1
 
*** Running Presta test ***

/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/bin/mpirun_rsh -np 2 \
    -hostfile /root/cluster \
    /usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/tests/presta1.2/allred 10 10 1000

/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/bin/mpirun_rsh -np 2 \
    -hostfile /root/cluster \
    /usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/tests/presta1.2/allred -o 10

/usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/bin/mpirun_rsh -np 2 \
    -hostfile /root/cluster \
    /usr/local/ofed/mpi/gcc/mvapich-0.9.7-mlx2.1.0/tests/presta1.2/allred


3. Open MPI
===========

Open MPI is a next-generation MPI implementation from the Open MPI
Project (http://www.open-mpi.org/).  The version included in this
release of OFED is a prerelease of version 1.1 of Open MPI, also
available directly from the main Open MPI web site.  This MPI stack is
being offered in OFED as a "technology preview," meaning that it is
not yet officially supported.  It is expected that future releases of
OFED will have stable, supported versions of Open MPI.

A working Fortran compiler is not required to build Open MPI, but some
of the included MPI tests are written in Fortran, and will not
compile/run if Open MPI is built without Fortran support.

Users should check the main Open MPI web site for additional
documentation and support (e.g., the web site FAQ specifically
mentions InfiniBand tuning, etc.).

Open MPI supports building with any available compilers: GNU,
Pathscale, Intel, or Portland.  The install script will prompt you to
choose the compiler(s) with which to build the Open MPI RPM(s).

3.1 Setting up for Open MPI:
----------------------------
The Open MPI Team strongly advises that users put the Open MPI
installation directory in their PATH and LD_LIBRARY_PATH (this can be
done at the system level if all users are going to use Open MPI).
Specifically:

- add <prefix>/bin to PATH
- add <prefix>/lib to LD_LIBRARY_PATH

(where <prefix> is the directory where Open MPI was installed; this
will be specific to which compiler you used to install Open MPI)
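In bash syntax, the two additions look like this. The prefix shown is an assumption based on the default OFED layout and a gcc build; adjust it to match your compiler and install prefix (csh users would use setenv instead):

```shell
# Assumed Open MPI install prefix (default OFED layout, gcc build).
OMPI_HOME=/usr/local/ofed/mpi/gcc/openmpi-1.1a3

# Prepend the Open MPI bin and lib directories.
export PATH="$OMPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$OMPI_HOME/lib:${LD_LIBRARY_PATH:-}"
```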

If using rsh or ssh to launch MPI jobs, you *must* set these values in
your shell's startup files (e.g., .bashrc, .cshrc, etc.).  If you are
using a job scheduler to launch MPI jobs (e.g., SLURM, Torque),
setting the PATH and LD_LIBRARY_PATH is still required, but it does
not need to be set in your shell startup files.  Procedures describing
how to add these values to PATH and LD_LIBRARY_PATH are described in
detail at:
    http://www.open-mpi.org/faq/?category=running

3.2 Compiling Open MPI Applications:
------------------------------------
(copied from http://www.open-mpi.org/faq/?category=mpi-apps -- see 
this web page for more details)

The Open MPI team strongly recommends that you simply use Open MPI's
"wrapper" compilers to compile your MPI applications. That is, instead
of using (for example) gcc to compile your program, use mpicc. Open
MPI provides a wrapper compiler for four languages:

          Language       Wrapper compiler name 
          -------------  --------------------------------
          C              mpicc
          C++            mpiCC, mpicxx, or mpic++
                         (note that mpiCC will not exist
                          on case-insensitive file-systems)
          Fortran 77     mpif77
          Fortran 90     mpif90
          -------------  --------------------------------

Note that if no Fortran 77 or Fortran 90 compilers were found when
Open MPI was built, Fortran 77 and 90 support will automatically be
disabled (respectively).

If you expect to compile your program as: 

    shell$ gcc my_mpi_application.c -lmpi -o my_mpi_application
 
Simply use the following instead: 

    shell$ mpicc my_mpi_application.c -o my_mpi_application

Specifically: simply adding "-lmpi" to your normal compile/link
command line *will not work*.  See
http://www.open-mpi.org/faq/?category=mpi-apps if you cannot use the
Open MPI wrapper compilers.
 
Note that Open MPI's wrapper compilers do not do any actual compiling
or linking; all they do is manipulate the command line and add in all
the relevant compiler / linker flags and then invoke the underlying
compiler / linker (hence, the name "wrapper" compiler). More
specifically, if you run into a compiler or linker error, check your
source code and/or back-end compiler -- it is usually not the fault of
the Open MPI wrapper compiler.
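One way to see exactly what the wrapper adds is Open MPI's -showme option, which prints the underlying compile/link command line without running it. A small sketch, guarded in case mpicc is not on the PATH (note that -showme is Open MPI-specific; MVAPICH's wrappers use different options):

```shell
# Print the full back-end command the wrapper would invoke, if available.
if command -v mpicc >/dev/null 2>&1; then
    WRAPPER_CMD=$(mpicc -showme 2>/dev/null || echo "mpicc present but -showme unsupported")
else
    WRAPPER_CMD="mpicc not found -- source your MPI environment first"
fi
echo "$WRAPPER_CMD"
```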

3.3 Running Open MPI Applications:
----------------------------------
Open MPI uses either the "mpirun" or "mpiexec" commands to launch
applications.  If your cluster uses a resource manager (such as SLURM
or Torque), providing a hostfile is not necessary:

    shell$ mpirun -np 4 my_mpi_application

If you are using rsh/ssh to launch applications, you need to ensure
that rsh and ssh will not prompt you for a password (see
http://www.open-mpi.org/faq/?category=rsh for more details on this
topic) and also provide a hostfile listing the hosts to run on.  For
example:

    shell$ cat hostfile
    node1.example.com
    node2.example.com
    node3.example.com
    node4.example.com
    shell$ mpirun -np 4 -hostfile hostfile my_mpi_application
    <application runs on all 4 nodes>

In the following examples, replace <N> with the number of processes to
launch, and <HOSTFILE> with the filename of a valid hostfile listing
the nodes to run on.

Example 1: Running the OSU bandwidth (bw) and latency (lt)
tests with 1000 iterations and a 16-byte message:

    shell$ cd /usr/local/ofed/mpi/gcc/openmpi-1.1a3/tests/osutests-1.0
    shell$ mpirun -np <N> -hostfile <HOSTFILE> bw 1000 16

Example 2: Running the Pallas benchmarks:

    shell$ cd /usr/local/ofed/mpi/gcc/openmpi-1.1a3/tests/PMB2.2.1
    shell$ mpirun -np <N> -hostfile <HOSTFILE> PMB-MPI1

Example 3: Running the Presta benchmarks:

    shell$ cd /usr/local/ofed/mpi/gcc/openmpi-1.1a3/tests/presta1.2
    shell$ mpirun -np <N> -hostfile <HOSTFILE> allred 10 10 1000

