Commit 8fab865f authored by Moe Jette's avatar Moe Jette

Add description of Open MPI job launch technique.

parent 7411d072
@@ -161,13 +161,24 @@ batch 1 DOWN 2 3448 82306 adev8
Instructions for using several varieties of MPI with SLURM are
provided below.</p>
-<p> <a href="http://www.quadrics.com/">Quadrics MPI</a> relies upon SLURM to
+<p> <a href="http://www.open-mpi.org/"><b>Open MPI</b></a> relies upon
+SLURM to allocate resources for the job and then mpirun to initiate the
+tasks. For example:
+<pre>
+$ srun -n4 -A    # allocates four processors and spawns shell for job
+$ mpirun -np 4 a.out
+$ exit           # exits shell spawned by initial srun command
+</pre>
+<p> <a href="http://www.quadrics.com/"><b>Quadrics MPI</b></a> relies upon SLURM to
allocate resources for the job and <span class="commandline">srun</span>
to initiate the tasks. One would build the MPI program in the normal manner
then initiate it using a command line of this sort:</p>
-<p class="commandline"> srun [OPTIONS] &lt;program&gt; [program args]</p>
+<pre>
+$ srun [OPTIONS] &lt;program&gt; [program args]
+</pre>
-<p> <a href="http://www.lam-mpi.org/">LAM/MPI</a> relies upon the SLURM
+<p> <a href="http://www.lam-mpi.org/"><b>LAM/MPI</b></a> relies upon the SLURM
<span class="commandline">srun</span> command to allocate resources using
either the <span class="commandline">--allocate</span> or the
<span class="commandline">--batch</span> option. In either case, specify
@@ -178,7 +189,7 @@ the maximum number of tasks required for the job. Then execute the
Do not directly execute the <span class="commandline">srun</span> command
to launch LAM/MPI tasks. For example:
<pre>
-$ srun -n16 -A   # allocates resources and spawns shell for job
+$ srun -n16 -A   # allocates 16 processors and spawns shell for job
$ lamboot
$ mpirun -np 16 foo args
1234 foo running on adev0 (o)
@@ -187,16 +198,17 @@ etc.
$ lamclean
$ lamhalt
$ exit # exits shell spawned by initial srun command
-</pre> <p class="footer"><a href="#top">top</a></p>
+</pre>
+<p class="footer"><a href="#top">top</a></p>
-<p><a href="http://www.hp.com/go/mpi">HP-MPI</a> uses the
+<p><a href="http://www.hp.com/go/mpi"><b>HP-MPI</b></a> uses the
<span class="commandline">mpirun</span> command with the <b>-srun</b>
option to launch jobs. For example:
<pre>
$MPI_ROOT/bin/mpirun -TCP -srun -N8 ./a.out
</pre></p>
-<p><a href="http://www-unix.mcs.anl.gov/mpi/mpich2/">MPICH2</a> jobs
+<p><a href="http://www-unix.mcs.anl.gov/mpi/mpich2/"><b>MPICH2</b></a> jobs
are launched using the <b>srun</b> command. Just link your program with
SLURM's implementation of the PMI library so that tasks can communicate
host and port information at startup. For example:
@@ -212,7 +224,7 @@ library integrated with SLURM</li>
of 1 or higher for the PMI library to print debugging information</li>
</ul></p>
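The link-and-launch sequence described above can be sketched as follows. This is a minimal illustration, not part of the commit: the program name, library path, and task count are hypothetical, and the exact PMI library location varies by installation.

```shell
# Compile an MPICH2 program and link it against SLURM's PMI library
# (the -L path is illustrative; adjust to the local SLURM install).
mpicc -o hello hello.c -L/usr/lib64/slurm -lpmi

# Launch 8 tasks; srun supplies host and port information to each
# task at startup through the PMI library.
srun -n8 ./hello
```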
-<p><a href="http://www.research.ibm.com/bluegene/">BlueGene MPI</a> relies
+<p><a href="http://www.research.ibm.com/bluegene/"><b>BlueGene MPI</b></a> relies
upon SLURM to create the resource allocation and then uses the native
<span class="commandline">mpirun</span> command to launch tasks.
Build a job script containing one or more invocations of the
@@ -220,13 +232,13 @@ Build a job script containing one or more invocations of the
the script to SLURM using <span class="commandline">srun</span>
command with the <b>--batch</b> option. For example:
<pre>
-srun -N2 --batch my.script
+$ srun -N2 --batch my.script
</pre>
Note that the node count specified with the <i>-N</i> option indicates
the base partition count.
See <a href="bluegene.html">BlueGene User and Administrator Guide</a>
for more information.</p>
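A job script of the kind described above might look like the following minimal sketch. The script contents are illustrative assumptions, not from the commit: program names and task counts would depend on the actual job.

```shell
#!/bin/sh
# my.script: invoked by "srun -N2 --batch my.script" after SLURM
# creates the allocation; BlueGene's native mpirun launches the tasks.
mpirun -np 32 ./a.out
mpirun -np 32 ./b.out arg1
```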
-<p style="text-align:center;">Last modified 6 December 2005</p>
+<p style="text-align:center;">Last modified 18 January 2006</p>
<!--#include virtual="footer.txt"-->