diff --git a/doc/html/quickstart.shtml b/doc/html/quickstart.shtml
index 75b89f63273b190b298c2d7aabc715ec51bd39ad..c21b9f1843b7dbc7ca0cce59d4b33de25c3ef8b3 100644
--- a/doc/html/quickstart.shtml
+++ b/doc/html/quickstart.shtml
@@ -161,13 +161,24 @@ batch          1 DOWN         2      3448       82306 adev8
 Instructions for using several varieties of MPI with SLURM are
 provided below.</p> 
 
-<p> <a href="http://www.quadrics.com/">Quadrics MPI</a> relies upon SLURM to 
+<p> <a href="http://www.open-mpi.org/"><b>Open MPI</b></a> relies upon
+SLURM to allocate resources for the job and then mpirun to initiate the 
+tasks. For example:
+<pre>
+$ srun -n4 -A      # allocates 4 processors and spawns shell for job
+$ mpirun -np 4 a.out
+$ exit          # exits shell spawned by initial srun command
+</pre>
+
+<p> <a href="http://www.quadrics.com/"><b>Quadrics MPI</b></a> relies upon SLURM to 
 allocate resources for the job and <span class="commandline">srun</span> 
 to initiate the tasks. One would build the MPI program in the normal manner 
 then initiate it using a command line of this sort:</p>
-<p class="commandline"> srun [OPTIONS] &lt;program&gt; [program args]</p>
+<pre>
+$ srun [OPTIONS] &lt;program&gt; [program args]
+</pre>
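+<p>For instance, a 16-task run of a hypothetical program
+<span class="commandline">a.out</span> might look like:
+<pre>
+$ srun -n16 a.out
+</pre>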
 
-<p> <a href="http://www.lam-mpi.org/">LAM/MPI</a> relies upon the SLURM 
+<p> <a href="http://www.lam-mpi.org/"><b>LAM/MPI</b></a> relies upon the SLURM 
 <span class="commandline">srun</span> command to allocate resources using 
 either the <span class="commandline">--allocate</span> or the 
 <span class="commandline">--batch</span> option. In either case, specify 
@@ -178,7 +189,7 @@ the maximum number of tasks required for the job. Then execute the
 Do not directly execute the <span class="commandline">srun</span> command 
 to launch LAM/MPI tasks. For example: 
 <pre>
-$ srun -n16 -A     # allocates resources and spawns shell for job
+$ srun -n16 -A     # allocates 16 processors and spawns shell for job
 $ lamboot
 $ mpirun -np 16 foo args
 1234 foo running on adev0 (o)
@@ -187,16 +198,17 @@ etc.
 $ lamclean
 $ lamhalt
 $ exit             # exits shell spawned by initial srun command
-</pre> <p class="footer"><a href="#top">top</a></p>
+</pre>
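+<p>The same steps can be run non-interactively by placing them in a
+script and submitting it with the
+<span class="commandline">--batch</span> option instead. A minimal
+sketch, assuming a script named <i>lamjob.sh</i> (the script name and
+argument values are only illustrative):
+<pre>
+$ cat lamjob.sh
+#!/bin/sh
+lamboot
+mpirun -np 16 foo args
+lamclean
+lamhalt
+$ srun -n16 --batch lamjob.sh
+</pre>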
+<p class="footer"><a href="#top">top</a></p>
 
-<p><a href="http://www.hp.com/go/mpi">HP-MPI</a> uses the 
+<p><a href="http://www.hp.com/go/mpi"><b>HP-MPI</b></a> uses the 
 <span class="commandline">mpirun</span> command with the <b>-srun</b> 
 option to launch jobs. For example:
 <pre>
 $MPI_ROOT/bin/mpirun -TCP -srun -N8 ./a.out
 </pre></p>
 
-<p><a href="http:://www-unix.mcs.anl.gov/mpi/mpich2/">MPICH2</a> jobs 
+<p><a href="http:://www-unix.mcs.anl.gov/mpi/mpich2/"><b>MPICH2</b></a> jobs 
 are launched using the <b>srun</b> command. Just link your program with 
 SLURM's implementation of the PMI library so that tasks can communicate 
 host and port information at startup. For example:
@@ -212,7 +224,7 @@ libary integrated with SLURM</li>
 of 1 or higher for the PMI library to print debugging information</li>
 </ul></p>
 
-<p><a href="http://www.research.ibm.com/bluegene/">BlueGene MPI</a> relies 
+<p><a href="http://www.research.ibm.com/bluegene/"><b>BlueGene MPI</b></a> relies 
 upon SLURM to create the resource allocation and then uses the native
 <span class="commandline">mpirun</span> command to launch tasks. 
 Build a job script containing one or more invocations of the 
@@ -220,13 +232,13 @@ Build a job script containing one or more invocations of the
 the script to SLURM using the <span class="commandline">srun</span>
 command with the <b>--batch</b> option. For example:
 <pre>
-srun -N2 --batch my.script
+$ srun -N2 --batch my.script
 </pre>
 Note that the node count specified with the <i>-N</i> option indicates
 the base partition count.
 See <a href="bluegene.html">BlueGene User and Administrator Guide</a> 
 for more information.</p>
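+<p>As a rough sketch, <i>my.script</i> might contain little more than a
+series of <span class="commandline">mpirun</span> invocations (the
+program names below are placeholders and the exact
+<span class="commandline">mpirun</span> options depend on the BlueGene
+installation):
+<pre>
+#!/bin/sh
+mpirun -np 64 prog1
+mpirun -np 64 prog2
+</pre>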
 
-<p style="text-align:center;">Last modified 6 December 2005</p>
+<p style="text-align:center;">Last modified 18 January 2006</p>
 
 <!--#include virtual="footer.txt"-->