From a120dfee1daf5bc6d9962ac4242f7d0873f56db3 Mon Sep 17 00:00:00 2001
From: Moe Jette <jette1@llnl.gov>
Date: Thu, 13 Nov 2008 21:55:32 +0000
Subject: [PATCH] Fix a bunch of typos

---
 doc/html/cons_res.shtml        |  2 +-
 doc/html/cons_res_share.shtml  | 12 +++++++++++-
 doc/html/crypto_plugins.shtml  |  2 +-
 doc/html/documentation.shtml   |  2 +-
 doc/html/faq.shtml             | 24 ++++++++++++------------
 doc/html/gang_scheduling.shtml | 12 +++++++++++-
 doc/html/ibm.shtml             |  8 ++++----
 doc/html/jobcompplugins.shtml  |  6 +++---
 doc/html/maui.shtml            |  6 +++---
 doc/html/mc_support.shtml      |  6 +++---
 doc/html/mpiplugins.shtml      |  8 ++++----
 doc/html/news.shtml            | 19 +++++++++++--------
 doc/html/overview.shtml        | 23 +++++++++++++----------
 doc/html/platforms.shtml       |  6 +++---
 doc/html/plugins.shtml         |  2 +-
 15 files changed, 82 insertions(+), 56 deletions(-)

diff --git a/doc/html/cons_res.shtml b/doc/html/cons_res.shtml
index 368810a9ebc..db690ac339c 100644
--- a/doc/html/cons_res.shtml
+++ b/doc/html/cons_res.shtml
@@ -459,7 +459,7 @@ JOBID PARTITION   NAME   USER  ST   TIME  NODES NODELIST(REASON)
     5       lsf  sleep   root   R   1:52      3 linux[01-03]
 </pre>
 
-<p>Job 3 and Job 4 have finshed and Job 5 is still running on nodes linux[01-03].</p>
+<p>Job 3 and Job 4 have finished and Job 5 is still running on nodes linux[01-03].</p>
 
 <p>The advantage of the consumable resource scheduling policy
 is that the job throughput can increase dramatically. The overall job
diff --git a/doc/html/cons_res_share.shtml b/doc/html/cons_res_share.shtml
index 2221f4a2e58..138e29ca3f0 100644
--- a/doc/html/cons_res_share.shtml
+++ b/doc/html/cons_res_share.shtml
@@ -215,7 +215,17 @@ suited for use with the <CODE>select/cons_res</CODE> plugin which can
 allocate individual CPUs to jobs.</P>
 
 <P>Default and maximum values for memory on a per node or per CPU basis can 
-be configued using the following options: <CODE>DefMemPerCPU</CODE>,
+be configured using the following options: <CODE>DefMemPerCPU</CODE>,
 <CODE>DefMemPerNode</CODE>, <CODE>MaxMemPerCPU</CODE> and <CODE>MaxMemPerNode</CODE>.
 Users can use the <CODE>--mem</CODE> or <CODE>--mem-per-cpu</CODE> option
 at job submission time to specify their memory requirements.
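+
+<P>A minimal illustrative configuration (example values only; appropriate
+limits are site-specific):</P>
+<PRE>
+SelectType=select/cons_res
+DefMemPerCPU=1024    # default: 1024 MB per allocated CPU
+MaxMemPerCPU=2048    # cap: 2048 MB per allocated CPU
+</PRE>
+<P>A user could then override the default at submission time with, for
+example, <CODE>srun --mem-per-cpu=2048 ...</CODE>.</P>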
diff --git a/doc/html/crypto_plugins.shtml b/doc/html/crypto_plugins.shtml
index 6de9f151245..86b04f8ac43 100644
--- a/doc/html/crypto_plugins.shtml
+++ b/doc/html/crypto_plugins.shtml
@@ -12,7 +12,7 @@ This is version 0 of the API.</p>
 <p>SLURM cryptographic plugins are SLURM plugins that implement 
 a digital signature mechanism. 
 The slurmctld daemon generates a job step credential, signs it, 
-and tranmits it to an srun program. 
+and transmits it to an srun program. 
 The srun program then transmits it to the slurmd daemons directly. 
 The slurmctld daemon does not communicate directly with the slurmd 
 daemons at this time for performance reasons, but the job step 
diff --git a/doc/html/documentation.shtml b/doc/html/documentation.shtml
index 9136821b4e6..e44897bbd98 100644
--- a/doc/html/documentation.shtml
+++ b/doc/html/documentation.shtml
@@ -26,7 +26,7 @@ Also see <a href="publications.html">Publications and Presentations</a>.
 <li><a href="preempt.shtml">Preemption</a></li>
 <li><a href="maui.shtml">Maui Scheduler Integration Guide</a></li>
 <li><a href="moab.shtml">Moab Cluster Suite Integration Guide</a></li>
-<li><a href="http://docs.hp.com/en/5991-4847/ch09s02.html">Submitting Jobs throuh LSF</a></li>
+<li><a href="http://docs.hp.com/en/5991-4847/ch09s02.html">Submitting Jobs through LSF</a></li>
 <li><a href="bluegene.shtml">Blue Gene User and Administrator Guide</a></li>
 <li><a href="ibm.shtml">IBM AIX User and Administrator Guide</a></li>
 <li><a href="power_save.shtml">Power Saving Guide</a></li>
diff --git a/doc/html/faq.shtml b/doc/html/faq.shtml
index 7d23173403b..c51625e0eb2 100644
--- a/doc/html/faq.shtml
+++ b/doc/html/faq.shtml
@@ -121,7 +121,7 @@ associated with it terminate before setting it DOWN and re-booting.</p>
 <p>Note that SLURM has two configuration parameters that may be used to 
 automate some of this process.
 <i>UnkillableStepProgram</i> specifies a program to execute when 
-non-killable proceses are identified.
+non-killable processes are identified.
 <i>UnkillableStepTimeout</i> specifies how long to wait for processes
 to terminate. 
 See the "man slurm.conf" for more information about these parameters.</p>
@@ -140,14 +140,14 @@ the slurmd daemon is initiated by the init daemon with the operating
 system default limits. This may be addressed either through use of the 
 ulimit command in the /etc/sysconfig/slurm file or enabling
 <a href="#pam">PAM in SLURM</a>.</li>
-<li>The user's hard resource limits on the allocated node sre lower than 
-the same user's soft  hard resource limits on the node from which the 
+<li>The user's hard resource limits on the allocated node are lower than 
+the same user's soft resource limits on the node from which the 
 job was submitted. It is recommended that the system administrator 
 establish uniform hard resource limits for users on all nodes 
 within a cluster to prevent this from occurring.</li>
 </ul></p>
 <p>NOTE: This may produce the error message &quot;Can't propagate RLIMIT_...&quot;.
-The error message is printed only if the user explicity specifies that
+The error message is printed only if the user explicitly specifies that
 the resource limit should be propagated or the srun command is running
 with verbose logging of actions from the slurmd daemon (e.g. "srun -d6 ...").</p>
 
@@ -241,7 +241,7 @@ There are significant limitations in the current backfill scheduler plugin.
 It was designed to perform backfill node scheduling for a homogeneous cluster.
 It does not manage scheduling on individual processors (or other consumable 
 resources). It also does not update the required or excluded node list of 
-individual jobs. These are the current limiations. You can use the 
+individual jobs. These are the current limitations. You can use the 
 scontrol show command to check if these conditions apply.</p> 
 <ul>
 <li>Partition: State=UP</li>
@@ -519,7 +519,7 @@ job steps being killed?</b></a><br>
 SLURM has a configuration parameter <i>InactiveLimit</i> intended 
 to kill jobs that do not spawn any job steps for a configurable
 period of time. Your system administrator may modify the <i>InactiveLimit</i>
-to satisfy your needs. Alternatly, you can just spawn a job step
+to satisfy your needs. Alternatively, you can just spawn a job step
 at the beginning of your script to execute in the background. It
 will be purged when your script exits or your job otherwise terminates.
 A line of this sort near the beginning of your script should suffice:<br>
@@ -572,7 +572,7 @@ Set its value to one in order for DOWN nodes to automatically be
 returned to service once the <i>slurmd</i> daemon registers 
 with a valid node configuration.
 A value of zero is the default and results in a node staying DOWN 
-until an administrator explicity returns it to service using 
+until an administrator explicitly returns it to service using 
 the command &quot;scontrol update NodeName=whatever State=RESUME&quot;.
 See &quot;man slurm.conf&quot; and &quot;man scontrol&quot; for more 
 details.</p>
@@ -599,7 +599,7 @@ configure <i>SelectType=select/linear</i>.
 Each partition also has a configuration parameter <i>Shared</i>
 that enables more than one job to execute on each node. 
 See <i>man slurm.conf</i> for more information about these 
-configuration paramters.</p>
+configuration parameters.</p>
 
 <p><a name="inc_plugin"><b>6. When the SLURM daemon starts, it 
 prints &quot;cannot resolve X plugin operations&quot; and exits. 
@@ -647,7 +647,7 @@ For example, to set the locked memory limit to unlimited for all users:</p>
 </pre>
 <p>Finally, you need to disable SLURM's forwarding of the limits from the 
 session from which the <i>srun</i> initiating the job ran. By default 
-all resource limits are propogated from that session. For example, adding 
+all resource limits are propagated from that session. For example, adding 
 the following line to <i>slurm.conf</i> will prevent the locked memory 
 limit from being propagated:<i>PropagateResourceLimitsExcept=MEMLOCK</i>.</p>
 
@@ -813,7 +813,7 @@ any desired node resource specifications (<i>Procs</i>, <i>Sockets</i>,
 <i>CoresPerSocket</i>, <i>ThreadsPerCore</i>, and/or <i>TmpDisk</i>).
 SLURM will use the resource specification for each node that is 
 given in <i>slurm.conf</i> and will not check these specifications 
-against those actaully found on the node.
+against those actually found on the node.
 
 <p><a name="credential_replayed"><b>16. What does a 
 &quot;credential replayed&quot; 
@@ -967,7 +967,7 @@ in the SLURM distribution for a list of other options.
 <p><a name="slurmdbd"><b>28. Why should I use the slurmdbd instead of the
 regular database plugins?</b><br>
 While the normal storage plugins will work fine without the added
-layer of the slurmdbd there are some great benifits to using the
+layer of the slurmdbd, there are some great benefits to using the
 slurmdbd.
 
 1. Added security.  Using the slurmdbd you can have an authenticated
@@ -1007,7 +1007,7 @@ Hierarchical communications are used for sending this message. If there
 are DOWN nodes in the communications hierarchy, messages will need to 
 be re-routed. This limits SLURM's ability to tightly synchronize the 
 execution of the <i>HealthCheckProgram</i> across the cluster, which
-could adversly impact performance of parallel applications. 
+could adversely impact performance of parallel applications. 
 The use of CRON or node startup scripts may be better suited to ensure
 that <i>HealthCheckProgram</i> gets executed on nodes that are DOWN
 in SLURM. If you still want to have SLURM try to execute 
diff --git a/doc/html/gang_scheduling.shtml b/doc/html/gang_scheduling.shtml
index e8d37467bb1..ee8c060ea33 100644
--- a/doc/html/gang_scheduling.shtml
+++ b/doc/html/gang_scheduling.shtml
@@ -131,7 +131,17 @@ the "active bitmap".
 </P>
 <P>
 This <I>timeslicer thread</I> algorithm for rotating jobs is designed to prevent
-jobs from starving (remaining in the suspended state indefinitly) and to be as
+jobs from starving (remaining in the suspended state indefinitely) and to be as
 fair as possible in the distribution of runtime while still keeping all of the
 resources as busy as possible.
 </P>
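+
+<P>
+A minimal illustrative configuration (example values and node names
+only) that gang schedules up to two jobs per resource might be:
+</P>
+<PRE>
+SchedulerType=sched/gang
+SchedulerTimeSlice=30    # seconds each set of jobs runs before rotation
+PartitionName=batch Nodes=lx[0001-0016] Shared=FORCE:2
+</PRE>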
diff --git a/doc/html/ibm.shtml b/doc/html/ibm.shtml
index 2d0a7f6d789..6a22793648f 100644
--- a/doc/html/ibm.shtml
+++ b/doc/html/ibm.shtml
@@ -30,7 +30,7 @@ prior to launching tasks.</p>
 <p>Each poe invocation (or SLURM job step) can have its own network 
 specification.
 For example one poe may use IP mode communications and the next use
-User Space (US) mode communcations. 
+User Space (US) mode communications. 
 This enhancement to normal poe functionality may be accomplished by 
 setting the SLURM_NETWORK environment variable.
 The format of SLURM_NETWORK is "network.[protocol],[type],[usage],[mode]". 
@@ -46,7 +46,7 @@ One file is written for each node on which the job is executing, plus
 another for the script executing poe.
 By default, the checkpoint files will be written to the current working
 directory of the job.
-Names and locations of these files can be controled using the 
+Names and locations of these files can be controlled using the 
 environment variables <b>MP_CKPTFILE</b> and <b>MP_CKPTDIR</b>.
 Use the squeue command to identify the job and job step of interest. 
 To initiate a checkpoint in which the job step will continue execution, 
@@ -61,11 +61,11 @@ use the command: <br>
 <p>Three unique components are required to use SLURM on an IBM system.</p>
 <ol>
 <li>The Federation switch plugin is required.  
-This component is packaged with the SLURM distrbution.</li>
+This component is packaged with the SLURM distribution.</li>
 <li>A process tracking kernel extension is required. 
 This is used to ensure that all processes associated with a job 
 are tracked.
-SLURM normatlly uses session ID and process group ID on Linux systems,
+SLURM normally uses session ID and process group ID on Linux systems,
 but these mechanisms can not prevent user processes from establishing 
 their own session or process group and thus "escape" from SLURM 
 tracking.
diff --git a/doc/html/jobcompplugins.shtml b/doc/html/jobcompplugins.shtml
index e208eb51195..f7fa3871fa9 100644
--- a/doc/html/jobcompplugins.shtml
+++ b/doc/html/jobcompplugins.shtml
@@ -62,15 +62,15 @@ SLURM_SUCCESS. </p>
 <p style="margin-left:.2in"><b>Description</b>: Specify the location to be used for job logging.</p>
 <p style="margin-left:.2in"><b>Argument</b>:<span class="commandline"> location</span>&nbsp; 
 &nbsp;&nbsp;(input) specification of where logging should be done. The interpretation of 
-this string is at the discression of the plugin implementation.</p>
+this string is at the discretion of the plugin implementation.</p>
 <p style="margin-left:.2in"><b>Returns</b>: SLURM_SUCCESS if successful. On failure, 
 the plugin should return SLURM_ERROR and set the errno to an appropriate value 
 to indicate the reason for failure.</p>
 <p class="footer"><a href="#top">top</a></p>
 
 <p class="commandline">int slurm_jobcomp_log_record (struct job_record *job_ptr);</p>
-<p style="margin-left:.2in"><b>Description</b>: Note termation of a job with the specified 
-characteristics.</p>
+<p style="margin-left:.2in"><b>Description</b>: Note termin ation of a job 
+with the specified characteristics.</p>
 <p style="margin-left:.2in"><b>Argument</b>: <br>
 <span class="commandline"> job_ptr</span>&nbsp;&nbsp;&nbsp;(input) Pointer to job record as defined
 in <i>src/slurmctld/slurmctld.h</i></p>
diff --git a/doc/html/maui.shtml b/doc/html/maui.shtml
index 51667d14574..98eb3e8415b 100644
--- a/doc/html/maui.shtml
+++ b/doc/html/maui.shtml
@@ -32,12 +32,12 @@ Then build Maui from its source distribution. This is a two step process:</p>
 <p>The key of 42 is arbitrary. You can use any value, but it will need to 
 be a number no larger than 4,294,967,295 (2^32 - 1) and specify the same 
 value as a SLURM configuration parameter described below.
-Maui developers have assured us the authenticaion key will eventually be 
+Maui developers have assured us the authentication key will eventually be 
 set in a configuration file rather than at build time.</p>
 
 <p>Update the Maui configuration file <i>maui.conf</i> (Copy the file
 maui-3.2.6p9/maui.cfg.dist to maui.conf). Add the following configuration 
-paramters to maui.conf:</p>
+parameters to maui.conf:</p>
 <pre>
 RMCFG[host]       TYPE=WIKI
 RMPORT            7321            # selected port
@@ -94,7 +94,7 @@ SchedulerPort=7321
 SchedulerAuth=42 (for Slurm version 1.1 and earlier only)
 </pre>
 <p>In this case, "SchedulerAuth" has been set to 42, which was the 
-authenticaiton key specified when Maui was configured above. 
+authentication key specified when Maui was configured above. 
 Just make sure the numbers match.</p>
 
 <p>For Slurm version 1.2 or higher, the authentication key 
diff --git a/doc/html/mc_support.shtml b/doc/html/mc_support.shtml
index 74197368b58..0944446c7b7 100644
--- a/doc/html/mc_support.shtml
+++ b/doc/html/mc_support.shtml
@@ -79,7 +79,7 @@ to dedicate to a job (minimum or range)
 </td></tr>
 <tr>
     <td> -B <i>S[:C[:T]]</i></td>
-    <td> Combined shorcut option for --sockets-per-node, --cores-per_cpu, --threads-per_core
+    <td> Combined shortcut option for --sockets-per-node, --cores-per-socket, --threads-per-core
 </td></tr>
 <tr><td colspan=2>
 <b><a href="#srun_dist">New Distributions</b>
@@ -356,7 +356,7 @@ via <a href="configurator.html">configurator.html</a>.
 <p>The <tt>--ntasks-per-{node,socket,core}=<i>ntasks</i></tt> flags
 allow the user to request that no more than <tt><i>ntasks</i></tt>
 be invoked on each node, socket, or core.
-This is similiar to using <tt>--cpus-per-task=<i>ncpus</i></tt>
+This is similar to using <tt>--cpus-per-task=<i>ncpus</i></tt>
 but does not require knowledge of the actual number of cpus on
 each node.  In some cases, it is more convenient to be able to
 request that no more than a specific number of ntasks be invoked
@@ -896,7 +896,7 @@ JOBID ST TIME NODES MIN_PROCS MIN_SOCKETS MIN_CORES MIN_THREADS
 
 <p>The 'scontrol show job' command can be used to display
 the number of allocated CPUs per node as well as the sockets, cores,
-and threads specified in the request and contraints.
+and threads specified in the request and constraints.
 
 <PRE>
 % srun -N 2 -B 2:1-1 sleep 100 &
diff --git a/doc/html/mpiplugins.shtml b/doc/html/mpiplugins.shtml
index bffb028a7c1..96c73e2d8bd 100644
--- a/doc/html/mpiplugins.shtml
+++ b/doc/html/mpiplugins.shtml
@@ -34,7 +34,7 @@ srun calls
 <br>
 <i>mpi_p_thr_create((srun_job_t *)job);</i>
 <br>
-which will set up the correct enviornment for the specified mpi.
+which will set up the correct environment for the specified mpi.
 <br>
 slurmd daemon runs
 <br>
@@ -48,7 +48,7 @@ which will set configure the slurmd to use the correct mpi as well to interact w
 <h2>Data Objects</h2>
 <p> These functions are expected to read and/or modify data structures directly in 
 the slurmd daemon's and srun memory. Slurmd is a multi-threaded program with independent 
-read and write locks on each data structure type. Thererfore the type of operations 
+read and write locks on each data structure type. Therefore the type of operations 
 permitted on various data structures is identified for each function.</p>
 
 <p class="footer"><a href="#top">top</a></p>
@@ -63,14 +63,14 @@ to that of the correct mpi.</p>
 <p style="margin-left:.2in"><b>Arguments</b>:<br><span class="commandline"> job</span>&nbsp; 
 &nbsp;&nbsp;(input) Pointer to the slurmd_job that is running.  Cannot be NULL.<br>
 <span class="commandline"> rank</span>&nbsp;
-&nbsp;&nbsp;(input) Primarially there for MVAPICH.  Used to send the rank fo the mpirun job. 
+&nbsp;&nbsp;(input) Primarily there for MVAPICH.  Used to send the rank of the mpirun job. 
 This can be 0 if no rank information is needed for the mpi type.</p>
 <p style="margin-left:.2in"><b>Returns</b>: SLURM_SUCCESS if successful. On failure, 
 the plugin should return SLURM_ERROR.</p>
 
 <p class="commandline">int mpi_p_thr_create (srun_job_t *job);</p>
 <p style="margin-left:.2in"><b>Description</b>: Used by srun to spawn the thread for the mpi processes. 
-Most all the real proccessing happens here.</p>
+Almost all of the real processing happens here.</p>
 <p style="margin-left:.2in"><b>Arguments</b>:<span class="commandline"> job</span>&nbsp; 
 &nbsp;&nbsp;(input) Pointer to the srun_job that is running.  Cannot be NULL.</p>
 <p style="margin-left:.2in"><b>Returns</b>: SLURM_SUCCESS if successful. On failure, 
diff --git a/doc/html/news.shtml b/doc/html/news.shtml
index 9dacf3ecc9b..ca9497bed0f 100644
--- a/doc/html/news.shtml
+++ b/doc/html/news.shtml
@@ -22,8 +22,8 @@ Major enhancements include:
 <li>Support for binding tasks to the memory on a processor.</li>
 <li>The configuration parameter <i>HeartbeatInterval</i> is defunct.
 Half the values of configuration parameters <i>SlurmdTimeout</i> and 
-<i>SlurmctldTimeout</i> are used as the commununication frequency for 
-the slurmctld and slurmd daemons respecitively.</li>
+<i>SlurmctldTimeout</i> are used as the communication frequency for 
+the slurmctld and slurmd daemons, respectively.</li>
 <li>Support for PAM to control resource limits by user on each 
 compute node used. See <i>UsePAM</i> configuration parameter.</li>
 <li>Support added for <i>xcpu</i> job launch.</li>
@@ -66,7 +66,7 @@ task launch directly from the <i>srun</i> command.</li>
 </ul>
 
 <h2><a name="13">Major Updates in SLURM Version 1.3</a></h2>
-<p>SLURM Version 1.3 was relased in March 2008.
+<p>SLURM Version 1.3 was released in March 2008.
 Major enhancements include:
 <ul>
 <li>Job accounting and completion data can be stored in a database 
@@ -77,22 +77,25 @@ database support across multiple clusters.</li>
 without an external scheduler).</li>
 <li>Cryptography logic moved to a separate plugin with the 
 option of using OpenSSL (default) or Munge (GPL).</li>
-<li>Improved scheduling of multple job steps within a job's allocation.</li>
+<li>Improved scheduling of multiple job steps within a job's allocation.</li>
 <li>Support for job specification of node features with node counts.</li> 
 <li><i>srun</i>'s --alloc, --attach, and --batch options removed (use 
 <i>salloc</i>, <i>sattach</i> or <i>sbatch</i> commands instead).</li>
-<li><i>srun --pty</i> option added to support remote pseudo terminial for 
+<li><i>srun --pty</i> option added to support remote pseudo terminal for 
 spawned tasks.</li>
 <li>Support added for a much richer job dependency specification
 including testing of exit codes and multiple dependencies.</li>
 </ul>
 
 <h2><a name="14">Major Updates in SLURM Version 1.4</a></h2>
-<p>SLURM Version 1.4 is scheduled for relased in May 2009.
+<p>SLURM Version 1.4 is scheduled for release in May 2009.
 Major enhancements include:
 <ul>
 <li>Idle nodes can now be completely powered down and automatically
 restarted when there is work available.</li>
+<li>Jobs in higher priority partitions (queues) can automatically preempt jobs
+in lower priority queues. The preempted jobs will automatically resume execution
+upon completion of the higher priority job.</li>
 <li>Specific cores are allocated to jobs and job steps in order to effectively
 preempt or gang schedule jobs.</li>
 <li>A new configuration parameter, <i>PrologSlurmctld</i>, can be used to 
@@ -103,13 +106,13 @@ support the booting of different operating systems for each job.</li>
 <p> Detailed plans for release dates and contents of future SLURM releases have 
 not been finalized. Anyone desiring to perform SLURM development should notify
 <a href="mailto:slurm-dev@lists.llnl.gov">slurm-dev@lists.llnl.gov</a>
-to coordinate activies. Future development plans includes:
+to coordinate activities. Future development plans include:
 <ul>
 <li>Permit resource allocations (jobs) to change size.</li>
 <li>Add Kerberos credential support including credential forwarding 
 and refresh.</li>
 </ul>
 
-<p style="text-align:center;">Last modified 6 October 2008</p>
+<p style="text-align:center;">Last modified 13 November 2008</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/overview.shtml b/doc/html/overview.shtml
index 5184d84641e..ce6d4743451 100644
--- a/doc/html/overview.shtml
+++ b/doc/html/overview.shtml
@@ -15,8 +15,7 @@ it arbitrates contention for resources by managing a queue of pending work.</p>
 <a href="https://www.llnl.gov/">Lawrence Livermore National Laboratory (LLNL)</a>,
 <a href="http://www.hp.com/">Hewlett-Packard</a>, 
 <a href="http://www.bull.com/">Bull</a>,
-<a href="http://www.lnxi.com/">Linux NetworX</a> and many other contributors.
-HP distributes and supports SLURM as a component in their XC System Software.</p>
+<a href="http://www.lnxi.com/">Linux NetworX</a> and many other contributors.</p>
 
 <h2>Architecture</h2>
 <p>SLURM has a centralized manager, <b>slurmctld</b>, to monitor resources and 
@@ -24,18 +23,22 @@ work. There may also be a backup manager to assume those responsibilities in the
 event of failure. Each compute server (node) has a <b>slurmd</b> daemon, which 
 can be compared to a remote shell: it waits for work, executes that work, returns 
 status, and waits for more work. 
-The <b>slurmd</b> daemons provide fault-tolerant hierarchical communciations.
+The <b>slurmd</b> daemons provide fault-tolerant hierarchical communications.
 There is an optional <b>slurmdbd</b> (Slurm DataBase Daemon) which can be used
 to record accounting information for multiple Slurm-managed clusters in a 
 single database.
 User tools include <b>srun</b> to initiate jobs, 
-<b>scancel</b> to terminate queued or running jobs, <b>sinfo</b> to report system 
-status, <b>squeue</b> to report the status of jobs, <b>sacct</b> to get information 
-about jobs and job steps that are running or have completed. 
+<b>scancel</b> to terminate queued or running jobs, 
+<b>sinfo</b> to report system status, 
+<b>squeue</b> to report the status of jobs, and 
+<b>sacct</b> to get information about jobs and job steps that are running or have completed.
 The <b>smap</b> and <b>sview</b> commands graphically report system and 
-job status including network topology. There is also an administrative 
-tool <b>scontrol</b> available to monitor and/or modify configuration and state 
-information. APIs are available for all functions.</p>
+job status including network topology. 
+There is an administrative tool <b>scontrol</b> available to monitor 
+and/or modify configuration and state information on the cluster. 
+The administrative tool used to manage the database is <b>sacctmgr</b>.
+It can be used to identify the clusters, valid users, valid bank accounts, etc.
+APIs are available for all functions.</p>
 
 <div class="figure">
   <img src="arch.gif" width="550"><br>
@@ -167,6 +170,6 @@ PartitionName=DEFAULT MaxTime=UNLIMITED MaxNodes=4096
 PartitionName=batch Nodes=lx[0041-9999]
 </pre>
 
-<p style="text-align:center;">Last modified 11 March 2008</p>
+<p style="text-align:center;">Last modified 13 November 2008</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/platforms.shtml b/doc/html/platforms.shtml
index c2a83ce31f3..5e06b83efaf 100644
--- a/doc/html/platforms.shtml
+++ b/doc/html/platforms.shtml
@@ -11,8 +11,8 @@ distributions using i386, ia64, and x86_64 architectures.</li>
 </ul>
 <h2>Interconnects</h2>
 <ul>
-<li><b>Blue Gene</b>&#151;SLURM support for IBM's Blue Gene system has been
-thoroughly tested.</li>
+<li><b>BlueGene</b>&#151;SLURM support for IBM's BlueGene/L and BlueGene/P 
+systems has been thoroughly tested.</li>
 <li><b>Ethernet</b>&#151;Ethernet requires no special support from SLURM and has 
 been thoroughly tested.</li>
 <li><b>IBM Federation</b>&#151;SLURM support for IBM's Federation Switch 
@@ -24,6 +24,6 @@ are available in all versions of SLURM and have been thoroughly tested.</li>
 <li><b>Other</b>&#151;SLURM ports to other systems will be gratefully accepted.</li>
 </ul>
 
-<p style="text-align:center;">Last modified 15 June 2007</p>
+<p style="text-align:center;">Last modified 13 November 2008</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/plugins.shtml b/doc/html/plugins.shtml
index 7e0c45c5f3b..f1c9dcb53ae 100644
--- a/doc/html/plugins.shtml
+++ b/doc/html/plugins.shtml
@@ -101,7 +101,7 @@ are not available, so it is the installer's job to make sure the specified libra
 are available.</p>
 <h2>Performance</h2>
 <p>All plugin functions are expected to execute very quickly. If any function 
-entails delays (e.g. transations with other systems), it should be written to 
+entails delays (e.g. transactions with other systems), it should be written to 
 utilize a thread for that functionality. This thread may be created by the 
 <span class="commandline">init()</span> function and deleted by the 
 <span class="commandline">fini()</span> functions. See <b>plugins/sched/backfill</b>
-- 
GitLab