diff --git a/doc/html/bluegene.html b/doc/html/bluegene.html
index 62df273203bb272ff3be2ab2595d97ece39c8771..43177bdcc5760e14dcb6e43f2038ec2ae2cd8744 100644
--- a/doc/html/bluegene.html
+++ b/doc/html/bluegene.html
@@ -9,7 +9,7 @@
 <meta http-equiv="keywords" content="Simple Linux Utility for Resource Management, SLURM, resource management, 
 Linux clusters, high-performance computing, Livermore Computing">
 <meta name="LLNLRandR" content="UCRL-WEB-209488">
-<meta name="LLNLRandRdate" content="24 February 2005">
+<meta name="LLNLRandRdate" content="11 May 2005">
 <meta name="distribution" content="global">
 <meta name="description" content="Simple Linux Utility for Resource Management">
 <meta name="copyright"
@@ -63,7 +63,7 @@ described in this document.</p>
 
 <p>Blue Gene systems have several unique features making for a few 
 differences in how SLURM operates there. 
-The basic unit of resource allocation is a <i>base partition</i>.
+The basic unit of resource allocation is a <i>base partition</i> or <i>midplane</i>.
 The <i>base partitions</i> are connected in a three-dimensional torus. 
 Each <i>base partition</i> includes 512 <i>c-nodes</i> each containing two processors; 
 one designed primarily for computations and the other primarily for managing communications. 
@@ -117,7 +117,7 @@ location in the X, Y and Z dimensions with a zero origin.
 For example, "bgl012" represents the base partition whose location is at X=0, Y=1 and Z=2. 
 Since jobs must be allocated consecutive nodes in all three dimensions, we have developed 
 an abbreviated format for describing the nodes in one of these three-dimensional blocks. 
-The node's prefix is followed by the end-points of the block enclosed in square-brackets. 
+The node's prefix of "bgl" is followed by the end-points of the block enclosed in square-brackets. 
 For example, " bgl[620x731]" is used to represent the eight nodes enclosed in a block 
 with endpoints bgl620 and bgl731 (bgl620, bgl621, bgl630, bgl631, bgl720, bgl721, 
 bgl730 and bgl731).</p></a>
@@ -169,7 +169,7 @@ program locating some expected files. You should see "#define HAVE_BGL 1" and
 
 <p>The slurmctld daemon should execute on the system's service node.
 If an optional backup daemon is used, it must be in some location where 
-it is capable of writing to MMCS.
+it is capable of executing the BGL Bridge APIs.
 One slurmd daemon should be configured to execute on one of the front end nodes. 
 That one slurmd daemon represents the communications channel for every base partition. 
 A future release of SLURM will support multiple slurmd daemons on multiple
@@ -187,17 +187,29 @@ The value of <i>SchedulerType</i> should be set to "sched/builtin".
 The value of <i>Prolog</i> should be set to a program that will delay 
 execution until the bglblock identified by the MPIRUN_PARTITION environment 
 variable is ready for use. It is recommended that you construct a script 
-that serves this function and calls the supplied program <i>slurm_prolog</i>.
+that serves this function and calls the supplied program <i>sbin/slurm_prolog</i>.
 The value of <i>Epilog</i> should be set to a program that will wait
 until the bglblock identified by the MPIRUN_PARTITION environment
 variable has been freed. It is recommended that you construct a script
-that serves this function and calls the supplied program <i>slurm_epilog</i>.
+that serves this function and calls the supplied program <i>sbin/slurm_epilog</i>.
 The prolog and epilog programs are used to insure proper synchronization 
-between the slurmctld daemon, the user job, and MMCS.
-Since jobs with different geometries or other characteristics do not interfere 
+between the slurmctld daemon, the user job, and MMCS.</p>
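+<p>As an illustration only (the installed path shown here is an assumption; 
+adjust it for your site), a Prolog wrapper script could be as simple as:</p>
+<pre>
+#!/bin/sh
+# Hypothetical site prolog: slurm_prolog blocks until the bglblock
+# named by the MPIRUN_PARTITION environment variable is ready for use.
+exec /usr/sbin/slurm_prolog
+</pre>
+<p>An Epilog wrapper would be identical in form, calling <i>slurm_epilog</i> 
+instead.</p>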
+
+<p>Since jobs with different geometries or other characteristics do not interfere 
 with each other's scheduling, backfill scheduling is not presently meaningful.
 SLURM's builtin scheduler on Blue Gene will sort pending jobs and then attempt 
-to schedule all of them in priority order. </p>
+to schedule all of them in priority order. 
+This essentially functions as if there were a separate queue for each job size.
+Note that SLURM does support multiple partitions, each with its own 
+scheduling parameters.
+For example, a partition for full-system jobs could be enabled to execute 
+jobs only at certain times, while a default partition could be configured 
+to execute jobs at other times. 
+Jobs can still be queued in a partition that is in the DOWN state and 
+scheduled to execute when it is changed to the UP state. 
+Nodes can also be moved between SLURM partitions, either by changing 
+the slurm.conf file and restarting the slurmctld daemon or by using 
+the scontrol command. </p>
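+<p>For example (a sketch only; the partition and node names are 
+illustrative), such partitions might be defined in <i>slurm.conf</i> and 
+later enabled with scontrol:</p>
+<pre>
+# Hypothetical slurm.conf fragment
+PartitionName=full  Nodes=bgl[000x733] State=DOWN
+PartitionName=debug Nodes=bgl[000x733] State=UP Default=YES
+
+# At the designated time, enable the full-system partition
+scontrol update PartitionName=full State=UP
+</pre>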
 
 <p>SLURM node and partition descriptions should make use of the 
 <a href="#naming">naming</a> conventions described above. For example,
@@ -225,10 +237,13 @@ Jobs must then execute in one of these pre-defined bglblocks.
 This is known as <i>static partitioning</i>. 
 Each of these bglblocks are explicitly configured with either a mesh or 
 torus interconnect.
+They must also not overlap, except for the implicitly defined full-system 
+bglblock.
 In addition to the normal <i>slurm.conf</i> file, a new 
 <i>bluegene.conf</i> configuration file is required with this information.
 Put <i>bluegene.conf</i> into the SLURM configuration directory with
-<i>slurm.conf</i>.
+<i>slurm.conf</i>. 
+A sample file is installed in <i>bluegene.conf.example</i>. 
 System administrators should use the smap tool to build appropriate 
 configuration file for static partitioning. 
 See the smap man page for more information.
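+<p>For illustration only, a static-partitioning fragment might resemble 
+the following; the keywords shown are assumptions, so rely on smap and the 
+installed example file for the authoritative syntax:</p>
+<pre>
+# Hypothetical bluegene.conf fragment: two non-overlapping bglblocks
+BPs=[000x133] Type=TORUS
+BPs=[400x733] Type=MESH
+</pre>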
@@ -255,10 +270,29 @@ At that time the <i>bluegene.conf</i> configuration file will become obsolete.
 Dynamic partition does involve substantial overhead including the 
 rebooting of c-nodes and I/O nodes.</p>
 
-<p>Assuming that you build RPMs for SLURM, note that the smap and bluegene 
-RPMs must be built on the service node (where the BGL Bridge API libraries 
-exist) and installed on both the service node and front-end nodes (which 
-lack the API libraries).</p>
+<p>SLURM versions 0.4.23 and higher are designed to utilize Blue Gene driver 
+141 (2005) or higher. This combination avoids rebooting bglblocks whenever
+possible so as to minimize the system overhead of boots (which can take tens 
+of minutes on large systems).
+When slurmctld is initially started on an idle system, the bglblocks 
+already defined in MMCS are read using the BGL Bridge APIs. 
+If these bglblocks do not correspond to those defined in the bluegene.conf 
+file, the old bglblocks with a prefix of "RMP" are destroyed and new ones 
+created. 
+When a job is scheduled, the appropriate bglblock is identified, 
+its node use (virtual or coprocessor) set, its user set, and it is 
+booted. 
+Subsequent jobs use this same bglblock without rebooting by changing 
+the associated user field.
+The bglblock will be freed and then rebooted in order to change its 
+node use (from virtual to coprocessor or vice versa). 
+Bglblocks will also be freed and rebooted when going to or from full-system 
+jobs (two or more bglblocks sharing base partitions cannot be in a 
+ready state at the same time).
+When this logic became available at LLNL, approximately 85 percent of 
+bglblock boots were eliminated and the overhead of job startup fell
+from about 24 percent to about 6 percent of total job time.
+</p>
 
 <p class="footer"><a href="#top">top</a></p></td>
 
@@ -267,7 +301,7 @@ lack the API libraries).</p>
 <td colspan="3"><hr> <p>For information about this page, contact <a href="mailto:slurm-dev@lists.llnl.gov">slurm-dev@lists.llnl.gov</a>.</p>
 <p><a href="http://www.llnl.gov/"><img align=middle src="lll.gif" width="32" height="32" border="0"></a></p>
 <p class="footer">UCRL-WEB-209488<br>
-Last modified 24 February 2005</p></td>
+Last modified 11 May 2005</p></td>
 </tr>
 </table>
 </td>