diff --git a/doc/html/arch.gif b/doc/html/arch.gif
index 6d474b6970f85e2edfad55876aee9863a326157c..f605a1c5cc6ddbd4390fdeff5b500e2217abb0cc 100644
Binary files a/doc/html/arch.gif and b/doc/html/arch.gif differ
diff --git a/doc/html/overview.shtml b/doc/html/overview.shtml
index 756745462782db557fb7a7bc457edf214723b08b..575d264e3eed2ae225b54f72c41b285e7e01a437 100644
--- a/doc/html/overview.shtml
+++ b/doc/html/overview.shtml
@@ -23,7 +23,9 @@ HP distributes and supports SLURM as a component in their XC System Software.</p
 work. There may also be a backup manager to assume those responsibilities in the 
 event of failure. Each compute server (node) has a <b>slurmd</b> daemon, which 
 can be compared to a remote shell: it waits for work, executes that work, returns 
-status, and waits for more work. User tools include <b>srun</b> to initiate jobs, 
+status, and waits for more work. 
+The <b>slurmd</b> daemons provide fault-tolerant hierarchical communications.
+User tools include <b>srun</b> to initiate jobs, 
 <b>scancel</b> to terminate queued or running jobs, <b>sinfo</b> to report system 
 status, and <b>squeue</b> to report the status of jobs. 
 The <b>smap</b> and <b>sview</b> commands graphically report system and 
diff --git a/doc/html/quickstart.shtml b/doc/html/quickstart.shtml
index 4b2400c9356b1a1302dd1316096552081e2724fb..7de0187abbebb09435c1b6adda9de8e4d3c5c597 100644
--- a/doc/html/quickstart.shtml
+++ b/doc/html/quickstart.shtml
@@ -17,8 +17,9 @@ work.</p>
 <h2>Architecture</h2>
 <p>As depicted in Figure 1, SLURM consists of a <b>slurmd</b> daemon running on 
 each compute node and a central <b>slurmctld</b> daemon running on a management node 
-(with optional fail-over twin). The user commands include: <b>srun</b>, 
-<b>sbcast</b>, <b>scancel</b>, 
+(with optional fail-over twin). 
+The <b>slurmd</b> daemons provide fault-tolerant hierarchical communications.
+The user commands include: <b>sbcast</b>, <b>scancel</b>, 
 <b>sinfo</b>, <b>srun</b>, <b>smap</b>, <b>squeue</b>, and <b>scontrol</b>.  
 All of the commands can run anywhere in the cluster.</p>
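The controller/compute-daemon layout described in both hunks — a central <b>slurmctld</b> on a management node, an optional fail-over twin, and a <b>slurmd</b> on every compute node — might be expressed in <tt>slurm.conf</tt> roughly as follows. This is a minimal sketch: the hostnames and node ranges are hypothetical, and parameter names should be checked against the slurm.conf man page for the SLURM version in use.

```
# slurm.conf sketch (hypothetical hostnames and node names)
ControlMachine=mgmt1          # node running the primary slurmctld
BackupController=mgmt2        # optional fail-over twin for slurmctld
NodeName=compute[01-16]       # compute nodes, each running slurmd
PartitionName=debug Nodes=compute[01-16] Default=YES
```

With a configuration along these lines, the user commands listed above (<b>sinfo</b>, <b>squeue</b>, <b>srun</b>, <b>scancel</b>, etc.) can contact <b>slurmctld</b> from any node in the cluster, which is what allows them to "run anywhere in the cluster."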