diff --git a/doc/html/high_throughput.shtml b/doc/html/high_throughput.shtml
index c9634ca4fb66363a9343aef3163817313c2eec7b..404f4892cc4784e0cd96c76c1cb29c3e0c629d14 100644
--- a/doc/html/high_throughput.shtml
+++ b/doc/html/high_throughput.shtml
@@ -138,11 +138,18 @@ scheduling multiple jobs simultaneously may be possible.
 This option may improve system responsiveness when large numbers of jobs
 (many hundreds) are submitted at the same time, but it will delay the
 initiation time of individual jobs.</li>
-<li>A variation of <b>defer</b> would be to configure <b>default_queue_depth</b>
-to a relatively small number to avoid attempting to schedule large numbers of
-jobs every time some job completes or another routine action occurs. (NOTE:
-the default value of <b>default_queue_depth</b> should be fine in most
-cases).</li>
+<li><b>sched_min_interval</b> is yet another configuration parameter to control
+how frequently the scheduling logic runs. It can still be triggered on each
+job submission, job termination, or other state change which could permit a new
+job to be started. However, that triggering does not cause the scheduling logic
+to run immediately; the logic runs no more frequently than the configured
+<b>sched_min_interval</b>.
+For example, if sched_min_interval=2 (seconds) and 100 jobs are submitted within
+a 2 second window, the scheduling logic will be executed once rather than
+100 times (with the default configuration).</li>
+<li>Besides controlling how frequently the scheduling logic is executed, the
+<b>default_queue_depth</b> configuration parameter controls how many jobs are
+considered for starting in each scheduler iteration. The default value of
+default_queue_depth is 100 (jobs), which should be fine in most cases.</li>
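+<li>Combining the parameters above, a slurm.conf entry might look like the
+following (an illustrative sketch only; tune the values for your workload):
+<pre>
+SchedulerParameters=defer,default_queue_depth=100,sched_min_interval=2
+</pre>
+</li>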
 <li>The <i>sched/backfill</i> plugin has relatively high overhead if used with
 large numbers of jobs. Configuring <b>bf_max_job_test</b> to a modest size (say 100
 jobs or less) and <b>bf_interval</b> to 30 seconds or more will limit the
@@ -211,6 +218,6 @@ speedup can be achieved by setting the CommitDelay option in the
 <li><b>PurgeSuspendAfter</b>=1month</li>
 </ul>
 
-<p style="text-align:center;">Last modified 15 December 2015</p>
+<p style="text-align:center;">Last modified 28 December 2015</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/man/man5/slurm.conf.5 b/doc/man/man5/slurm.conf.5
index 6e99034eaabfc7d314cfe3c704dbfed985f8fe05..e7b2a88990d8d4eb8e9f1bc5edc59f41341ec35f 100644
--- a/doc/man/man5/slurm.conf.5
+++ b/doc/man/man5/slurm.conf.5
@@ -2503,18 +2503,12 @@ The default value is 2,000,000 microseconds (2 seconds).
 .TP
 \fBdefault_queue_depth=#\fR
 The default number of jobs to attempt scheduling (i.e. the queue depth) when a
-running job completes or other routine actions occur. The full queue will be
-tested on a less frequent basis as defined by the \fBsched_interval\fR option
-described below. The default value is 100.
+running job completes or other routine actions occur. The frequency with
+which the scheduler runs may be limited by using the \fBdefer\fR or
+\fBsched_min_interval\fR parameters described below.
+The full queue will be tested on a less frequent basis as defined by the
+\fBsched_interval\fR option described below. The default value is 100.
 See the \fBpartition_job_depth\fR option to limit depth by partition.
-In the case of large clusters (more than 1000 nodes), configuring a relatively
-small value may be desirable.
-Specifying a large value (say 1000 or higher) can be expected to result in
-poor system responsiveness since this scheduling logic will not release
-locks for other events to occur.
-It would be better to let the backfill scheduler process a larger number of jobs
-(see \fBbf_max_job_test\fR, \fBbf_continue\fR  and other options here for more
-information).
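+.br
+For example (illustrative values only, combined with other scheduler options
+as needed):
+.br
+\fBSchedulerParameters=default_queue_depth=100,sched_min_interval=2\fR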
 .TP
 \fBdefer\fR
 Setting this option will avoid attempting to schedule each job