From 8f1b5e4cb01b1de727aebd54173ba1d0e5de222c Mon Sep 17 00:00:00 2001
From: Moe Jette <jette1@llnl.gov>
Date: Thu, 10 Jul 2008 20:13:56 +0000
Subject: [PATCH] minor tweaks in explanation of memory limit enforcement.

---
 doc/html/cons_res_share.shtml  | 2 ++
 doc/html/gang_scheduling.shtml | 2 ++
 doc/html/preempt.shtml         | 8 +++-----
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/doc/html/cons_res_share.shtml b/doc/html/cons_res_share.shtml
index b007fdf70ab..2221f4a2e58 100644
--- a/doc/html/cons_res_share.shtml
+++ b/doc/html/cons_res_share.shtml
@@ -217,6 +217,8 @@ allocate individual CPUs to jobs.</P>
 <P>Default and maximum values for memory on a per node or per CPU basis can 
 be configured using the following options: <CODE>DefMemPerCPU</CODE>,
 <CODE>DefMemPerNode</CODE>, <CODE>MaxMemPerCPU</CODE> and <CODE>MaxMemPerNode</CODE>.
+Users can specify their memory requirements at job submission time
+using the <CODE>--mem</CODE> or <CODE>--mem-per-cpu</CODE> option.
 Enforcement of a job's memory allocation is performed by the accounting 
 plugin, which periodically gathers data about running jobs. Set 
 <CODE>JobAcctGather</CODE> and <CODE>JobAcctFrequency</CODE> to 
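For reference, a minimal slurm.conf sketch tying these options together might look as follows; the parameter names are the ones discussed above, but the specific values and the choice of accounting plugin are illustrative assumptions, not part of this patch.
<PRE>
# Treat memory as a consumable resource so per-CPU limits are meaningful
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory

# Default and maximum memory per allocated CPU, in megabytes
DefMemPerCPU=1024
MaxMemPerCPU=2048

# Accounting plugin that periodically samples running jobs and
# enforces each job's memory allocation
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30
</PRE>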
diff --git a/doc/html/gang_scheduling.shtml b/doc/html/gang_scheduling.shtml
index b249cb6eb7f..e8d37467bb1 100644
--- a/doc/html/gang_scheduling.shtml
+++ b/doc/html/gang_scheduling.shtml
@@ -59,6 +59,8 @@ a memory requirement, we also recommend configuring
 It may also be desirable to configure
 <I>MaxMemPerCPU</I> (maximum memory per allocated CPU) or
 <I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
+Users can specify their memory requirements at job submission time
+using the <I>--mem</I> or <I>--mem-per-cpu</I> option.
 </LI>
 <LI>
 <B>JobAcctGatherType and JobAcctGatherFrequency</B>:
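As a sketch of the submission-side options referenced in the sentence added above: <I>--mem</I> expresses the requirement per node and <I>--mem-per-cpu</I> per allocated CPU, both in megabytes. The application and script names below are hypothetical.
<PRE>
# Request 512 MB for each CPU allocated to the job
srun --mem-per-cpu=512 -n 8 ./my_app

# Request 2048 MB on each node allocated to the batch job
sbatch --mem=2048 -N 2 job.sh
</PRE>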
diff --git a/doc/html/preempt.shtml b/doc/html/preempt.shtml
index f9fd8c0b9db..2f9bd34df49 100644
--- a/doc/html/preempt.shtml
+++ b/doc/html/preempt.shtml
@@ -50,7 +50,9 @@ a memory requirement, we also recommend configuring
 <I>DefMemPerNode</I> (default memory per allocated node). 
 It may also be desirable to configure 
 <I>MaxMemPerCPU</I> (maximum memory per allocated CPU) or 
-<I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
+<I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
+Users can specify their memory requirements at job submission time
+using the <I>--mem</I> or <I>--mem-per-cpu</I> option.
 </LI>
 <LI>
 <B>JobAcctGatherType and JobAcctGatherFrequency</B>:
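Once slurm.conf is in place, the limits and accounting settings the controller is actually using can be double-checked from the command line; one assumed-typical way is:
<PRE>
# Show the memory-limit and accounting settings currently in effect
scontrol show config | grep -i -e MemPer -e JobAcctGather
</PRE>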
@@ -161,7 +163,6 @@ Here are the Partition settings:
 [user@n16 ~]$ <B>grep PartitionName /shared/slurm/slurm.conf</B>
 PartitionName=active Priority=1 Default=YES Shared=FORCE:1 Nodes=n[12-16]
 PartitionName=hipri  Priority=2             Shared=FORCE:1 Nodes=n[12-16]
-[user@n16 ~]$ 
 </PRE>
 <P>
 The <I>runit.pl</I> script launches a simple load-generating app that runs
@@ -186,7 +187,6 @@ JOBID PARTITION     NAME   USER  ST   TIME  NODES NODELIST
   487    active runit.pl   user   R   0:05      1 n14
   488    active runit.pl   user   R   0:05      1 n15
   489    active runit.pl   user   R   0:04      1 n16
-[user@n16 ~]$
 </PRE>
 <P>
 Now submit a short-running 3-node job to the <I>hipri</I> partition:
@@ -202,7 +202,6 @@ JOBID PARTITION     NAME   USER  ST   TIME  NODES NODELIST
   486    active runit.pl   user   S   0:27      1 n13
   487    active runit.pl   user   S   0:26      1 n14
   490     hipri runit.pl   user   R   0:03      3 n[12-14]
-[user@n16 ~]$
 </PRE>
 <P>
 Job 490 in the <I>hipri</I> partition preempted jobs 485, 486, and 487 from
@@ -221,7 +220,6 @@ JOBID PARTITION     NAME   USER  ST   TIME  NODES NODELIST
   487    active runit.pl   user   R   0:29      1 n14
   488    active runit.pl   user   R   0:59      1 n15
   489    active runit.pl   user   R   0:58      1 n16
-[user@n16 ~]$
 </PRE>
 
 
-- 
GitLab