Commit 8f1b5e4c authored by Moe Jette

minor tweaks in explanation of memory limit enforcement.

parent 1b45fcf8
@@ -217,6 +217,8 @@ allocate individual CPUs to jobs.</P>
 <P>Default and maximum values for memory on a per node or per CPU basis can
 be configured using the following options: <CODE>DefMemPerCPU</CODE>,
 <CODE>DefMemPerNode</CODE>, <CODE>MaxMemPerCPU</CODE> and <CODE>MaxMemPerNode</CODE>.
+Users can use the <CODE>--mem</CODE> or <CODE>--mem-per-cpu</CODE> option
+at job submission time to specify their memory requirements.
 Enforcement of a job's memory allocation is performed by the accounting
 plugin, which periodically gathers data about running jobs. Set
 <CODE>JobAcctGather</CODE> and <CODE>JobAcctFrequency</CODE> to
......
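A minimal sketch of how the four limits named in this hunk might look in slurm.conf; the values are illustrative only and are not part of this commit:

  # illustrative slurm.conf excerpt -- values are examples, not defaults
  DefMemPerCPU=1024    # jobs that request no memory get 1024 MB per allocated CPU
  MaxMemPerCPU=2048    # no job may be allocated more than 2048 MB per CPU

Equivalent per-node limits would use DefMemPerNode and MaxMemPerNode instead.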
@@ -59,6 +59,8 @@ a memory requirement, we also recommend configuring
 It may also be desirable to configure
 <I>MaxMemPerCPU</I> (maximum memory per allocated CPU) or
 <I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
+Users can use the <I>--mem</I> or <I>--mem-per-cpu</I> option
+at job submission time to specify their memory requirements.
 </LI>
 <LI>
 <B>JobAcctGatherType and JobAcctGatherFrequency</B>:
......
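On the submission side, a hedged example of the two options this hunk documents (job.sh is a hypothetical batch script; both options take megabytes):

  $ sbatch --mem=4096 job.sh          # 4096 MB for the whole node
  $ sbatch --mem-per-cpu=512 job.sh   # 512 MB per allocated CPU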
@@ -50,7 +50,9 @@ a memory requirement, we also recommend configuring
 <I>DefMemPerNode</I> (default memory per allocated node).
 It may also be desirable to configure
 <I>MaxMemPerCPU</I> (maximum memory per allocated CPU) or
 <I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
+Users can use the <I>--mem</I> or <I>--mem-per-cpu</I> option
+at job submission time to specify their memory requirements.
 </LI>
 <LI>
 <B>JobAcctGatherType and JobAcctGatherFrequency</B>:
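For the JobAcctGatherType and JobAcctGatherFrequency item above, a sketch of the accounting side of enforcement; the plugin choice and interval here are assumptions:

  JobAcctGatherType=jobacct_gather/linux   # collect per-job usage from /proc on Linux nodes
  JobAcctGatherFrequency=30                # sample running jobs every 30 seconds

A smaller interval detects memory-limit violations sooner, at the cost of more polling overhead.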
@@ -161,7 +163,6 @@ Here are the Partition settings:
 [user@n16 ~]$ <B>grep PartitionName /shared/slurm/slurm.conf</B>
 PartitionName=active Priority=1 Default=YES Shared=FORCE:1 Nodes=n[12-16]
 PartitionName=hipri Priority=2 Shared=FORCE:1 Nodes=n[12-16]
-[user@n16 ~]$
 </PRE>
 <P>
 The <I>runit.pl</I> script launches a simple load-generating app that runs
@@ -186,7 +187,6 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST
 487 active runit.pl user R 0:05 1 n14
 488 active runit.pl user R 0:05 1 n15
 489 active runit.pl user R 0:04 1 n16
-[user@n16 ~]$
 </PRE>
 <P>
 Now submit a short-running 3-node job to the <I>hipri</I> partition:
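The submission command itself falls outside the diff context; something along these lines would produce the squeue output in the next hunk (the exact flags are an assumption):

  $ sbatch -N3 -p hipri runit.pl   # hypothetical: 3 nodes in the hipri partition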
@@ -202,7 +202,6 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST
 486 active runit.pl user S 0:27 1 n13
 487 active runit.pl user S 0:26 1 n14
 490 hipri runit.pl user R 0:03 3 n[12-14]
-[user@n16 ~]$
 </PRE>
 <P>
 Job 490 in the <I>hipri</I> partition preempted jobs 485, 486, and 487 from
@@ -221,7 +220,6 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST
 487 active runit.pl user R 0:29 1 n14
 488 active runit.pl user R 0:59 1 n15
 489 active runit.pl user R 0:58 1 n16
-[user@n16 ~]$
 </PRE>
......
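The preemption shown in these hunks is driven by the partition Priority values plus Shared=FORCE:1 under gang scheduling. A hedged sketch of the scheduler side, assuming sched/gang (inferred from the suspend/resume behavior; the time slice is illustrative):

  SchedulerType=sched/gang    # time-slice jobs that share the same resources
  SchedulerTimeSlice=30       # seconds per slice
  PartitionName=active Priority=1 Default=YES Shared=FORCE:1 Nodes=n[12-16]
  PartitionName=hipri Priority=2 Shared=FORCE:1 Nodes=n[12-16]

When a job arrives in the higher-priority hipri partition, the active-partition jobs on the overlapping nodes are suspended (state S in the second squeue listing) and resume (state R again) once it completes.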