Commit 22d674fa authored by Morris Jette

Remove doc refs to --job-mem

This job option was removed long ago. Remove references to it
from our web pages.
parent 6ace4268
@@ -66,6 +66,7 @@ SelectTypeParameter in the slurm.conf.</li>
 <pre>
 #
 # Excerpts from sample slurm.conf file
+#
 SelectType=select/cons_res
 SelectTypeParameters=CR_Core_Memory
@@ -117,9 +118,9 @@ hydra[12-16] 5 allNodes* ... 4 2:2:1 2007
 <p>Using select/cons_res plug-in with CR_Memory</p>
 <pre>
 Example:
-# srun -N 5 -n 20 --job-mem=1000 sleep 100 &  <-- running
-# srun -N 5 -n 20 --job-mem=10 sleep 100 &    <-- running
-# srun -N 5 -n 10 --job-mem=1000 sleep 100 &  <-- queued and waiting for resources
+# srun -N 5 -n 20 --mem=1000 sleep 100 &  <-- running
+# srun -N 5 -n 20 --mem=10 sleep 100 &    <-- running
+# srun -N 5 -n 10 --mem=1000 sleep 100 &  <-- queued and waiting for resources
 # squeue
 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
@@ -131,8 +132,8 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
 <p>Using select/cons_res plug-in with CR_Socket_Memory (2 sockets/node)</p>
 <pre>
 Example 1:
-# srun -N 5 -n 5 --job-mem=1000 sleep 100 &        <-- running
-# srun -n 1 -w hydra12 --job-mem=2000 sleep 100 &  <-- queued and waiting for resources
+# srun -N 5 -n 5 --mem=1000 sleep 100 &        <-- running
+# srun -n 1 -w hydra12 --mem=2000 sleep 100 &  <-- queued and waiting for resources
 # squeue
 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
@@ -140,8 +141,8 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
 1889 allNodes sleep sballe R 0:08 5 hydra[12-16]
 Example 2:
-# srun -N 5 -n 10 --job-mem=10 sleep 100 &  <-- running
-# srun -n 1 --job-mem=10 sleep 100 &        <-- queued and waiting for resources
+# srun -N 5 -n 10 --mem=10 sleep 100 &  <-- running
+# srun -n 1 --mem=10 sleep 100 &        <-- queued and waiting for resources
 # squeue
 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
@@ -152,9 +153,9 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
 <p>Using select/cons_res plug-in with CR_CPU_Memory (4 CPUs/node)</p>
 <pre>
 Example 1:
-# srun -N 5 -n 5 --job-mem=1000 sleep 100 &  <-- running
-# srun -N 5 -n 5 --job-mem=10 sleep 100 &    <-- running
-# srun -N 5 -n 5 --job-mem=1000 sleep 100 &  <-- queued and waiting for resources
+# srun -N 5 -n 5 --mem=1000 sleep 100 &  <-- running
+# srun -N 5 -n 5 --mem=10 sleep 100 &    <-- running
+# srun -N 5 -n 5 --mem=1000 sleep 100 &  <-- queued and waiting for resources
 # squeue
 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
@@ -163,8 +164,8 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
 1834 allNodes sleep sballe R 0:07 5 hydra[12-16]
 Example 2:
-# srun -N 5 -n 20 --job-mem=10 sleep 100 &  <-- running
-# srun -n 1 --job-mem=10 sleep 100 &        <-- queued and waiting for resources
+# srun -N 5 -n 20 --mem=10 sleep 100 &  <-- running
+# srun -n 1 --mem=10 sleep 100 &        <-- queued and waiting for resources
 # squeue
 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
@@ -340,6 +341,6 @@ one mpi process per node.</p>
 <p class="footer"><a href="#top">top</a></p>
-<p style="text-align:center;">Last modified 17 January 2014</p>
+<p style="text-align:center;">Last modified 21 April 2014</p>
 <!--#include virtual="footer.txt"-->
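The CR_Memory, CR_Socket_Memory, and CR_CPU_Memory examples in the hunks above all assume that memory has been configured as a consumable resource. A minimal slurm.conf sketch of that setup, using only the parameter names and CR_* values that appear in the excerpts above (choose the one value matching the example you are reproducing):

```
#
# Sketch of the configuration the srun examples above assume
#
SelectType=select/cons_res
# Track memory together with cores; the examples above also use
# CR_Memory, CR_Socket_Memory, and CR_CPU_Memory as alternatives.
SelectTypeParameters=CR_Core_Memory
```

With memory tracked this way, the --mem value of each job counts against the node's configured memory, which is what causes the "queued and waiting for resources" states shown in the examples.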
@@ -43,8 +43,8 @@ allowing a process to run on more than one logical processor.
 <a name=flags>
 <h2>Overview of new srun flags</h2></a>
-<p> Several new flags have been defined to allow users to
-better take advantage of the new architecture by
+<p> Several flags have been defined to allow users to
+better take advantage of this architecture by
 explicitly specifying the number of sockets, cores, and threads required
 by their application. Table 1 summarizes the new multi-core flags.
@@ -89,8 +89,12 @@ by their application. Table 1 summarizes the new multi-core flags.
 <b><a href="#srun_consres">Memory as a consumable resource</a></b>
 </td></tr>
 <tr>
-<td> --job-mem=<i>mem</i></td>
-<td> maximum amount of real memory per node required by the job.
+<td> --mem=<i>mem</i></td>
+<td> amount of real memory per node required by the job.
+</td></tr>
+<tr>
+<td> --mem-per-cpu=<i>mem</i></td>
+<td> amount of real memory per allocated CPU required by the job.
 </td></tr>
 <tr><td colspan=2>
 <b><a href="#srun_ntasks">Task invocation control</a></b>
@@ -125,17 +129,13 @@ by their application. Table 1 summarizes the new multi-core flags.
 Table 1: New srun flags to support the multi-core/multi-threaded environment
 </center>
-<p>It is important to note that many of these
-flags are only meaningful if the processes' affinity is set. In order for
-the affinity to be set, the task/affinity plugin must be first enabled in
-slurm.conf:
-<PRE>
-TaskPlugin=task/affinity # enable task affinity
-</PRE>
-<p>See the "Task Launch" section if generating slurm.conf via
-<a href="configurator.html">configurator.html</a>.
+<p>It is important to note that many of these flags are only meaningful if the
+processes have some affinity to specific CPUs and (optionally) memory.
+Task affinity is configured using the TaskPlugin parameter in the slurm.conf file.
+Several options exist for the TaskPlugin depending upon system architecture
+and available software; any of them except "task/none" will bind tasks to CPUs.
+See the "Task Launch" section if generating slurm.conf via
+<a href="configurator.html">configurator.html</a>.</p>
 <a name="srun_lowlevelmc">
 <h3>Low-level --cpu_bind=... - Explicit binding interface</h3></a>
@@ -150,9 +150,11 @@ TaskPlugin=task/affinity # enable task affinity
 no[ne]              don't bind tasks to CPUs (default)
 rank                bind by task rank
 map_cpu:<i>&lt;list&gt;</i>  specify a CPU ID binding for each task
-         where <i>&lt;list&gt;</i> is <i>&lt;cpuid1&gt;,&lt;cpuid2&gt;,...&lt;cpuidN&gt;</i>
-mask_cpu:<i>&lt;list&gt;</i> specify a CPU ID binding mask for each task
-         where <i>&lt;list&gt;</i> is <i>&lt;mask1&gt;,&lt;mask2&gt;,...&lt;maskN&gt;</i>
+         where <i>&lt;list&gt;</i> is
+         <i>&lt;cpuid1&gt;,&lt;cpuid2&gt;,...&lt;cpuidN&gt;</i>
+mask_cpu:<i>&lt;list&gt;</i> specify a CPU ID binding mask for each
+         task where <i>&lt;list&gt;</i> is
+         <i>&lt;mask1&gt;,&lt;mask2&gt;,...&lt;maskN&gt;</i>
 sockets             auto-generated masks bind to sockets
 cores               auto-generated masks bind to cores
 threads             auto-generated masks bind to threads
@@ -274,18 +276,17 @@ to -m block:cyclic with --cpu_bind=thread.</p>
 <a name="srun_consres">
 <h3>Memory as a Consumable Resource</h3></a>
-<p>The --job-mem flag specifies the maximum amount of memory in MB
+<p>The --mem flag specifies the maximum amount of memory in MB
 needed by the job per node. This flag is used to support the memory
 as a consumable resource allocation strategy.</p>
 <PRE>
---job-mem=<i>MB</i> maximum amount of real memory per node
+--mem=<i>MB</i>     maximum amount of real memory per node
            required by the job.
-           --mem >= --job-mem if --mem is specified.
 </PRE>
 <p>This flag allows the scheduler to co-allocate jobs on specific nodes
-given that their added memory requirements do not exceed the amount
+given that their added memory requirements do not exceed the total amount
 of memory on the nodes.</p>
@@ -296,7 +297,7 @@ SelectType=select/cons_res # enable consumable resources
 SelectTypeParameters=CR_Memory # memory as a consumable resource
 </PRE>
-<p> Using memory as a consumable resource can also be combined with
+<p> Using memory as a consumable resource is typically combined with
 the CPU, Socket, or Core consumable resources using SelectTypeParameters
 values of: CR_CPU_Memory, CR_Socket_Memory or CR_Core_Memory
@@ -727,7 +728,7 @@ parts* 4 2:2:1 2 2 1
 the following identifiers are available:</p>
 <PRE>
-%m  Minimum size of memory (in MB) requested by the job
+%m  Size of memory (in MB) requested by the job
 %H  Number of requested sockets per node
 %I  Number of requested cores per socket
 %J  Number of requested threads per core
@@ -755,45 +756,16 @@ JOBID ST TIME NODES SOCKETS CORES THREADS S:C:T NODELIST(REASON)
    16  R 1:26     1       2     2       1 2:2:1 hydra15
 </PRE>
-<p>
-The display of the minimum size of memory requested by the job has
-been extended to also show the amount of memory requested by
-the --job-mem flag. If --job-mem and --mem are set to the
-same value, a single number is displayed for MIN_MEMORY. Otherwise
-a range is reported:
-<p>submit job 21:
-<pre>
-% srun sleep 100 &
-</pre>
-<p>submit job 22:
-<pre>
-% srun --job-mem=2048MB --mem=1024MB sleep 100 &
-srun: mem < job-mem - resizing mem to be equal to job-mem
-</pre>
-<p>submit job 23:
-<pre>
-% srun --job-mem=2048MB --mem=10240MB sleep 100 &
-</pre>
-<pre>
-% squeue -o "%.5i %.2t %.4M %.5D %m"
-JOBID ST TIME NODES MIN_MEMORY
-   21 PD 0:00     1 0-1
-   22 PD 0:00     1 2048
-   23 PD 0:00     1 2048-10240
-   17  R 1:12     1 0
-   18  R 1:11     1 0
-   19  R 1:11     1 0
-   20  R 1:10     1 0
-</pre>
-<p>In the above examples, note that once a job starts running, the
-MIN_* constraints are all reported as zero regardless of what
-their initial values were (since they are meaningless once
-the job starts running).
+<p>The squeue command can also display the memory size of jobs, for example:</p>
+<PRE>
+% sbatch --mem=123 tmp
+Submitted batch job 24
+% squeue -o "%.5i %.2t %.4M %.5D %m"
+JOBID ST TIME NODES MIN_MEMORY
+   24  R 0:05     1 123
+</PRE>
 <p>See also 'squeue --help' and 'man squeue'</p>
@@ -977,7 +949,7 @@ using NodeName:
 </PRE>
 <!-------------------------------------------------------------------------->
-<p style="text-align:center;">Last modified 22 July 2010</p>
+<p style="text-align:center;">Last modified 21 April 2014</p>
 <!--#include virtual="footer.txt"-->
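One hunk above replaces the explicit TaskPlugin excerpt with prose noting that any TaskPlugin value other than "task/none" binds tasks to CPUs. For reference, the removed excerpt corresponds to a slurm.conf line like the following (a sketch based only on the pre-change text shown in the diff):

```
# Enable task-to-CPU binding so --cpu_bind and the other
# multi-core flags described above take effect
TaskPlugin=task/affinity
```

Without such a setting (i.e. with "task/none"), the binding flags in Table 1 have no effect on task placement.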