Commit a1510a83 authored by Tim Wickberg

Update docs for Shared -> OverSubscribe and Priority -> PriorityTier

Also remove the misleading note "Unless PreemptType=preempt/partition_prio
the partition Priority is not critical"; it does still impact scheduling
when nodes overlap partitions.
parent 982dc63d
@@ -39,6 +39,8 @@ documents those changes that are of interest to users and administrators.
 -- Configuration parameter "CpuFreqDef" used to set default governor for job
    step not specifying --cpu-freq (previously the parameter was unused).
 -- Fix sshare -o<format> to correctly display new lengths.
+-- Update documentation to rename Shared option to OverSubscribe.
+-- Update documentation to rename partition Priority option to PriorityTier.
 * Changes in Slurm 16.05.0pre2
 ==============================
......
@@ -30,7 +30,7 @@ as well as any combination of the logical processors with Memory:</li>
 <li><b>Socket</b> (<i>CR_Socket</i>): Socket as a consumable resource.</li>
 <li/><b>Core</b> (<i>CR_Core</i>): Core as a consumable resource.</li>
 <li><b>Memory</b> (<i>CR_Memory</i>) Memory <u>only</u> as a
-consumable resource. Note! CR_Memory assumes Shared=Yes</li>
+consumable resource. Note! CR_Memory assumes OverSubscribe=Yes</li>
 <li><b>Socket and Memory</b> (<i>CR_Socket_Memory</i>): Socket
 and Memory as consumable resources.</li>
 <li><b>Core and Memory</b> (<i>CR_Core_Memory</i>): Core and
@@ -57,8 +57,8 @@ resource. It is important to specify enough memory since Slurm will not allow
 the application to use more than the requested amount of real memory. The
 default value for --mem is 1 MB. see srun man page for more details.</li>
-<li><b>All CR_s assume Shared=No</b> or Shared=Force EXCEPT for
-<b>CR_MEMORY</b> which <b>assumes Shared=Yes</b></li>
+<li><b>All CR_s assume OverSubscribe=No</b> or OverSubscribe=Force EXCEPT for
+<b>CR_MEMORY</b> which <b>assumes OverSubscribe=Yes</b></li>
 <li>The consumable resource plugin is enabled via SelectType and
 SelectTypeParameter in the slurm.conf.</li>
......
@@ -563,10 +563,10 @@ PartitionName=DEFAULT MaxNodes=178
 PartitionName=DEFAULT OverSubscribe=EXCLUSIVE State=UP DefaultTime=60
 # "User Support" partition with a higher priority
-PartitionName=usup Hidden=YES Priority=10 MaxTime=720 AllowGroups=staff
+PartitionName=usup Hidden=YES PriorityTier=10 MaxTime=720 AllowGroups=staff
 # normal partition available to all users
-PartitionName=day Default=YES Priority=1 MaxTime=01:00:00
+PartitionName=day Default=YES PriorityTier=1 MaxTime=01:00:00
 </pre>
 <p>Slurm supports an optional <i>cray.conf</i> file containing Cray-specific
......
@@ -132,18 +132,16 @@ select/cons_res plugin.</LI>
 </UL>
 </LI>
 <LI>
-<B>Priority</B>: Configure the partition's <I>Priority</I> setting relative to
-other partitions to control the preemptive behavior when
+<B>PriorityTier</B>: Configure the partition's <I>PriorityTier</I> setting
+relative to other partitions to control the preemptive behavior when
 <I>PreemptType=preempt/partition_prio</I>.
 This option is not relevant if <I>PreemptType=preempt/qos</I>.
 If two jobs from two
 different partitions are allocated to the same resources, the job in the
-partition with the greater <I>Priority</I> value will preempt the job in the
-partition with the lesser <I>Priority</I> value. If the <I>Priority</I> values
-of the two partitions are equal then no preemption will occur. The default
-<I>Priority</I> value is 1.
-<BR><B>NOTE:</B> Unless <I>PreemptType=preempt/partition_prio</I> the
-partition <I>Priority</I> is not critical.
+partition with the greater <I>PriorityTier</I> value will preempt the job in the
+partition with the lesser <I>PriorityTier</I> value. If the <I>PriorityTier</I>
+values of the two partitions are equal then no preemption will occur. The
+default <I>PriorityTier</I> value is 1.
 </LI>
 <LI>
 <B>OverSubscribe</B>: Configure the partition's <I>OverSubscribe</I> setting to
@@ -167,7 +165,7 @@ with <I>SelectType=cons_res</I>.
 To enable preemption after making the configuration changes described above,
 restart Slurm if it is already running. Any change to the plugin settings in
 Slurm requires a full restart of the daemons. If you just change the partition
-<I>Priority</I> or <I>OverSubscribe</I> setting, this can be updated with
+<I>PriorityTier</I> or <I>OverSubscribe</I> setting, this can be updated with
 <I>scontrol reconfig</I>.
 </P>
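The partition-priority preemption rule documented in this commit (greater PriorityTier preempts lesser, equal tiers never preempt, default value 1) can be stated precisely; here is a minimal sketch of that rule in Python. This is an illustration only, not Slurm's implementation, and the tier values are hypothetical:

```python
# Sketch of the rule under PreemptType=preempt/partition_prio:
# a job whose partition has the greater PriorityTier preempts a job
# in a lower-tier partition on the same resources; equal tiers never
# preempt. The documented default PriorityTier is 1.

DEFAULT_PRIORITY_TIER = 1

def preempts(tier_a, tier_b=DEFAULT_PRIORITY_TIER):
    """True if a job in a partition with PriorityTier tier_a would
    preempt a job in a partition with PriorityTier tier_b."""
    return tier_a > tier_b

print(preempts(10, 1))  # True: higher tier preempts the default tier
print(preempts(2, 2))   # False: equal tiers, no preemption
```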
@@ -280,8 +278,8 @@ Here are the Partition settings:
 <PRE>
 [user@n16 ~]$ <B>grep PartitionName /shared/slurm/slurm.conf</B>
 PartitionName=DEFAULT OverSubscribe=FORCE:1 Nodes=n[12-16]
-PartitionName=active Priority=1 Default=YES
-PartitionName=hipri Priority=2
+PartitionName=active PriorityTier=1 Default=YES
+PartitionName=hipri PriorityTier=2
 </PRE>
 <P>
 The <I>runit.pl</I> script launches a simple load-generating app that runs
@@ -348,9 +346,9 @@ job preemption mechanisms.
 </P>
 <PRE>
 # Excerpt from slurm.conf
-PartitionName=low Nodes=linux Default=YES OverSubscribe=NO Priority=10 PreemptMode=requeue
-PartitionName=med Nodes=linux Default=NO OverSubscribe=FORCE:1 Priority=20 PreemptMode=suspend
-PartitionName=hi Nodes=linux Default=NO OverSubscribe=FORCE:1 Priority=30 PreemptMode=off
+PartitionName=low Nodes=linux Default=YES OverSubscribe=NO PriorityTier=10 PreemptMode=requeue
+PartitionName=med Nodes=linux Default=NO OverSubscribe=FORCE:1 PriorityTier=20 PreemptMode=suspend
+PartitionName=hi Nodes=linux Default=NO OverSubscribe=FORCE:1 PriorityTier=30 PreemptMode=off
 </PRE>
 <PRE>
 $ sbatch tmp
......
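For illustration, the renamed `PartitionName` lines in the slurm.conf excerpt above can be read mechanically. A minimal sketch, assuming only the simple whitespace-separated key=value layout shown (this is not a full slurm.conf parser):

```python
def parse_partition(line):
    # Split a "PartitionName=... Key=Value ..." line into a dict.
    # Handles only whitespace-separated key=value pairs; values such
    # as "FORCE:1" are kept verbatim.
    return dict(pair.split("=", 1) for pair in line.split())

conf = """\
PartitionName=low Nodes=linux Default=YES OverSubscribe=NO PriorityTier=10 PreemptMode=requeue
PartitionName=med Nodes=linux Default=NO OverSubscribe=FORCE:1 PriorityTier=20 PreemptMode=suspend
PartitionName=hi Nodes=linux Default=NO OverSubscribe=FORCE:1 PriorityTier=30 PreemptMode=off
"""

parts = {p["PartitionName"]: p for p in map(parse_partition, conf.splitlines())}
print(parts["med"]["PreemptMode"])  # suspend
print(int(parts["hi"]["PriorityTier"]) > int(parts["low"]["PriorityTier"]))  # True
```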
@@ -183,13 +183,13 @@ below. See the man page for more information.</p>
 <pre>
 adev0: scontrol show partition
 PartitionName=debug TotalNodes=5 TotalCPUs=40 RootOnly=NO
-Default=YES OverSubscribe=FORCE:4 Priority=1 State=UP
+Default=YES OverSubscribe=FORCE:4 PriorityTier=1 State=UP
 MaxTime=00:30:00 Hidden=NO
 MinNodes=1 MaxNodes=26 DisableRootJobs=NO AllowGroups=ALL
 Nodes=adev[1-5] NodeIndices=0-4
 PartitionName=batch TotalNodes=10 TotalCPUs=80 RootOnly=NO
-Default=NO OverSubscribe=FORCE:4 Priority=1 State=UP
+Default=NO OverSubscribe=FORCE:4 PriorityTier=1 State=UP
 MaxTime=16:00:00 Hidden=NO
 MinNodes=1 MaxNodes=26 DisableRootJobs=NO AllowGroups=ALL
 Nodes=adev[6-15] NodeIndices=5-14
......
@@ -430,7 +430,7 @@ the command is executed.
 The job allocation can not share nodes with other running jobs (or just other
 users with the "=user" option or with the "=mcs" option).
 The default shared/exclusive behavior depends on system configuration and the
-partition's \fBShared\fR option takes precedence over the job's option.
+partition's \fBOverSubscribe\fR option takes precedence over the job's option.
 .TP
 \fB\-F\fR, \fB\-\-nodefile\fR=<\fInode file\fR>
......
@@ -468,7 +468,7 @@ See the \fB\-\-input\fR option for filename specification options.
 The job allocation can not share nodes with other running jobs (or just other
 users with the "=user" option or with the "=mcs" option).
 The default shared/exclusive behavior depends on system configuration and the
-partition's \fBShared\fR option takes precedence over the job's option.
+partition's \fBOverSubscribe\fR option takes precedence over the job's option.
 .TP
 \fB\-\-export\fR=<\fIenvironment variables | ALL | NONE\fR>
......
@@ -1263,11 +1263,7 @@ Possible values are "YES" and "NO".
 .TP
 \fIShared\fP=<yes|no|exclusive|force>[:<job_count>]
 See \fIOverSubscribe\fP option above.
-Specify if nodes in this partition can be shared by multiple jobs.
-Possible values are "YES", "NO", "EXCLUSIVE" and "FORCE".
-An optional job count specifies how many jobs can be allocated to use
-each resource.
+Renamed to \fIOverSubscribe\fP, see option descriptions above.
 .TP
 \fIState\fP=<up|down|drain|inactive>
......
@@ -667,7 +667,7 @@ allocations.
 When used to initiate a job, the job allocation cannot share nodes with
 other running jobs (or just other users with the "=user" option or "=mcs" option).
 The default shared/exclusive behavior depends on system configuration and the
-partition's \fBShared\fR option takes precedence over the job's option.
+partition's \fBOverSubscribe\fR option takes precedence over the job's option.
 This option can also be used when initiating more than one job step within
 an existing resource allocation, where you want separate processors to
......