Commit 0ed4efa0 authored by Moe Jette

clarify -c option inconsistencies with job/step allocations.

add missing option descriptions for: --sockets-per-node, --cores-per-socket, --threads-per-core
parent d7e96305
@@ -145,7 +145,8 @@ indicates that the job requires 16 nodes and that at least four of those
nodes must have the feature "graphics."
Constraints with node counts may only be combined with AND operators.
If no nodes have the requested features, then the job will be rejected
by the slurm job manager. This option is used for job allocations, but ignored
for job step allocations.
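A sketch of such a request (illustrative only; it assumes a feature named
"graphics" has been associated with at least four nodes in slurm.conf, and
"a.out" stands in for the user's program):
.nf

  srun \-N16 \-\-constraint="graphics*4" a.out

.fi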
.TP
\fB\-\-contiguous\fR
@@ -154,6 +155,12 @@ Not honored with the \fBtopology/tree\fR or \fBtopology/3d_torus\fR
plugins, both of which can modify the node ordering.
Not honored for a job step's allocation.
.TP
\fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
Allocate the specified number of cores per socket. This may be used to avoid
allocating more than one core per socket on multi\-core sockets. This option
is used for job allocations, but ignored for job step allocations.
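An illustrative sketch of this option (the program name is a placeholder and
the node is assumed to have multi\-core sockets), allocating two cores in each
socket of a single node:
.nf

  srun \-N1 \-\-cores\-per\-socket=2 a.out

.fi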
.TP
\fB\-\-cpu_bind\fR=[{\fIquiet,verbose\fR},]\fItype\fR
Bind tasks to CPUs. Used only when the task/affinity plugin is enabled.
@@ -242,7 +249,7 @@ Bind to a NUMA locality domain by rank
Bind by mapping NUMA locality domain IDs to tasks as specified where
<list> is <ldom1>,<ldom2>,...<ldomN>.
The locality domain IDs are interpreted as decimal values unless they are
preceded with '0x' in which case they are interpreted as hexadecimal values.
Not supported unless the entire node is allocated to the job.
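A hedged example of this syntax (task count, domain IDs, and program name are
placeholders; the whole node is assumed to be allocated to the job, as
required above), binding four tasks alternately to locality domains 0 and 1:
.nf

  srun \-n4 \-\-cpu_bind=verbose,map_ldom:0,1,0,1 a.out

.fi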
.TP
.B mask_ldom:<list> .B mask_ldom:<list>
@@ -291,6 +298,12 @@ unable to execute more than a total of 4 tasks.
This option may also be useful to spawn tasks without allocating
resources to the job step from the job's allocation when running
multiple job steps with the \fB\-\-exclusive\fR option.
\fBWARNING\fR: There are configurations and options interpreted differently by
job and job step requests, which can result in inconsistencies for this option.
For example \fIsrun \-c2 \-\-threads\-per\-core=1 prog\fR may allocate two
cores for the job, but if each of those cores contains two threads, the job
allocation will include four CPUs. The job step allocation will then launch two
threads per CPU for a total of two tasks.
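A minimal sketch of running multiple job steps with \fB\-\-exclusive\fR, as
mentioned above (program names are placeholders; the commands are assumed to
run inside an existing job allocation, for example one obtained with salloc):
.nf

  srun \-n2 \-\-exclusive prog1 &
  srun \-n2 \-\-exclusive prog2 &
  wait

.fi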
.TP
\fB\-d\fR, \fB\-\-dependency\fR=<\fIdependency_list\fR>
@@ -359,7 +372,7 @@ parameter in slurm.conf.
When used to initiate a job, the job allocation cannot share nodes with
other running jobs. This is the opposite of \-\-share; whichever option
is seen last on the command line will win. (The default shared/exclusive
behavior depends on system configuration.)
This option can also be used when initiating more than one job step within
an existing resource allocation and you want separate processors to an existing resource allocation and you want separate processors to
@@ -837,7 +850,7 @@ The default value is specified by the system configuration parameter
.TP
\fB\-p\fR, \fB\-\-partition\fR=<\fIpartition name\fR>
Request a specific partition for the resource allocation. If not specified,
the default behavior is to allow the slurm controller to select the default
partition as designated by the system administrator.
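An illustrative request (it assumes a partition named "debug" has been defined
by the administrator):
.nf

  srun \-p debug \-N2 hostname

.fi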
.TP
@@ -973,6 +986,12 @@ value between 0 [quiet, only errors are displayed] and 4 [verbose
operation]. The slurmd debug information is copied onto the stderr of
the job. By default only errors are displayed.
.TP
\fB\-\-sockets\-per\-node\fR=<\fIsockets\fR>
Allocate the specified number of sockets per node. This may be used to avoid
allocating more than one task per node on multi\-socket nodes. This option
is used for job allocations, but ignored for job step allocations.
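An illustrative sketch of this option (the program name is a placeholder and
the node is assumed to have two sockets), allocating both sockets of a single
node:
.nf

  srun \-N1 \-\-sockets\-per\-node=2 a.out

.fi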
.TP
\fB\-T\fR, \fB\-\-threads\fR=<\fInthreads\fR>
Allows limiting the number of concurrent threads used to
@@ -1034,6 +1053,12 @@ Acceptable time formats include "minutes", "minutes:seconds",
"hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and
"days\-hours:minutes:seconds".
.TP
\fB\-\-threads\-per\-core\fR=<\fIthreads\fR>
Allocate the specified number of threads per core. This may be used to avoid
allocating more than one task per core on hyper\-threaded nodes. This option
is used for job allocations, but ignored for job step allocations.
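An illustrative sketch of the use case above, requesting a single thread per
core on hyper\-threaded nodes (task count and program name are placeholders):
.nf

  srun \-n8 \-\-threads\-per\-core=1 a.out

.fi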
.TP
\fB\-\-tmp\fR=<\fIMB\fR>
Specify a minimum amount of temporary disk space.
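For example, the following illustrative request asks for nodes providing at
least 2048 MB of temporary disk space (the program name is a placeholder):
.nf

  srun \-\-tmp=2048 a.out

.fi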
@@ -1763,7 +1788,7 @@ dev[7\-10]
.fi
.PP
The following script runs two job steps in parallel
within an allocated set of nodes.
.nf
...