Commit 627e6498 authored by Moe Jette's avatar Moe Jette
Initial draft of scontrol man page

parent 1b69e365
.TH SCONTROL "1" "July 2002" "scontrol 0.1" "Slurm components"
.SH "NAME"
scontrol \- view and modify Slurm configuration and state
.SH "SYNOPSIS"
\fBscontrol\fR [\fIOPTIONS\fR...] [\fICOMMAND\fR...]
.SH "DESCRIPTION"
\fBscontrol\fR is used to view or modify Slurm configuration including: job,
job step, node, partition, and overall system configuration. Most of the
commands can only be executed by user root. If an attempt to view or modify
configuration information is made by an unauthorized user, an error message
will be printed and the requested action will not occur. If no command is
entered on the command line, \fBscontrol\fR will operate in an interactive
mode and prompt for input. It will continue prompting for input and executing
commands until explicitly terminated. If a command is entered on the command
line, \fBscontrol\fR will execute that command and terminate. All commands
and options are case-insensitive, although node names and partition names
are case-sensitive (node names "LX" and "lx" are distinct).
.SH "OPTIONS"
.TP
\fB-h\fR
Print a help message describing the usage of scontrol.
.TP
\fB-q\fR
Print no warning or informational messages, only fatal error messages.
.TP
\fB-v\fR
Print detailed event logging. This includes time-stamps on data structures,
record counts, etc.
.SH "COMMANDS"
.TP
\fIexit\fP
Terminate the execution of scontrol.
.TP
\fIhelp\fP
Display a description of scontrol options and commands.
.TP
\fIquiet\fP
Print no warning or informational messages, only fatal error messages.
.TP
\fIquit\fP
Terminate the execution of scontrol.
.TP
\fIreconfigure\fP
Instruct the Slurm controller to re-read its configuration file.
This mechanism would be used to register the physical addition or removal of
nodes from the cluster or recognize the change of a node's configuration,
such as the addition of memory or processors. Running jobs continue execution.
.TP
\fIshow\fP \fIENTITY\fP \fIID\fP
Display the state of the specified entity with the specified identification.
\fIENTITY\fP may be \fIbuild\fP, \fIjob\fP, \fInode\fP or \fIpartition\fP.
\fIID\fP can be used to identify a specific element of the identified
entity: the build parameter, job ID, node name or partition name for
entities \fIbuild\fP, \fIjob\fP, \fInode\fP and \fIpartition\fP respectively.
Multiple node names may be specified using simple regular expressions
(e.g. "lx[10-20]"). All other \fIID\fP values must identify a single
element. By default, all elements of the entity type specified are printed.
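The bracketed node-name syntax above can be illustrated with a small sketch. The `expand_hostlist` helper below is hypothetical, not part of Slurm or scontrol; it handles only a single numeric range with optional zero padding (e.g. "lx[0031-0040]"), whereas Slurm's actual parser also accepts comma-separated lists of names and ranges.

```python
import re

def expand_hostlist(expr):
    """Expand a simple bracketed expression like "lx[10-20]" into the
    node names it denotes. Illustrative sketch only, not Slurm's parser."""
    m = re.fullmatch(r"(\w+)\[(\d+)-(\d+)\]", expr)
    if not m:
        return [expr]          # a plain node name expands to itself
    prefix, lo, hi = m.group(1), m.group(2), m.group(3)
    width = len(lo)            # preserve zero padding, e.g. lx[0031-0040]
    return [f"{prefix}{i:0{width}d}" for i in range(int(lo), int(hi) + 1)]

print(expand_hostlist("lx[10-20]"))       # lx10, lx11, ..., lx20
print(expand_hostlist("lx[0031-0040]"))   # lx0031, ..., lx0040
```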
.TP
\fIupdate\fP \fISPECIFICATION\fP
Update job, node or partition configuration per the supplied specification.
\fISPECIFICATION\fP is in the same format as the Slurm configuration file
and the output of the \fIshow\fP command described above. It may be convenient
to execute the \fIshow\fP command on the specific entity
you wish to update, then use cut-and-paste tools to supply updated configuration
values to the \fIupdate\fP command. Note that while most configuration values can be
changed using this command, not all can be. In
particular, the hardware configuration of a node or the physical addition or
removal of nodes from the cluster may only be accomplished through editing
the Slurm configuration file and executing the \fIreconfigure\fP command
(described above).
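The SPECIFICATION format is a space-separated list of Keyword=Value pairs, as in the examples below. This toy parser (a hypothetical `parse_specification` helper, not scontrol's real one, which also validates keywords and value types) sketches how such a line decomposes:

```python
def parse_specification(spec):
    """Split a "Keyword=Value Keyword=Value ..." line into a dict.
    Illustrative sketch of the update SPECIFICATION format only."""
    pairs = {}
    for token in spec.split():
        key, _, value = token.partition("=")
        pairs[key] = value
    return pairs

print(parse_specification("JobId=65539 TimeLimit=200 Priority=500"))
```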
.TP
\fIverbose\fP
Print detailed event logging. This includes time-stamps on data structures,
record counts, etc.
.TP
\fIversion\fP
Display the version number of scontrol being executed.
.SH "EXAMPLE"
.nf
# scontrol
scontrol: show part class
PartitionName=class MaxTime=30 MaxNodes=2 TotalNodes=10 TotalCPUs=160 Key=NO
Default=NO Shared=NO State=UP Nodes=lx[0031-0040] AllowGroups=students
scontrol: update PartitionName=class MaxTime=300 MaxNodes=4
scontrol: show job 65539
JobId=65539 UserId=1500 JobState=PENDING TimeLimit=100 Priority=100 Partition=batch
Name=job01 NodeList=(null) StartTime=0 EndTime=0 Shared=0
ReqProcs=1000 ReqNodes=400 Contiguous=1 MinProcs=4 MinMemory=1024 MinTmpDisk=2034
ReqNodeList=lx[3000-3003] Features=(null) JobScript=/bin/hostname
scontrol: update JobId=65539 TimeLimit=200 Priority=500
scontrol: quit
.fi
.SH "SEE ALSO"
\fBslurm_load_ctl_conf\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_load_node\fR(3),
\fBslurm_load_partitions\fR(3),
\fBslurm_reconfigure\fR(3),
\fBslurm_update_job\fR(3), \fBslurm_update_node\fR(3), \fBslurm_update_partition\fR(3)