tud-zih-energy / Slurm / Commits

Commit 5b1764a8, authored 19 years ago by Moe Jette
Update man pages for new bgl code
Parent: 134655ce
Showing 3 changed files, with 33 additions and 32 deletions:

doc/man/man1/sinfo.1 (+11, -11)
doc/man/man1/smap.1 (+2, -2)
doc/man/man1/squeue.1 (+20, -19)
doc/man/man1/sinfo.1 (+11, -11)
-.TH SINFO "1" "February 2006" "sinfo 1.0" "Slurm components"
+.TH SINFO "1" "March 2006" "sinfo 1.1" "Slurm components"
 .SH "NAME"
 sinfo \- view information about SLURM nodes and partitions.
@@ -249,7 +249,7 @@ partitions without a job time limit.
 \fBMEMORY\fR
 Size of real memory in megabytes on these nodes.
 .TP
-\fBNODELIST\fR
+\fBNODELIST\fR or \fBBP_LIST\fR (BlueGene systems only)
 Names of nodes associated with this configuration/partition.
 .TP
 \fBNODES\fR
@@ -297,14 +297,14 @@ any new work. If the node remains non\-responsive, it will
 be placed in the \fBDOWN\fR state (except in the case of
 \fBDRAINED\fR, \fBDRAINING\fR, or \fBCOMPLETING\fR nodes).
 .TP 12
-ALLOCATED
+\fBALLOCATED\fR
 The node has been allocated to one or more jobs.
 .TP
-ALLOCATED+
+\fBALLOCATED+\fR
 The node is allocated to one or more active jobs plus
 one or more jobs are in the process of COMPLETING.
 .TP
-COMPLETING
+\fBCOMPLETING\fR
 All jobs associated with this node are in the process of
 COMPLETING. This node state will be removed when
 all of the job's processes have terminated and the SLURM
@@ -312,7 +312,7 @@ epilog program (if any) has terminated. See the \fBEpilog\fR
 parameter description in the \fBslurm.conf\fR man page for
 more information.
 .TP
-DOWN
+\fBDOWN\fR
 The node is unavailable for use. SLURM can automatically
 place nodes in this state if some failure occurs. System
 administrators may also explicitly place nodes in this state. If
@@ -321,13 +321,13 @@ return it to service. See the \fBReturnToService\fR
 and \fBSlurmdTimeout\fR parameter descriptions in the
 \fBslurm.conf\fR(5) man page for more information.
 .TP
-DRAINED
+\fBDRAINED\fR
 The node is unavailable for use per system administrator
 request. See the \fBupdate node\fR command in the
 \fBscontrol\fR(1) man page or the \fBslurm.conf\fR(5) man page
 for more information.
 .TP
-DRAINING
+\fBDRAINING\fR
 The node is currently executing a job, but will not be allocated
 to additional jobs. The node state will be changed to state
 \fBDRAINED\fR when the last job on it completes. Nodes enter
@@ -335,10 +335,10 @@ this state per system administrator request. See the \fBupdate
 node\fR command in the \fBscontrol\fR(1) man page or the
 \fBslurm.conf\fR(5) man page for more information.
 .TP
-IDLE
+\fBIDLE\fR
 The node is not allocated to any jobs and is available for use.
 .TP
-UNKNOWN
+\fBUNKNOWN\fR
 The SLURM controller has just started and the node's state
 has not yet been determined.
@@ -430,7 +430,7 @@ Not Responding dev8
 .ec
 .SH "COPYING"
-Copyright (C) 2002 The Regents of the University of California.
+Copyright (C) 2002\-2006 The Regents of the University of California.
 Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
 UCRL-CODE-217948.
 .LP
...
doc/man/man1/smap.1 (+2, -2)
-.TH SMAP "1" "December 2005" "smap 1.0" "Slurm components"
+.TH SMAP "1" "March 2006" "smap 1.1" "Slurm components"
 .SH "NAME"
 smap \- graphically view information about SLURM jobs, partitions, and set
@@ -410,7 +410,7 @@ compiled into smap.
 The location of the SLURM configuration file.
 .SH "COPYING"
-Copyright (C) 2004 The Regents of the University of California.
+Copyright (C) 2004\-2006 The Regents of the University of California.
 Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
 UCRL-CODE-217948.
 .LP
...
doc/man/man1/squeue.1 (+20, -19)
-.TH SQUEUE "1" "December 2005" "squeue 1.0" "Slurm components"
+.TH SQUEUE "1" "March 2006" "squeue 1.1" "Slurm components"
 .SH "NAME"
 squeue \- view information about jobs located in the SLURM scheduling queue.
@@ -140,7 +140,8 @@ For job steps this field shows the elapsed time since execution began
 and thus will be inaccurate for job steps which have been suspended.
 .TP
 \fB%n\fR
-List of node names explicitly requested by the job
+List of node names (or base partitions on BlueGene systems) explicitly
+requested by the job
 .TP
 \fB%N\fR
 List of nodes allocated to the job or job step. In the case of a
@@ -262,26 +263,26 @@ These codes identify the reason that a job is waiting for execution.
 A job may be waiting for more than one reason, in which case only
 one of those reasons is displayed.
 .TP 20
-Dependency
+\fBDependency\fR
 This job is waiting for a dependent job to complete.
 .TP
-None
+\fBNone\fR
 No reason is set for this job.
 .TP
-PartitionDown
+\fBPartitionDown\fR
 The partition required by this job is in a DOWN state.
 .TP
-PartitionNodeLimit
+\fBPartitionNodeLimit\fR
 The number of nodes required by this job is outside of its
 partition's current limits.
 .TP
-PartitionTimeLimit
+\fBPartitionTimeLimit\fR
 The job's time limit exceeds its partition's current time limit.
 .TP
-Priority
+\fBPriority\fR
 One or more higher priority jobs exist for this partition.
 .TP
-Resources
+\fBResources\fR
 The job is waiting for resources to become available.
 .SH "JOB STATE CODES"
@@ -290,32 +291,32 @@ execution.
 The typical states are PENDING, RUNNING, SUSPENDED, COMPLETING, and COMPLETED.
 An explanation of each state follows.
 .TP 20
-CA CANCELLED
+\fBCA CANCELLED\fR
 Job was explicitly cancelled by the user or system administrator.
 The job may or may not have been initiated.
 .TP
-CD COMPLETED
+\fBCD COMPLETED\fR
 Job has terminated all processes on all nodes.
 .TP
-CG COMPLETING
+\fBCG COMPLETING\fR
 Job is in the process of completing. Some processes on some nodes may still be active.
 .TP
-F FAILED
+\fBF FAILED\fR
 Job terminated with non\-zero exit code or other failure condition.
 .TP
-NF NODE_FAIL
+\fBNF NODE_FAIL\fR
 Job terminated due to failure of one or more allocated nodes.
 .TP
-PD PENDING
+\fBPD PENDING\fR
 Job is awaiting resource allocation.
 .TP
-R RUNNING
+\fBR RUNNING\fR
 Job currently has an allocation.
 .TP
-S SUSPENDED
+\fBS SUSPENDED\fR
 Job has an allocation, but execution has been suspended.
 .TP
-TO TIMEOUT
+\fBTO TIMEOUT\fR
 Job terminated upon reaching its time limit.
 .SH "ENVIRONMENT VARIABLES"
@@ -401,7 +402,7 @@ Print information only about job step 65552.1:
 .ec
 .SH "COPYING"
-Copyright (C) 2002 The Regents of the University of California.
+Copyright (C) 2002\-2006 The Regents of the University of California.
 Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
 UCRL-CODE-217948.
 .LP
...