Commit a64a1d0c authored by Marcin Stolarek, committed by Ben Roberts

Docs - squeue: simplify the %D node count comment, making it more general

The %D node count is not fully evaluated before the job starts. In a real
environment, when -N is not specified, computing it can be genuinely
complicated, so it does not make sense to report the smallest possible
number here for a pending job.

Bug 8113
parent 6c5d1438
@@ -1,3 +1,3 @@
-.TH squeue "1" "Slurm Commands" "April 2019" "Slurm Commands"
+.TH squeue "1" "Slurm Commands" "February 2020" "Slurm Commands"
 .SH "NAME"
 squeue \- view information about jobs located in the Slurm scheduling queue.
@@ -198,9 +198,8 @@ Number of nodes allocated to the job or the minimum number of nodes
 required by a pending job. The actual number of nodes allocated to a pending
 job may exceed this number if the job specified a node range count (e.g.
 minimum and maximum node counts) or the job specifies a processor
-count instead of a node count and the cluster contains nodes with varying
-processor counts. As a job is completing this number will reflect the
-current number of nodes allocated.
+count instead of a node count. As a job is completing this number will reflect
+the current number of nodes allocated.
 (Valid for jobs only)
 .TP
 \fB%e\fR
@@ -716,9 +715,8 @@ Number of nodes allocated to the job or the minimum number of nodes
 required by a pending job. The actual number of nodes allocated to a pending
 job may exceed this number if the job specified a node range count (e.g.
 minimum and maximum node counts) or the job specifies a processor
-count instead of a node count and the cluster contains nodes with varying
-processor counts. As a job is completing this number will reflect the
-current number of nodes allocated.
+count instead of a node count. As a job is completing this number will reflect
+the current number of nodes allocated.
 (Valid for jobs only)
 .TP
 \fBnumtasks\fR
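As context for the %D field documented above, here is a minimal usage sketch. It assumes a host where Slurm's squeue is installed; the fallback echo branch is only for illustration on hosts without Slurm. %i (job ID), %T (job state), and %D (node count) are standard squeue format specifiers.

```shell
# Show job ID, state, and the %D node count for all queued jobs.
# For pending (PD) jobs, %D is the minimum node count, per the man page text.
if command -v squeue >/dev/null 2>&1; then
    squeue --format="%.18i %.8T %.6D"
else
    # Fallback so the script is runnable without a Slurm installation.
    echo "squeue not available on this host"
fi
```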