tud-zih-energy / Slurm / Commits / f84107d1
Commit f84107d1
Authored 9 years ago by Gennaro Oliva; committed by Brian Christiansen 9 years ago
Man pages fixes.

Parent: 6cc4a9f7
Showing 2 changed files with 6 additions and 2 deletions:

doc/man/man1/scontrol.1: 4 additions, 0 deletions
doc/man/man5/slurm.conf.5: 2 additions, 2 deletions
doc/man/man1/scontrol.1 (+4, -0)

@@ -779,11 +779,13 @@ The list of nodes allocated to the job.
 The NodeIndices expose the internal indices into the node table
 associated with the node(s) allocated to the job.
 .TP
+.na
 \fINtasksPerN:B:S:C\fP=
 <tasks_per_node>:<tasks_per_baseboard>:<tasks_per_socket>:<tasks_per_core>
 Specifies the number of tasks to be started per hardware component (node,
 baseboard, socket and core).
 Unconstrained values may be shown as "0" or "*".
+.ad
 .TP
 \fIPreemptTime\fP
 Time at which job was signaled that it was selected for preemption.
@@ -796,10 +798,12 @@ Time the job ran prior to last suspend.
 \fIReason\fP
 The reason job is not running: e.g., waiting "Resources".
 .TP
+.na
 \fIReqB:S:C:T\fP=
 <baseboard_count>:<socket_per_baseboard_count>:<core_per_socket_count>:<thread_per_core_count>
 Specifies the count of various hardware components requested by the job.
 Unconstrained values may be shown as "0" or "*".
+.ad
 .TP
 \fISecsPreSuspend\fP=<seconds>
 If the job is suspended, this is the run time accumulated by the job
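The four added lines bracket long, unbreakable value strings with the standard roff requests `.na` ("no adjust") and `.ad` ("adjust"). In the default justified fill mode, a long token such as the NtasksPerN value list cannot be broken, so troff stretches the inter-word spaces of nearby lines to keep both margins flush; `.na` switches to ragged-right output for that span and `.ad` restores justification afterwards. A minimal sketch of the pattern (text abridged from the diff above):

```roff
.TP
.\" .na = "no adjust": keep filling lines, but leave the right
.\" margin ragged so the long value string below cannot force
.\" huge inter-word gaps in the surrounding lines.
.na
\fINtasksPerN:B:S:C\fP=
<tasks_per_node>:<tasks_per_baseboard>:<tasks_per_socket>:<tasks_per_core>
Specifies the number of tasks to be started per hardware component.
.\" .ad = "adjust": restore the default justification for the
.\" rest of the page.
.ad
```

To check the effect locally, the page can be rendered with, e.g., `groff -man -Tutf8 doc/man/man1/scontrol.1 | less`.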
doc/man/man5/slurm.conf.5 (+2, -2)

@@ -1283,8 +1283,8 @@ May not exceed 65533.
 .TP
 \fBMemLimitEnforce\fR
 If set to "no" then Slurm will not terminate the job or the job step
-if they exceeds the value requested using the --mem-per-cpu option of
-salloc/sbatch/srun. This is useful if jobs need to specify --mem-per-cpu
+if they exceeds the value requested using the \-\-mem\-per\-cpu option of
+salloc/sbatch/srun. This is useful if jobs need to specify \-\-mem\-per\-cpu
 for scheduling but they should not be terminate if they exceed the
 estimated value. The default value is 'yes', terminate the job/step
 if exceed the requested memory.
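The slurm.conf.5 change escapes each dash of the option name as `\-`. In roff input, a bare `-` is a hyphen, which groff may typeset as a typographic hyphen in UTF-8 output, whereas `\-` produces a minus-sign/ASCII hyphen; escaping option names this way keeps the rendered text copy-and-pasteable into a shell. A minimal before/after sketch of the convention:

```roff
.\" Unescaped: the two dashes may render as typographic hyphens,
.\" which break copy-and-paste of the option name into a shell.
This is useful if jobs need to specify --mem-per-cpu
.\" Escaped: \- always yields a literal ASCII hyphen-minus.
This is useful if jobs need to specify \-\-mem\-per\-cpu
```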