tud-zih-energy / Slurm · Commits

Commit 8f1b5e4c, authored 16 years ago by Moe Jette

minor tweaks in explanation of memory limit enforcement.
Parent: 1b45fcf8
Showing 3 changed files with 7 additions and 5 deletions:

doc/html/cons_res_share.shtml  (+2, −0)
doc/html/gang_scheduling.shtml (+2, −0)
doc/html/preempt.shtml         (+3, −5)
doc/html/cons_res_share.shtml (+2, −0)
...
@@ -217,6 +217,8 @@ allocate individual CPUs to jobs.</P>
<P>Default and maximum values for memory on a per node or per CPU basis can
be configured using the following options: <CODE>DefMemPerCPU</CODE>,
<CODE>DefMemPerNode</CODE>, <CODE>MaxMemPerCPU</CODE> and <CODE>MaxMemPerNode</CODE>.
Users can use the <CODE>--mem</CODE> or <CODE>--mem-per-cpu</CODE> option
at job submission time to specify their memory requirements.
Enforcement of a job's memory allocation is performed by the accounting
plugin, which periodically gathers data about running jobs. Set
<CODE>JobAcctGather</CODE> and <CODE>JobAcctFrequency</CODE> to
...
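The options this hunk documents could appear in slurm.conf along the following lines. The values are illustrative only, not part of this commit; note also that the hunk names the parameters <CODE>JobAcctGather</CODE> and <CODE>JobAcctFrequency</CODE>, while the other files touched by this commit spell them <CODE>JobAcctGatherType</CODE> and <CODE>JobAcctGatherFrequency</CODE>, which is the form sketched here:

```
# Illustrative slurm.conf fragment; all values are made-up examples.
DefMemPerCPU=1024           # default MB granted per allocated CPU
MaxMemPerCPU=4096           # upper bound a job may request per CPU
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30   # sample running jobs every 30 seconds
```

With these set, the accounting plugin samples each running job's memory use at the configured interval, which is what makes enforcement of the allocation possible.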
doc/html/gang_scheduling.shtml (+2, −0)
...
@@ -59,6 +59,8 @@ a memory requirement, we also recommend configuring
It may also be desirable to configure
<I>MaxMemPerCPU</I> (maximum memory per allocated CPU) or
<I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
Users can use the <I>--mem</I> or <I>--mem-per-cpu</I> option
at job submission time to specify their memory requirements.
</LI>
<LI>
<B>JobAcctGatherType and JobAcctGatherFrequency</B>:
...
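Specifying memory at submission time, as the added sentence describes, looks like the following. The script name and sizes are hypothetical, and the two flags are alternatives rather than companions:

```
# Hypothetical submissions; sizes are in megabytes.
sbatch --mem=2048 my_job.sh               # 2048 MB per allocated node
sbatch --mem-per-cpu=512 -n 8 my_job.sh   # 512 MB for each of 8 CPUs
```

Choosing <I>--mem-per-cpu</I> scales the request with the CPU count, which pairs naturally with the per-CPU limits (<I>DefMemPerCPU</I>, <I>MaxMemPerCPU</I>) discussed above.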
doc/html/preempt.shtml (+3, −5)
...
@@ -50,7 +50,9 @@ a memory requirement, we also recommend configuring
<I>DefMemPerNode</I> (default memory per allocated node).
It may also be desirable to configure
<I>MaxMemPerCPU</I> (maximum memory per allocated CPU) or
<I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
<I>MaxMemPerNode</I> (maximum memory per allocated node) in <I>slurm.conf</I>.
Users can use the <I>--mem</I> or <I>--mem-per-cpu</I> option
at job submission time to specify their memory requirements.
</LI>
<LI>
<B>JobAcctGatherType and JobAcctGatherFrequency</B>:
...
@@ -161,7 +163,6 @@ Here are the Partition settings:
[user@n16 ~]$ <B>grep PartitionName /shared/slurm/slurm.conf</B>
PartitionName=active Priority=1 Default=YES Shared=FORCE:1 Nodes=n[12-16]
PartitionName=hipri Priority=2 Shared=FORCE:1 Nodes=n[12-16]
[user@n16 ~]$
</PRE>
<P>
The <I>runit.pl</I> script launches a simple load-generating app that runs
...
@@ -186,7 +187,6 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST
487 active runit.pl user R 0:05 1 n14
488 active runit.pl user R 0:05 1 n15
489 active runit.pl user R 0:04 1 n16
[user@n16 ~]$
</PRE>
<P>
Now submit a short-running 3-node job to the <I>hipri</I> partition:
...
@@ -202,7 +202,6 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST
486 active runit.pl user S 0:27 1 n13
487 active runit.pl user S 0:26 1 n14
490 hipri runit.pl user R 0:03 3 n[12-14]
[user@n16 ~]$
</PRE>
<P>
Job 490 in the <I>hipri</I> partition preempted jobs 485, 486, and 487 from
...
@@ -221,7 +220,6 @@ JOBID PARTITION NAME USER ST TIME NODES NODELIST
487 active runit.pl user R 0:29 1 n14
488 active runit.pl user R 0:59 1 n15
489 active runit.pl user R 0:58 1 n16
[user@n16 ~]$
</PRE>
...
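The walkthrough above shows job 490 in the higher-priority <I>hipri</I> partition suspending jobs 485-487 in <I>active</I> on the nodes they share. That decision rule can be sketched as a toy model; the job records and the helper below are illustrative, not Slurm internals:

```python
# Toy sketch of partition-priority preemption, mirroring the example above.
# Job records and preempt_for() are illustrations, not Slurm code.
from dataclasses import dataclass, field


@dataclass
class Job:
    jobid: int
    partition: str
    nodes: set = field(default_factory=set)


# Partition priorities as in the example slurm.conf shown above.
PARTITION_PRIORITY = {"active": 1, "hipri": 2}


def preempt_for(new_job, running):
    """Return ids of running jobs that share nodes with the incoming job
    and sit in a lower-priority partition (candidates for suspension)."""
    suspended = []
    for job in running:
        overlaps = bool(job.nodes & new_job.nodes)
        lower = PARTITION_PRIORITY[job.partition] < PARTITION_PRIORITY[new_job.partition]
        if overlaps and lower:
            suspended.append(job.jobid)
    return suspended


running = [
    Job(485, "active", {"n12"}),
    Job(486, "active", {"n13"}),
    Job(487, "active", {"n14"}),
    Job(488, "active", {"n15"}),
    Job(489, "active", {"n16"}),
]
new = Job(490, "hipri", {"n12", "n13", "n14"})
print(preempt_for(new, running))  # [485, 486, 487]
```

Jobs 488 and 489 survive because their nodes (n15, n16) do not overlap the incoming allocation, matching the squeue output in the diff.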