Commit cab9a2e9 authored 15 years ago by Don Lipari
Updated the moab.shtml page in the next 2.1 build to state that the
JobPriority=run option is not yet operational.
parent 9603a226
Showing 1 changed file with 22 additions and 21 deletions: doc/html/moab.shtml (+22, −21)
@@ -47,7 +47,7 @@ partition configuration specifications.</p>
 <p>The default value of <i>SchedulerPort</i> is 7321.</p>
 <p>SLURM version 2.0 and higher have internal scheduling capabilities
-that are not compatable with Moab.
+that are not compatible with Moab.
 <ol>
 <li>Do not configure SLURM to use the "priority/multifactor" plugin
 as it would set job priorities which conflict with those set by Moab.</li>
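Illustration, not part of the diff: the slurm.conf settings this passage assumes might look roughly like the sketch below. SchedulerType=sched/wiki2 is an assumption here; only SchedulerPort and the priority/multifactor plugin are named in the excerpt.

    # slurm.conf fragment (illustrative sketch, not from this commit)
    SchedulerType=sched/wiki2     # wiki2 scheduler plugin used with Moab (assumed)
    SchedulerPort=7321            # default port per the paragraph above
    PriorityType=priority/basic   # i.e. do not use priority/multifactor alongside Moab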
@@ -81,11 +81,11 @@ This use of this key is essential to insure that a user
 not build his own program to cancel other user's jobs in
 SLURM.
 This should be no more than 32-bit unsigned integer and match
 the encryption key in Maui (<i>--with-key</i> on the
 configure line) or Moab (<i>KEY</i> parameter in the
 <i>moab-private.cfg</i> file).
 Note that SLURM's wiki plugin does not include a mechanism
-to submit new jobs, so even without this key nobody could
+to submit new jobs, so even without this key, nobody can
 run jobs as another user.</p>
 <p><b>EPort</b> is an event notification port in Moab.
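Illustration, not part of the diff: the key matching described above means the same value must appear on both sides. A minimal sketch, assuming AuthKey is the corresponding wiki.conf parameter and using a made-up value:

    # wiki.conf (SLURM side) -- AuthKey assumed; value is an example only
    AuthKey=1234
    # moab-private.cfg (Moab side) -- must hold the same value; exact syntax per Moab docs
    KEY=1234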
@@ -110,7 +110,7 @@ BackupAddr configured in slurm.conf.</p>
 <p><b>ExcludePartitions</b> is used to identify partitions
 whose jobs are to be scheduled directly by SLURM rather
 than Moab.
-This only effects jobs which are submitted using Slurm
+This only affects jobs which are submitted using SLURM
 commands (i.e. srun, salloc or sbatch, NOT msub from Moab).
 These jobs will be scheduled on a First-Come-First-Served
 basis.
@@ -120,7 +120,7 @@ will be outside of Moab's control.
 Note that Moab controls for resource reservation, fair share
 scheduling, etc. will not apply to the initiation of these jobs.
 If more than one partition is to be scheduled directly by
-Slurm, use a comma separator between their names.</p>
+SLURM, use a comma separator between their names.</p>
 <p><b>HidePartitionJobs</b> identifies partitions whose jobs are not
 to be reported to Moab.
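Illustration, not part of the diff: a wiki.conf sketch of the comma-separated form described above, with invented partition names:

    # wiki.conf fragment (illustrative; partition names are examples)
    ExcludePartitions=debug,interactive   # these partitions are scheduled by SLURM itself, FCFS
    HidePartitionJobs=debug               # jobs in this partition are not reported to Moab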
@@ -130,10 +130,10 @@ If more than one partition is to have its jobs hidden, use a comma
 separator between their names.</p>
 <p><b>HostFormat</b> controls the format of job task lists built
-by Slurm and reported to Moab.
+by SLURM and reported to Moab.
 The default value is "0", for which each host name is listed
 individually, once per processor (e.g. "tux0:tux0:tux1:tux1:...").
-A value of "1" uses Slurm hostlist expressions with processor
+A value of "1" uses SLURM hostlist expressions with processor
 counts (e.g. "tux[0-16]*2").
 This is currently experimental.
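Illustration, not part of the diff: the two HostFormat values encode the same allocation differently; a sketch reusing the tux example from the excerpt:

    # wiki.conf fragment (illustrative)
    HostFormat=0    # task list like "tux0:tux0:tux1:tux1" (one entry per processor)
    #HostFormat=1   # task list like "tux[0-1]*2" (hostlist expression with processor counts; experimental)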
@@ -146,16 +146,17 @@ The default value is 10 seconds.
 The value should match <i>JOBAGGREGATIONTIME</i> configured
 in the <i>moab.cnf</i> file.</p>
-<p><b>JobPriority</b> controls the scheduling of newly arriving
-jobs in SLURM.
-SLURM can either place all newly arriving jobs in a HELD state
-(priority = 0) and let Moab decide when and where to run the jobs
-or SLURM can control when and where to run jobs.
-In the later case, Moab can modify the priorities of pending jobs
-to re-order the job queue or just monitor system state.
-Possible values are "hold" and "run" with "hold" being the default.</p>
+<p><b>JobPriority</b> controls the scheduling of newly arriving jobs
+in SLURM. Possible values are "hold" and "run" with "hold" being the
+default. When <i>JobPriority=hold</i>, SLURM places all newly arriving
+jobs in a HELD state (priority = 0) and lets Moab decide when and
+where to run the jobs. When <i>JobPriority=run</i>, SLURM controls
+when and where to run jobs.
+<b>Note:</b> The "run" option implementation has yet to be completed.
+Once the "run" option is available, Moab will be able to modify the
+priorities of pending jobs to re-order the job queue.</p>
-<p>Here is a sample <i>wiki.conf</i> file
+<h4>Sample <i>wiki.conf</i> file</h4>
 <pre>
 # wiki.conf
 # SLURM's wiki plugin configuration file
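Illustration, not part of the diff: given the note this commit adds, only the default value is currently usable; a hedged wiki.conf fragment:

    # wiki.conf fragment (illustrative)
    JobPriority=hold    # default: new jobs held (priority = 0) until Moab decides when and where to run them
    #JobPriority=run    # SLURM schedules jobs itself; per this commit, not yet operational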
@@ -226,10 +227,10 @@ as user root:</p>
 </pre>
 <p> For typical batch jobs, the job transfer from Moab to
 SLURM is performed using <i>sbatch</i> and occurs instantaneously.
-The environment is loadeded by a SLURM daemon (slurmd) when the
+The environment is loaded by a SLURM daemon (slurmd) when the
 batch job begins execution.
 For interactive jobs (<i>msub -I ...</i>), the job transfer
-from Moab to SLURM can not be completed until the environment
+from Moab to SLURM cannot be completed until the environment
 variables are loaded, during which time the Moab daemon is
 completely non-responsive.
 To insure that Moab remains operational, SLURM will abort the above
@@ -247,7 +248,7 @@ cache files for users. The program can be found in the SLURM
 distribution at <i>contribs/env_cache_builder.c</i>.
 This program can support a longer timeout than Moab, but
 will report errors for users for whom the environment file
-can not be automatically build (typically due to the user's
+cannot be automatically build (typically due to the user's
 "dot" files spawning another shell so the desired command
 never execution).
 For such user, you can manually build a cache file.
@@ -280,6 +281,6 @@ Write the output to a file with the same name as the user in the
 <p class="footer"><a href="#top">top</a></p>
-<p style="text-align:center;">Last modified 14 May 2009</p>
+<p style="text-align:center;">Last modified 14 December 2009</p>
 <!--#include virtual="footer.txt"-->