Commit d0c764f3
Authored 13 years ago by Danny Auble

white space cleanup

Parent: f9f83931
Showing 1 changed file: doc/html/faq.shtml (+11, -11)
@@ -40,7 +40,7 @@
(e.g. place it into a <i>hold</i> state)?</a></li>
<li><a href="#mem_limit">Why are jobs not getting the appropriate
memory limit?</a></li>
<li><a href="#mailing_list">Is an archive available of messages posted to
the <i>slurm-dev</i> mailing list?</a></li>
<li><a href="#job_size">Can I change my job's size after it has started
running?</a></li>
@@ -641,7 +641,7 @@ problem described above.
Use the same solution for the AS (Address Space), RSS (Resident Set Size),
or other limits as needed.</p>
<p><a name="mailing_list"><b>23. Is an archive available of messages posted to
the <i>slurm-dev</i> mailing list?</b></a><br>
Yes, it is at <a href="http://groups.google.com/group/slurm-devel">
http://groups.google.com/group/slurm-devel</a></p>
@@ -665,7 +665,7 @@ job to be expanded.</li>
<p>Use the <i>scontrol</i> command to change a job's size either by specifying
a new node count (<i>NumNodes=</i>) for the job or identify the specific nodes
(<i>NodeList=</i>) that you want the job to retain.
Any job steps running on the nodes which are reliquished by the job will be
killed unless initiated with the <i>--no-kill</i> option.
After the job size is changed, some environment variables created by SLURM
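(As an aside to the hunk above: a resize along the lines the FAQ describes might look like the following sketch. Only the NumNodes=/NodeList= parameters and the --no-kill behavior come from the quoted text; the job id 1000 and the node names tux[0-1] are hypothetical.)

# Shrink a hypothetical running job 1000 to two nodes; steps on the
# relinquished nodes are killed unless the job was started with --no-kill.
$ scontrol update JobId=1000 NumNodes=2
# Or name the exact nodes the job should retain.
$ scontrol update JobId=1000 NodeList=tux[0-1]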
@@ -1288,15 +1288,15 @@ Index: src/slurmctld/ping_nodes.c
--- src/slurmctld/ping_nodes.c  (revision 15166)
+++ src/slurmctld/ping_nodes.c  (working copy)
@@ -283,9 +283,6 @@
 	node_ptr = &node_record_table_ptr[i];
 	base_state = node_ptr->node_state & NODE_STATE_BASE;
-	if (base_state == NODE_STATE_DOWN)
-		continue;
-
 #ifdef HAVE_FRONT_END /* Operate only on front-end */
 	if (i > 0)
 		continue;
</pre>
<p><a name="batch_lost"><b>32. What is the meaning of the error
@@ -1411,10 +1411,10 @@ advantage of its filtering and formatting options. For example:
$ squeue -tpd -h -o "scontrol update jobid=%i priority=1000" >my.script
</pre></p>
<p><a name="amazon_ec2"><b>41. Can SLURM be used to run jobs on
Amazon's EC2?</b></a></br>
<p>Yes, here is a description of use SLURM use with
<a href="http://aws.amazon.com/ec2/">Amazon's EC2</a> courtesy of
Ashley Pittman:</p>
<p>I do this regularly and have no problem with it, the approach I take is to
start as many instances as I want and have a wrapper around
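(As an aside: the squeue line in this hunk writes one scontrol command per pending job into my.script. A hedged sketch of how that file might then be used; the review step and the choice of sh as interpreter are assumptions, not from the FAQ.)

# Emit one "scontrol update" line per pending (-t pd) job, with no
# header (-h), then inspect the generated script and run it.
$ squeue -t pd -h -o "scontrol update jobid=%i priority=1000" >my.script
$ cat my.script
$ sh my.script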
@@ -1448,7 +1448,7 @@ pathname (starting with "/").
Otherwise it will be found in directory used for saving state
(<i>SlurmdSpoolDir</i>).</p>
<p>For <i>slurmstepd</i> the core file will depend upon when the failure
occurs. It will either be in spawned job's working directory on the same
location as that described above for the <i>slurmd</i> daemon.</p>
<p><a name="totalview"><b>43. How can TotalView be configured to operate with