tud-zih-energy / Slurm / Commits

Commit 7dc9fb93, authored 11 years ago by Morris Jette

Update web pages

parent e213de3c
Changes: 2 changed files, with 20 additions and 12 deletions

doc/html/faq.shtml: 10 additions, 0 deletions
doc/html/slurm.shtml: 10 additions, 12 deletions
doc/html/faq.shtml +10 −0 (view file @ 7dc9fb93)
...
@@ -149,6 +149,8 @@ priority/multifactor plugin?</a></li>
script for Slurm?</a></li>
<li><a href="#add_nodes">What process should I follow to add nodes to Slurm?</a></li>
<li><a href="#licenses">Can Slurm be configured to manage licenses?</a></li>
<li><a href="#salloc_default_command">Can the salloc command be configured to
launch a shell on a node in the job's allocation?</a></li>
</ol>
...
@@ -1653,6 +1655,14 @@ without restarting the slurmctld daemon, but it is possible to dynamically
reserve licenses and remove them from being available to jobs on the system
(e.g. "scontrol update reservation=licenses_held licenses=foo:5,bar:2").</p>
<p><a name="salloc_default_command"><b>50. Can the salloc command be configured to
launch a shell on a node in the job's allocation?</b></a><br>
Yes, just use the SallocDefaultCommand configuration parameter in your
slurm.conf file as shown below.</p>
<pre>
SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL"
</pre>
<p class="footer"><a href="#top">top</a></p>
<p style="text-align:center;">Last modified 6 June 2013</p>
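As a sketch of how this new FAQ entry plays out in practice — the SallocDefaultCommand line is taken from the patch above, while the surrounding comments and the salloc invocation are illustrative, not part of the commit:

```shell
# slurm.conf excerpt — only the SallocDefaultCommand line comes from the
# patch above; the rest of the cluster configuration is assumed.
SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL"

# With this in place, a plain salloc opens an interactive shell on a node
# of the allocation rather than on the submit host:
#   $ salloc -N1
```

The --mem-per-cpu=0 option is there so that this pty shell step does not count against the job's memory allocation, leaving the full allocation available to srun commands launched inside the session.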
...
doc/html/slurm.shtml +10 −12 (view file @ 7dc9fb93)
...
@@ -16,10 +16,7 @@ pending work. </p>
In its simplest configuration, it can be installed and configured in a
couple of minutes (see <a href="http://www.linux-mag.com/id/7239/1/">
Caos NSA and Perceus: All-in-one Cluster Software Stack</a>
by Jeffrey B. Layton) and has been used by
<a href="http://www.intel.com/">Intel</a> for their 48-core
<a href="http://www.hpcwire.com/features/Intel-Unveils-48-Core-Research-Chip-78378487.html">
"cluster on a chip"</a>.
by Jeffrey B. Layton).
More complex configurations can satisfy the job scheduling needs of
world-class computer centers and rely upon a
<a href="http://www.mysql.com/">MySQL</a> database for archiving
...
@@ -58,11 +55,18 @@ help identify load imbalances and other anomalies.</li>
<p>Slurm provides workload management on many of the most powerful computers in
the world including:
<ul>
<li><a href="http://www.top500.org/blog/lists/2013/06/press-release/">
Tianhe-2</a> designed by
<a href="http://english.nudt.edu.cn">The National University of Defense Technology (NUDT)</a>
in China has 16,000 nodes, each with two Intel Xeon IvyBridge processors and
three Xeon Phi processors for a total of 3.1 million cores and a peak
performance of 33.86 Petaflops.</li>
<li><a href="https://asc.llnl.gov/computing_resources/sequoia/">Sequoia</a>,
an <a href="http://www.ibm.com">IBM</a> BlueGene/Q system at
<a href="https://www.llnl.gov">Lawrence Livermore National Laboratory</a>
with 1.6 petabytes of memory, 96 racks, 98,304 compute nodes, and 1.6
million cores, with a peak performance of over 20 Petaflops.</li>
million cores, with a peak performance of over 17.17 Petaflops.</li>
<li><a href="http://www.tacc.utexas.edu/stampede">Stampede</a> at the
<a href="http://www.tacc.utexas.edu">Texas Advanced Computing Center/University of Texas</a>
...
@@ -72,12 +76,6 @@ Intel Phi co-processors, plus
128 <a href="http://www.nvidia.com">NVIDIA</a> GPUs
delivering 2.66 Petaflops.</li>
<li><a href="http://www.nytimes.com/2010/10/28/technology/28compute.html?_r=1&partner=rss&emc=rss">
Tianhe-1A</a> designed by
<a href="http://english.nudt.edu.cn">The National University of Defense Technology (NUDT)</a>
in China with 14,336 Intel CPUs and 7,168 NVDIA Tesla M2050 GPUs,
with a peak performance of 2.507 Petaflops.</li>
<li><a href="http://www-hpc.cea.fr/en/complexe/tgcc-curie.htm">TGCC Curie</a>,
owned by <a href="http://www.genci.fr">GENCI</a> and operated in the TGCC by
<a href="http://www.cea.fr">CEA</a>, Curie is offering 3 different fractions
...
@@ -112,6 +110,6 @@ named after Monte Rosa in the Swiss-Italian Alps, elevation 4,634m.
</ul>
<p style="text-align:center;">Last modified 7 December 2012</p>
<p style="text-align:center;">Last modified 24 June 2013</p>
<!--#include virtual="footer.txt"-->