From 7dc9fb9336f782573724f3b81bb961fe1b2e05d7 Mon Sep 17 00:00:00 2001
From: Morris Jette <jette@schedmd.com>
Date: Mon, 24 Jun 2013 16:44:41 -0700
Subject: [PATCH] Update web pages

---
 doc/html/faq.shtml   | 10 ++++++++++
 doc/html/slurm.shtml | 22 ++++++++++------------
 2 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/doc/html/faq.shtml b/doc/html/faq.shtml
index 1c76057e4fc..5fd3d93e186 100644
--- a/doc/html/faq.shtml
+++ b/doc/html/faq.shtml
@@ -149,6 +149,8 @@ priority/multifactor plugin?</a></li>
 script for Slurm?</a></li>
 <li><a href="#add_nodes">What process should I follow to add nodes to Slurm?</a></li>
 <li><a href="#licenses">Can Slurm be configured to manage licenses?</a></li>
+<li><a href="#salloc_default_command">Can the salloc command be configured to
+launch a shell on a node in the job's allocation?</a></li>
 </ol>
 
 
@@ -1653,6 +1655,14 @@ without restarting the slurmctld daemon, but it is possible to dynamically
 reserve licenses and remove them from being available to jobs on the system
 (e.g. "scontrol update reservation=licenses_held licenses=foo:5,bar:2").</p>
 
+<p><a name="salloc_default_command"><b>50. Can the salloc command be configured to
+launch a shell on a node in the job's allocation?</b></a><br>
+Yes, just use the SallocDefaultCommand configuration parameter in your
+slurm.conf file as shown below.</p>
+<pre>
+SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL"
+</pre>
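+<p>With this parameter set, a command such as "salloc -N1" will create the
+allocation and then start an interactive shell on a node of the allocation
+rather than on the submit host. A session might look like the following
+(the job id and node name shown are only illustrative):</p>
+<pre>
+$ salloc -N1
+salloc: Granted job allocation 1234
+$ hostname
+node001
+</pre>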
+
 <p class="footer"><a href="#top">top</a></p>
 
 <p style="text-align:center;">Last modified 6 June 2013</p>
diff --git a/doc/html/slurm.shtml b/doc/html/slurm.shtml
index 14f2368d7be..0d610f5bb9f 100644
--- a/doc/html/slurm.shtml
+++ b/doc/html/slurm.shtml
@@ -16,10 +16,7 @@ pending work. </p>
 In its simplest configuration, it can be installed and configured in a
 couple of minutes (see <a href="http://www.linux-mag.com/id/7239/1/">
 Caos NSA and Perceus: All-in-one Cluster Software Stack</a>
-by Jeffrey B. Layton) and has been used by
-<a href="http://www.intel.com/">Intel</a> for their 48-core
-<a href="http://www.hpcwire.com/features/Intel-Unveils-48-Core-Research-Chip-78378487.html">
-"cluster on a chip"</a>.
+by Jeffrey B. Layton).
 More complex configurations can satisfy the job scheduling needs of 
 world-class computer centers and rely upon a
 <a href="http://www.mysql.com/">MySQL</a> database for archiving
@@ -58,11 +55,18 @@ help identify load imbalances and other anomalies.</li>
 <p>Slurm provides workload management on many of the most powerful computers in
 the world including:
 <ul>
+<li><a href="http://www.top500.org/blog/lists/2013/06/press-release/">
+Tianhe-2</a>, designed by
+<a href="http://english.nudt.edu.cn">The National University of Defense Technology (NUDT)</a>
+in China, has 16,000 nodes, each with two Intel Xeon Ivy Bridge processors and
+three Xeon Phi coprocessors, for a total of 3.1 million cores and a LINPACK
+performance of 33.86 Petaflops.</li>
+
 <li><a href="https://asc.llnl.gov/computing_resources/sequoia/">Sequoia</a>,
 an <a href="http://www.ibm.com">IBM</a> BlueGene/Q system at
 <a href="https://www.llnl.gov">Lawrence Livermore National Laboratory</a>
 with 1.6 petabytes of memory, 96 racks, 98,304 compute nodes, and 1.6
-million cores, with a peak performance of over 20 Petaflops.</li>
+million cores, with a LINPACK performance of 17.17 Petaflops.</li>
 
 <li><a href="http://www.tacc.utexas.edu/stampede">Stampede</a> at the
 <a href="http://www.tacc.utexas.edu">Texas Advanced Computing Center/University of Texas</a>
@@ -72,12 +76,6 @@ Intel Phi co-processors, plus
 128 <a href="http://www.nvidia.com">NVIDIA</a> GPUs
 delivering 2.66 Petaflops.</li>
 
-<li><a href="http://www.nytimes.com/2010/10/28/technology/28compute.html?_r=1&partner=rss&emc=rss">
-Tianhe-1A</a> designed by 
-<a href="http://english.nudt.edu.cn">The National University of Defense Technology (NUDT)</a>
-in China with 14,336 Intel CPUs and 7,168 NVDIA Tesla M2050 GPUs,
-with a peak performance of 2.507 Petaflops.</li>
-
 <li><a href="http://www-hpc.cea.fr/en/complexe/tgcc-curie.htm">TGCC Curie</a>,
 owned by <a href="http://www.genci.fr">GENCI</a> and operated in the TGCC by
 <a href="http://www.cea.fr">CEA</a>, Curie is offering 3 different fractions
@@ -112,6 +110,6 @@ named after Monte Rosa in the Swiss-Italian Alps, elevation 4,634m.
 
 </ul>
 
-<p style="text-align:center;">Last modified 7 December 2012</p>
+<p style="text-align:center;">Last modified 24 June 2013</p>
 
 <!--#include virtual="footer.txt"-->
-- 
GitLab