From 447439b1406c7c6d8c308983d1e6fc23a59895d5 Mon Sep 17 00:00:00 2001
From: Morris Jette <jette@schedmd.com>
Date: Tue, 2 Oct 2012 11:03:36 -0700
Subject: [PATCH] Replace "resource manager" with "workload manager" in some
 web pages

---
 doc/html/download.shtml        |  6 +++---
 doc/html/gang_scheduling.shtml |  4 ++--
 doc/html/ibm-pe.shtml          |  4 ++--
 doc/html/news.shtml            |  2 +-
 doc/html/overview.shtml        |  4 ++--
 doc/html/quickstart.shtml      |  4 ++--
 doc/html/slurm.shtml           | 10 +++++-----
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/doc/html/download.shtml b/doc/html/download.shtml
index b09e1ccb907..4860ca35e18 100644
--- a/doc/html/download.shtml
+++ b/doc/html/download.shtml
@@ -44,7 +44,7 @@ you will need to build and install MUNGE, available from
 <li><a href="http://sourceforge.net/projects/auks/">AUKS</a><br>
 AUKS is an utility designed to ease Kerberos V credential support addition
 to non-interactive applications, like batch systems (SLURM, LSF, Torque, etc.).
-It includes a plugin for the SLURM resource manager. AUKS is not used as
+It includes a plugin for the SLURM workload manager. AUKS is not used as
 an authentication plugin by the SLURM code itself, but provides a mechanism
 for the application to manage Kerberos V credentials.</li>
 </ul><br>
@@ -139,7 +139,7 @@ http://io-watchdog.googlecode.com/files/io-watchdog-0.6.tar.bz2</a></li><br>
 
 <li><b>PAM Module (pam_slurm)</b><br>
 Pluggable Authentication Module (PAM) for restricting access to compute nodes
-where SLURM performs resource management. Access to the node is restricted to
+where SLURM performs workload management. Access to the node is restricted to
 user root and users who have been allocated resources on that node.
 NOTE: pam_slurm is included within the SLURM distribution for version 2.1
 or higher.
@@ -219,6 +219,6 @@ Portable Hardware Locality (hwloc)</a></li>
 </ul>
 
-<p style="text-align:center;">Last modified 13 August 2012</p>
+<p style="text-align:center;">Last modified 2 October 2012</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/gang_scheduling.shtml b/doc/html/gang_scheduling.shtml
index 1838bc1529d..5ef52e7102c 100644
--- a/doc/html/gang_scheduling.shtml
+++ b/doc/html/gang_scheduling.shtml
@@ -13,7 +13,7 @@ completes.
 See the <a href="preempt.html">Preemption</a> document for more information.
 </P>
 <P>
-A resource manager that supports timeslicing can improve responsiveness
+A workload manager that supports timeslicing can improve responsiveness
 and utilization by allowing more jobs to begin running sooner.
 Shorter-running jobs no longer have to wait in a queue behind longer-running
 jobs.
@@ -527,6 +527,6 @@ For now this idea could be experimented with by disabling memory support in
 the selector and submitting appropriately sized jobs.
 </P>
 
-<p style="text-align:center;">Last modified 29 June 2012</p>
+<p style="text-align:center;">Last modified 2 October 2012</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/ibm-pe.shtml b/doc/html/ibm-pe.shtml
index 50b15a5ac7b..b489869688f 100644
--- a/doc/html/ibm-pe.shtml
+++ b/doc/html/ibm-pe.shtml
@@ -400,7 +400,7 @@ For example:</p>
 <p>The poe command interacts with SLURM by loading a SLURM library providing
 a variety of functions for its use. You must specify the location of that
-library and note that SLURM is the resource manager in the file named
+library and note that SLURM is the workload manager in the file named
 "/etc/poe.limits". The library name is "libpermapi.so" and it is in
 installed with the other SLURM libraries in the subdirectory "lib/slurm".
 A sample "/etc/poe.limits" file is
@@ -494,6 +494,6 @@ startsrc -s pnsd -a -D
 
 <p class="footer"><a href="#top">top</a></p>
 
-<p style="text-align:center;">Last modified 6 September 2012</p></td>
+<p style="text-align:center;">Last modified 2 October 2012</p></td>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/news.shtml b/doc/html/news.shtml
index 4b8ed1b9fa7..d00a3f292ce 100644
--- a/doc/html/news.shtml
+++ b/doc/html/news.shtml
@@ -84,6 +84,6 @@ trojan library, then that library will be used by the SLURM daemon with
 unpredictable results. This was fixed in SLURM version 2.1.14.</li>
 </ul>
 
-<p style="text-align:center;">Last modified 28 September 2012</p>
+<p style="text-align:center;">Last modified 2 October 2012</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/overview.shtml b/doc/html/overview.shtml
index 3809134bbb0..d71fd2b96a5 100644
--- a/doc/html/overview.shtml
+++ b/doc/html/overview.shtml
@@ -5,7 +5,7 @@
 <p>The Simple Linux Utility for Resource Management (SLURM) is an open source,
 fault-tolerant, and highly scalable cluster management and job scheduling system
 for large and small Linux clusters. SLURM requires no kernel modifications for
-its operation and is relatively self-contained. As a cluster resource manager,
+its operation and is relatively self-contained. As a cluster workload manager,
 SLURM has three key functions. First, it allocates exclusive and/or non-exclusive
 access to resources (compute nodes) to users for some duration of time so they
 can perform work. Second, it provides a framework for starting, executing, and
@@ -202,6 +202,6 @@ PartitionName=DEFAULT MaxTime=UNLIMITED MaxNodes=4096
 PartitionName=batch Nodes=lx[0041-9999]
 </pre>
 
-<p style="text-align:center;">Last modified 5 May 2011</p>
+<p style="text-align:center;">Last modified 2 October 2012</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/quickstart.shtml b/doc/html/quickstart.shtml
index f7b33e95ab2..366933bf0ef 100644
--- a/doc/html/quickstart.shtml
+++ b/doc/html/quickstart.shtml
@@ -6,7 +6,7 @@
 <p>The Simple Linux Utility for Resource Management (SLURM) is an open source,
 fault-tolerant, and highly scalable cluster management and job scheduling system
 for large and small Linux clusters. SLURM requires no kernel modifications for
-its operation and is relatively self-contained. As a cluster resource manager,
+its operation and is relatively self-contained. As a cluster workload manager,
 SLURM has three key functions. First, it allocates exclusive and/or non-exclusive
 access to resources (compute nodes) to users for some duration of time so they
 can perform work. Second, it provides a framework for starting, executing, and
@@ -385,6 +385,6 @@ with SLURM are provided below.
 <li><a href="mpi_guide.html#quadrics_mpi">Quadrics MPI</a></li>
 </ul></p>
 
-<p style="text-align:center;">Last modified 24 February 2012</p>
+<p style="text-align:center;">Last modified 2 October 2012</p>
 
 <!--#include virtual="footer.txt"-->
diff --git a/doc/html/slurm.shtml b/doc/html/slurm.shtml
index c1ac526b51a..f7c902fc577 100644
--- a/doc/html/slurm.shtml
+++ b/doc/html/slurm.shtml
@@ -1,8 +1,8 @@
 <!--#include virtual="header.txt"-->
 
-<h1>SLURM: A Highly Scalable Resource Manager</h1>
+<h1>SLURM Workload Manager</h1>
 
-<p>SLURM is an open-source resource manager designed for Linux clusters of
+<p>SLURM is an open-source workload manager designed for Linux clusters of
 all sizes. It provides three key functions.
 First it allocates exclusive and/or non-exclusive access to resources
@@ -28,7 +28,7 @@ world-class computer centers and rely upon a
 or supporting sophisticated
 <a href="priority_multifactor.html">job prioritization</a> algorithms.</p>
 
-<p>While other resource managers do exist, SLURM is unique in several
+<p>While other workload managers do exist, SLURM is unique in several
 respects:
 <ul>
 <li><b>Scalability</b>: It is designed to operate in a heterogeneous cluster
@@ -55,7 +55,7 @@ can specify size and time limit ranges.</li>
 help identify load imbalances and other anomalies.</li>
 </ul></p>
 
-<p>SLURM provides resource management on many of the most powerful computers in
+<p>SLURM provides workload management on many of the most powerful computers in
 the world including:
 <ul>
 <li><a href="https://asc.llnl.gov/computing_resources/sequoia/">Sequoia</a>
@@ -104,6 +104,6 @@ for molecular dynamics simulation using 512 custom-designed
 ASICs and a three-dimensional torus interconnect.</li>
 </ul>
 
-<p style="text-align:center;">Last modified 28 September 2012</p>
+<p style="text-align:center;">Last modified 2 October 2012</p>
 
 <!--#include virtual="footer.txt"-->
-- 
GitLab
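The patch above is a mechanical phrase substitution repeated across several pages. As a sketch only (the commit does not record how the author actually produced it, and the sample file content below is a stand-in, not the real page text), a bulk replacement like this can be made and checked from a shell:

```shell
#!/bin/sh
# Sketch: bulk phrase substitution across doc pages, then verification.
# The directory layout mirrors the patch; the file content is hypothetical.
mkdir -p doc/html
printf 'As a cluster resource manager, SLURM has three key functions.\n' \
    > doc/html/overview.shtml

# GNU sed: edit every .shtml page in place.
sed -i 's/resource manager/workload manager/g' doc/html/*.shtml

# Verify that no occurrence of the old phrase remains.
if grep -r 'resource manager' doc/html/ >/dev/null; then
    echo 'old phrase still present'
else
    echo 'replacement complete'
fi
```

Running `git diff` (or `git format-patch`, as above) afterwards shows exactly the lines touched, which is what makes a rename of this kind easy to review.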