From 18d55013b17d648df18e7521ce2bb5cfd4e78e1c Mon Sep 17 00:00:00 2001
From: Morris Jette <jette@schedmd.com>
Date: Wed, 13 Nov 2013 13:01:09 -0800
Subject: [PATCH] Update large systems named on web page

---
 doc/html/big_sys.shtml | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/doc/html/big_sys.shtml b/doc/html/big_sys.shtml
index 7f302cfa7a3..786b6e83d49 100644
--- a/doc/html/big_sys.shtml
+++ b/doc/html/big_sys.shtml
@@ -4,8 +4,11 @@
 
 <p>This document contains Slurm administrator information specifically
 for clusters containing 1,024 nodes or more.
-The largest system currently managed by Slurm is 122,880 compute nodes
-and 1,966,080 cores (IBM Bluegene/Q at Lawrence Livermore National Laboratory).
+Large systems currently managed by Slurm include
+Tianhe-2 (at the National University of Defense Technology in China with
+16,000 compute nodes and 3.1 million cores) and
+Sequoia (IBM Bluegene/Q at Lawrence Livermore National Laboratory with
+98,304 compute nodes and 1.6 million cores).
 Slurm operation on systems orders of magnitude larger has been validated
 using emulation.
 Getting optimal performance at that scale does require some tuning and
@@ -156,6 +159,6 @@
 the hard limit in order to process all of the standard input and output
 connections to the launched tasks. It is recommended that you set the
 open file hard limit to 8192 across the cluster.</p>
 
-<p style="text-align:center;">Last modified 5 August 2013</p>
+<p style="text-align:center;">Last modified 13 November 2013</p>
 <!--#include virtual="footer.txt"-->
-- 
GitLab
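
As a minimal sketch of the open file hard limit recommendation retained in
the second hunk above: on clusters using pam_limits, the limit could be
raised for all users via /etc/security/limits.conf. The file path, the
all-users wildcard, and the verification session below are illustrative
assumptions, not part of the patch itself.

    # /etc/security/limits.conf -- raise the open file hard limit for
    # all users, matching the 8192 value recommended in big_sys.shtml
    *       hard    nofile  8192

    # After starting a fresh login session, verify the new hard limit:
    $ ulimit -Hn
    8192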