diff --git a/doc/html/big_sys.shtml b/doc/html/big_sys.shtml
index 7f302cfa7a314ab60ef941171657b326fc3dd8ae..786b6e83d49bc25a9c7201266007c7191957496d 100644
--- a/doc/html/big_sys.shtml
+++ b/doc/html/big_sys.shtml
@@ -4,8 +4,11 @@
 
 <p>This document contains Slurm administrator information specifically
 for clusters containing 1,024 nodes or more.
-The largest system currently managed by Slurm is 122,880 compute nodes
-and 1,966,080 cores (IBM Bluegene/Q at Lawrence Livermore National Laboratory).
+Large systems currently managed by Slurm include
+Tianhe-2 (at the National University of Defense Technology in China with
+16,000 compute nodes and 3.1 million cores) and
+Sequoia (IBM Bluegene/Q at Lawrence Livermore National Laboratory with
+98,304 compute nodes and 1.6 million cores).
 Slurm operation on systems orders of magnitude larger has been validated
 using emulation.
 Getting optimal performance at that scale does require some tuning and
@@ -156,6 +159,6 @@
 the hard limit in order to process all of the standard input and output
 connections to the launched tasks. It is recommended that you set the
 open file hard limit to 8192 across the cluster.</p>
 
-<p style="text-align:center;">Last modified 5 August 2013</p>
+<p style="text-align:center;">Last modified 13 November 2013</p>
 
 <!--#include virtual="footer.txt"-->