From bfe32c54725d9ec1c9aa98bc1ce5fb3e2891e38f Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Tue, 28 Sep 2021 23:27:35 +0200
Subject: [PATCH] Brief review

---
 .../docs/jobs_and_resources/sd_flex.md        | 25 +++++++++----------
 doc.zih.tu-dresden.de/wordlist.aspell         |  2 ++
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
index 04624da4e..6816f9758 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
@@ -1,24 +1,23 @@
-# Large shared-memory node - HPE Superdome Flex
+# Large Shared-Memory Node - HPE Superdome Flex
 
--   Hostname: taurussmp8
--   Access to all shared file systems
--   Slurm partition `julia`
--   32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
--   48 TB RAM (usable: 47 TB - one TB is used for cache coherence
-    protocols)
--   370 TB of fast NVME storage available at `/nvme/<projectname>`
+- Hostname: `taurussmp8`
+- Access to all shared file systems
+- Slurm partition `julia`
+- 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores each)
+- 48 TB RAM (usable: 47 TB; 1 TB is used for cache coherence protocols)
+- 370 TB of fast NVMe storage available at `/nvme/<projectname>`
 
-## Local temporary NVMe storage
+## Local Temporary NVMe Storage
 
 There are 370 TB of NVMe devices installed. For immediate access by all projects, a volume of 87 TB
-of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of 100
-GB per project on this NVMe storage.This is
+of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
+100 GB per project on this NVMe storage.
 
 With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
 speed up a project's computations, its quota can be increased, or dedicated volumes of up to the
 full capacity can be set up.
 
-## Hints for usage
+## Hints for Usage
 
 - Granularity should be one socket (28 cores)
 - Can be used for OpenMP applications with large memory demands (see the sketch below)
@@ -35,5 +34,5 @@ full capacity can be set up.
   this unique system (large shared memory + NVMe storage) can speed up
   their computations, we will gladly increase this limit for selected
   projects.
-- Test users might have to clean-up their /nvme storage within 4 weeks
+- Test users might have to clean up their `/nvme` storage within 4 weeks
   to make room for large projects.
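
To illustrate the usage hints above, here is a minimal sketch of a Slurm batch script for an
OpenMP job on one socket of the `julia` partition. The memory request, runtime, and application
name are illustrative placeholders, not values prescribed by this documentation:

```bash
#!/bin/bash
#SBATCH --partition=julia          # Superdome Flex partition (taurussmp8)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=28         # one socket, matching the recommended granularity
#SBATCH --mem=1500G                # placeholder memory request
#SBATCH --time=08:00:00            # placeholder runtime

# Use all allocated cores for OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun ./my_large_memory_app         # placeholder application
```

Submit with `sbatch`; pinning threads to the allocated socket (for example via `OMP_PLACES=cores`
and `OMP_PROC_BIND=close`) keeps the job within the socket granularity recommended above.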
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 744930ee8..1b4dedb5f 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -84,6 +84,7 @@ Horovod
 hostname
 Hostnames
 HPC
+HPE
 HPL
 html
 hyperparameter
@@ -230,6 +231,7 @@ stderr
 stdout
 subdirectories
 subdirectory
+Superdome
 SUSE
 SXM
 TBB
-- 
GitLab