diff --git a/doc.zih.tu-dresden.de/docs/archive/system_altix.md b/doc.zih.tu-dresden.de/docs/archive/system_altix.md
index aa61353f4bec0c143b7c86892d8f3cb0a3c41d00..d3208237453cbbaf685e6fd4d9d4e1b28575b0c1 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_altix.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_altix.md
@@ -72,7 +72,7 @@ The current SGI Altix is based on the dual core Intel Itanium 2
 processor (code name "Montecito"). One core has the following basic
 properties:
 
-|                                     |                            |
+| Property                            | Value                      |
 |-------------------------------------|----------------------------|
 | clock rate                          | 1.6 GHz                    |
 | integer units                       | 6                          |
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
index c09260cf8d814a6a6835f981a25d1e8700c71df2..34505f93de1673aea883574459157b41c9f56357 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
@@ -1,4 +1,10 @@
-# Large Shared-Memory Node - HPE Superdome Flex
+# HPE Superdome Flex
+
+The HPE Superdome Flex is a large shared-memory node. It is especially well suited for
+data-intensive application scenarios, for example, processing extremely large data sets
+completely in main memory or in very fast NVMe storage.
+
+## Configuration Details
 
 - Hostname: `taurussmp8`
 - Access to all shared filesystems
@@ -10,29 +16,19 @@
 ## Local Temporary NVMe Storage
 
 There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
-of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
-100 GB per project on this NVMe storage.
+of fast NVMe storage is available at `/nvme/1/<projectname>`. A quota of 100 GB per project is
+currently set on this NVMe storage.
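+
+The following is a minimal, hypothetical sketch of how a job could stage its data into this
+volume; the variable name, application name, and source paths are placeholders that you need to
+adapt to your project:
+
+```bash
+# Stage input data onto the fast NVMe volume, run on it, and copy results back.
+# Replace <projectname> and the example paths with your project's actual directories.
+NVME_DIR=/nvme/1/<projectname>
+
+cp /path/to/large_input.dat "${NVME_DIR}/"
+./my_application "${NVME_DIR}/large_input.dat"
+cp "${NVME_DIR}/results.dat" /path/to/results/
+```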
 
-With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
-speed up their computations, a project's quota can be increased or dedicated volumes of up to the
-full capacity can be set up.
+If you send a more detailed proposal to
+[hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de) describing how this unique
+system (large shared memory + NVMe storage) can speed up your computations, your project's quota
+can be increased or dedicated volumes of up to the full capacity can be set up.
 
 ## Hints for Usage
 
-- granularity should be a socket (28 cores)
-- can be used for OpenMP applications with large memory demands
+- Granularity should be a socket (28 cores)
+- Can be used for OpenMP applications with large memory demands
-- To use OpenMPI it is necessary to export the following environment
-  variables, so that OpenMPI uses shared memory instead of Infiniband
-  for message transport. `export OMPI_MCA_pml=ob1;   export  OMPI_MCA_mtl=^mxm`
-- Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider
-  using Infiniband devices itself, but only shared-memory instead
+- To use OpenMPI, export the following environment variables so that it uses shared memory
+  instead of InfiniBand for message transport (see the sketch below):
+  `export OMPI_MCA_pml=ob1; export OMPI_MCA_mtl=^mxm`
+- With Intel MPI, set `I_MPI_FABRICS=shm` so that it does not consider InfiniBand devices at all,
+  but uses shared memory only
-
-## Open for Testing
-
-- At the moment we have set a quota of 100 GB per project on this NVMe
-  storage. As soon as the first projects come up with proposals how
-  this unique system (large shared memory + NVMe storage) can speed up
-  their computations, we will gladly increase this limit, for selected
-  projects.
-- Test users might have to clean-up their `/nvme` storage within 4 weeks
-  to make room for large projects.
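+
+Below is a minimal job sketch combining these settings. The application name is a placeholder;
+use the MPI block that matches the MPI library you actually use:
+
+```bash
+# Hypothetical environment setup for taurussmp8; adapt the thread count and the binary name.
+
+# Socket granularity: one socket has 28 cores
+export OMP_NUM_THREADS=28
+
+# OpenMPI: use shared memory instead of InfiniBand for message transport
+export OMPI_MCA_pml=ob1
+export OMPI_MCA_mtl=^mxm
+
+# Intel MPI alternative: restrict communication to the shared-memory fabric
+# export I_MPI_FABRICS=shm
+
+srun ./my_large_memory_application   # or mpirun, depending on how you launch MPI jobs
+```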