diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/SDFlex.md b/doc.zih.tu-dresden.de/docs/use_of_hardware/SDFlex.md
index c5db1653742bcd64a36f056ed93337a4f5711b75..8f536f9cd58d5bbe3fae80565e3af515284a84ad 100644
--- a/doc.zih.tu-dresden.de/docs/use_of_hardware/SDFlex.md
+++ b/doc.zih.tu-dresden.de/docs/use_of_hardware/SDFlex.md
@@ -8,35 +8,32 @@
     protocols)
 -   370 TB of fast NVME storage available at `/nvme/<projectname>`
 
-### Local temporary NVMe storage
+## Local temporary NVMe storage
 
-There are 370 TB of NVMe devices installed. For immediate access for all
-projects, a volume of 87 TB of fast NVMe storage is available at
-/nvme/1/\<projectname>. For testing, we have set a quota of 100 GB per
-project on this NVMe storage.This is
+There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
+of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
+100 GB per project on this NVMe storage.
 
-With a more detailled proposal on how this unique system (large shared
-memory + NVMe storage) can speed up their computations, a project's
-quota can be increased or dedicated volumes of up to the full capacity
-can be set up.
+With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
+speed up its computations, a project's quota can be increased, or dedicated volumes of up to the
+full capacity can be set up.
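+
+A minimal usage sketch for staging data onto the NVMe volume and checking usage against the quota
+(the source path and `<projectname>` are placeholders; adapt them to your project):
+
+```bash
+# Copy input data to the fast NVMe volume (source path is only an example)
+cp -r /scratch/<projectname>/input /nvme/1/<projectname>/
+
+# Check how much of the 100 GB test quota is currently used
+du -sh /nvme/1/<projectname>
+```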
 
 ## Hints for usage
 
--   granularity should be a socket (28 cores)
--   can be used for OpenMP applications with large memory demands
--   To use OpenMPI it is necessary to export the following environment
-    variables, so that OpenMPI uses shared memory instead of Infiniband
-    for message transport. \<pre>export OMPI_MCA_pml=ob1;   export
-    OMPI_MCA_mtl=^mxm\</pre>
--   Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider
-    using InfiniBand devices itself, but only shared-memory instead
+- granularity should be a socket (28 cores)
+- can be used for OpenMP applications with large memory demands
+- To use OpenMPI, it is necessary to export the following environment variables, so that OpenMPI
+  uses shared memory instead of InfiniBand for message transport:
+  `export OMPI_MCA_pml=ob1; export OMPI_MCA_mtl=^mxm`
+- Use `I_MPI_FABRICS=shm` so that Intel MPI does not consider InfiniBand devices at all and uses
+  shared memory instead (see the batch script sketch after this list)
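+
+The following is a minimal sketch of a batch script that applies these settings. It assumes the
+Slurm batch system and a placeholder MPI binary `./my_app`; adapt the task layout and the
+application to your project:
+
+```bash
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --ntasks=4            # example: 4 MPI ranks, one per socket
+#SBATCH --cpus-per-task=28    # one full socket (28 cores) per rank
+
+# OpenMPI: use shared memory instead of InfiniBand for message transport
+export OMPI_MCA_pml=ob1
+export OMPI_MCA_mtl=^mxm
+
+# Intel MPI: restrict communication to shared memory
+export I_MPI_FABRICS=shm
+
+srun ./my_app
+```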
 
 ## Open for Testing
 
--   At the moment we have set a quota of 100 GB per project on this NVMe
-    storage. As soon as the first projects come up with proposals how
-    this unique system (large shared memory + NVMe storage) can speed up
-    their computations, we will gladly increase this limit, for selected
-    projects.
--   Test users might have to clean-up their /nvme storage within 4 weeks
-    to make room for large projects.
+- At the moment, we have set a quota of 100 GB per project on this NVMe storage. As soon as the
+  first projects come up with proposals on how this unique system (large shared memory + NVMe
+  storage) can speed up their computations, we will gladly increase this limit for selected
+  projects.
+- Test users might have to clean up their `/nvme` storage within 4 weeks to make room for large
+  projects.