diff --git a/doc.zih.tu-dresden.de/docs/archive/knl_nodes.md b/doc.zih.tu-dresden.de/docs/archive/knl_nodes.md
index 78e4cabc7b4f40574e084834d69175cdd9fa29ef..ff7d4d9168e0a904e3e7258a18236cc65a9ea27b 100644
--- a/doc.zih.tu-dresden.de/docs/archive/knl_nodes.md
+++ b/doc.zih.tu-dresden.de/docs/archive/knl_nodes.md
@@ -1,57 +1,53 @@
 # Intel Xeon Phi (Knights Landing)
 
-Xeon Phi nodes are **Out of Service**!
+!!! warning
+
+    This page is deprecated. The Xeon Phi nodes are **out of service**.
 
 The nodes `taurusknl[1-32]` are equipped with
 
-- Intel Xeon Phi procesors: 64 cores Intel Xeon Phi 7210 (1,3 GHz)
+- Intel Xeon Phi 7210 processor (64 cores, 1.3 GHz)
 - 96 GB RAM DDR4
 - 16 GB MCDRAM
 - `/scratch`, `/lustre/ssd`, `/projects`, `/home` are mounted
 
 Benchmarks, so far (single node):
 
-- HPL (Linpack): 1863.74 GFlops
-- SGEMM (single precision) MKL: 4314 GFlops
+- HPL (LINPACK): 1863.74 GFLOPS
+- SGEMM (single precision) MKL: 4314 GFLOPS
 - Stream (only 1.4 GiB memory used): 431 GB/s
 
 Each of them can run 4 threads, so one can start a job here with e.g.
 
-```Bash
-srun -p knl -N 1 --mem=90000 -n 1 -c 64 a.out
+```console
+marie@login$ srun -p knl -N 1 --mem=90000 -n 1 -c 64 a.out
 ```
 
-In order to get their optimal performance please re-compile your code
-with the most recent Intel compiler and explicitely set the compiler
-flag `-xMIC-AVX512`.
+To achieve optimal performance on these nodes, please re-compile your code with the most recent
+Intel compiler and explicitly set the compiler flag `-xMIC-AVX512`.
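+
+A minimal sketch of such a build, assuming an Intel compiler such as `icc` is available in your
+environment and `example.c` stands in for your source file:
+
+```console
+marie@login$ icc -O3 -xMIC-AVX512 example.c -o a.out
+```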
 
-MPI works now, we recommend to use the latest Intel MPI version
-(intelmpi/2017.1.132). To utilize the OmniPath Fabric properly, make
-sure to use the "ofi" fabric provider, which is the new default set by
-the module file.
+MPI works now; we recommend using the latest Intel MPI version (intelmpi/2017.1.132). To utilize
+the OmniPath Fabric properly, make sure to use the "ofi" fabric provider, which is the new default
+set by the module file.
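+
+If you need to verify or override the provider, the Intel MPI environment variable
+`I_MPI_FABRICS` can be set explicitly before launching (a sketch, assuming the
+`intelmpi/2017.1.132` module is loaded and `a.out` is a placeholder MPI binary):
+
+```console
+marie@login$ export I_MPI_FABRICS=ofi
+marie@login$ srun -p knl -N 2 -n 128 a.out
+```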
 
-Most nodes have a fixed configuration for cluster mode (Quadrant) and
-memory mode (Cache). For testing purposes, we have configured a few
-nodes with different modes (other configurations are possible upon
-request):
+Most nodes have a fixed configuration for cluster mode (Quadrant) and memory mode (Cache). For
+testing purposes, we have configured a few nodes with different modes (other configurations are
+possible upon request):
 
 | Nodes              | Cluster Mode | Memory Mode |
 |:-------------------|:-------------|:------------|
 | `taurusknl[1-28]`  | Quadrant     | Cache       |
-| `taurusknl29`        | Quadrant     | Flat        |
+| `taurusknl29`      | Quadrant     | Flat        |
 | `taurusknl[30-32]` | SNC4         | Flat        |
 
-They have SLURM features set, so that you can request them specifically
-by using the SLURM parameter `--constraint` where multiple values can
-be linked with the & operator, e.g. `--constraint="SNC4&Flat"`. If you
-don't set a constraint, your job will run preferably on the nodes with
-Quadrant+Cache.
+These nodes have Slurm features set so that you can request them specifically using the Slurm
+parameter `--constraint`, where multiple values can be linked with the `&` operator, e.g.
+`--constraint="SNC4&Flat"`. If you don't set a constraint, your job will preferably run on the
+nodes with Quadrant+Cache.
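+
+For example, a job that explicitly requests one of the SNC4/Flat nodes could be submitted like
+this (a sketch based on the `srun` line above; `a.out` is a placeholder binary):
+
+```console
+marie@login$ srun -p knl --constraint="SNC4&Flat" -N 1 --mem=90000 -n 1 -c 64 a.out
+```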
 
-Note that your performance might take a hit if your code is not
-NUMA-aware and does not make use of the Flat memory mode while running
-on the nodes that have those modes set, so you might want to use
-`--constraint="Quadrant&Cache"` in such a case to ensure your job does not
-run on an unfavorable node (which might happen if all the others are
-already allocated).
+Note that performance might take a hit if your code is not NUMA-aware and does not make use of
+the Flat memory mode while running on nodes that have those modes set. In such a case, you might
+want to use `--constraint="Quadrant&Cache"` to ensure your job does not run on an unfavorable node
+(which might happen if all the other nodes are already allocated).
 
-[Knl Best Practice Guide](https://prace-ri.eu/training-support/best-practice-guides/best-practice-guide-knights-landing/)
+[KNL Best Practice Guide](https://prace-ri.eu/training-support/best-practice-guides/best-practice-guide-knights-landing/)
diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index fe7e7459dc9da48dea514805706f4d7d7972c813..6069771f5ceefef39881088750272ef074104b11 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -128,6 +128,7 @@ nav:
       - System Titan: archive/system_titan.md
       - System Triton: archive/system_triton.md
       - System Venus: archive/system_venus.md
+      - KNL Nodes: archive/knl_nodes.md
     - UNICORE Rest API: archive/unicore_rest_api.md
     - Vampir Trace: archive/vampir_trace.md
     - Windows Batchjobs: archive/windows_batch.md
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index ab4da040e64199ed321389665808c9fefe255807..011c877881dd378b7fdf3f0aabfe30dc20cd8243 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -10,6 +10,7 @@ CPU
 CPUs
 CUDA
 CXFS
+DDR
 DFG
 EasyBuild
 fastfs
@@ -19,6 +20,7 @@ Flink
 Fortran
 GFLOPS
 gfortran
+GiB
 gnuplot
 Gnuplot
 GPU
@@ -27,6 +29,7 @@ Haswell
 HDFS
 Horovod
 HPC
+HPL
 icc
 icpc
 ifort
@@ -36,6 +39,8 @@ Itanium
 jpg
 Jupyter
 Keras
+KNL
+LINPACK
 LoadLeveler
 lsf
 LSF
@@ -82,6 +87,7 @@ scancel
 scontrol
 scp
+SGEMM
 SGI
 SHA
 SHMEM
 SLES
@@ -100,4 +106,5 @@ Theano
 tmp
 Trition
 Vampir
+Xeon
 ZIH