diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
index 0a0004d7eb70090662d53705ac02c4ef116a1f0f..1529565f8555712da22f15e16141d8be3ad7d301 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
@@ -57,7 +57,7 @@ These `/data/horse` and `/data/walrus` can be accesed via workspaces. Please ref
 !!! Warning
 
-    All old filesystems fill be shutdown by the end of 2023.
+    All old filesystems will be shut down by the end of 2023.
- 
+
-    To work with your data from Taurus you might have to move/copy them to the new storages.
+    To work with your data from Taurus, you might have to move/copy it to the new storage systems.
 
 For this, we have four new [datamover nodes](/data_transfer/datamover) that have mounted all storages
@@ -110,7 +110,7 @@ of the old and new system. (Do not use the datamovers from Taurus!)
 ??? "Migration from `/lustre/ssd` or `/beegfs`"
 
     **You** are entirely responsible for the transfer of these data to the new location.
-    Start the dtrsync process as soon as possible. (And maybe repeat it at a later time.) 
+    Start the dtrsync process as soon as possible. (And maybe repeat it at a later time.)
 
 ??? "Migration from `/lustre/scratch2` aka `/scratch`"
 
@@ -120,7 +120,7 @@ of the old and new system. (Do not use the datamovers from Taurus!)
     to `/data/walrus/warm_archive/ws`.
 
-    In case you need to update this (Gigabytes, not Terabytes!) please run `dtrsync` like in
+    In case you need to update this (Gigabytes, not Terabytes!), please run `dtrsync` as in
-    `dtrsync -a /data/old/lustre/scratch2/ws/0/my-workspace/newest/  /data/horse/lustre/scratch2/ws/0/my-workspace/newest/`   
+    `dtrsync -a /data/old/lustre/scratch2/ws/0/my-workspace/newest/  /data/horse/lustre/scratch2/ws/0/my-workspace/newest/`
 
 ??? "Migration from `/warm_archive`"
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/mpi_issues.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/mpi_issues.md
index 6fac180b08d19e24ba28e658539f9664e16c0c93..95f6eb58990233e85c5dfa535e0c1bde0c29ade6 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/mpi_issues.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/mpi_issues.md
@@ -28,7 +28,7 @@ or setting the option as argument, in case you invoke `mpirun` directly
 mpirun --mca io ^ompio ...
 ```
 
-## Mpirun on partition `alpha`and `ml`
+## Mpirun on partition `alpha` and `ml`
 
 Using `mpirun` on partitions `alpha` and `ml` leads to wrong resource distribution when more than
-one node is involved. This yields a strange distribution like e.g. `SLURM_NTASKS_PER_NODE=15,1`
+one node is involved. This yields a strange distribution such as `SLURM_NTASKS_PER_NODE=15,1`