diff --git a/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
index 9ccce6361bcaa0bc024644f348708354d269a04f..49007a12354190a0fdde97a14a1a6bda922ea38d 100644
--- a/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
+++ b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
@@ -25,8 +25,15 @@ Infiniband access if (and only if) they have set the `--tmp`-option as well:
 >units can be specified using the suffix \[K\|M\|G\|T\]. This option
 >applies to job allocations.
 
-Keep in mind: Since the scratch file system are not available and the
-project file system is read-only mounted at the compute nodes you have
+Keep in mind: Since the scratch filesystem is not available and the
+project filesystem is mounted read-only on the compute nodes, you have
 to work in /tmp.
 
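+A minimal sketch of such a request (the `10G` value and the job script name
+are placeholders):
+
+```Bash
+sbatch --tmp=10G my_job.sh
+```
+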
 A simple job script should do this:
@@ -34,7 +34,7 @@ A simple job script should do this:
 - create a temporary directory on the compute node in `/tmp` and go
   there
 - start the application (under /sw/ or /projects/) using input data
-  from somewhere in the project file system
+  from somewhere in the project filesystem
 - archive and transfer the results to some global location
 
 ```Bash
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_altix.md b/doc.zih.tu-dresden.de/docs/archive/system_altix.md
index 951b06137a599fc95239e5d50144fd2fa205e096..aa61353f4bec0c143b7c86892d8f3cb0a3c41d00 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_altix.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_altix.md
@@ -22,9 +22,9 @@ The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF
 batch system running on `mars.hrsk.tu-dresden.de`. The actual placement of a submitted job may
 depend on factors like memory size, number of processors, and time limit.
 
-### File Systems
+### Filesystems
 
-All partitions share the same CXFS file systems `/work` and `/fastfs`.
+All partitions share the same CXFS filesystems `/work` and `/fastfs`.
 
 ### ccNUMA Architecture
 
@@ -123,8 +123,15 @@ nodes with dedicated resources for the user's job. Normally a job can be submitt
 
 #### LSF
 
-The batch system on Atlas is LSF. For general information on LSF, please follow
-[this link](platform_lsf.md).
+The batch system on the Altix is LSF; see also the
+[general information on LSF](platform_lsf.md).
 
 #### Submission of Parallel Jobs
 
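+A minimal sketch of a parallel submission (all values are examples; the MPI
+launch command may differ on this system):
+
+```Bash
+bsub -n 16 -W 01:00 -o out.%J mpirun ./a.out
+```
+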
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_atlas.md b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
index 0e744c4ab702afac9d3ac413ccfb5abd58fef817..2bebd5511e69f98370aea0c721cee272f940fbc6 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
@@ -22,7 +22,7 @@ kernel. Currently, the following hardware is installed:
 
 Mars and Deimos users: Please read the [migration hints](migrate_to_atlas.md).
 
-All nodes share the `/home` and `/fastfs` file system with our other HPC systems. Each
+All nodes share the `/home` and `/fastfs` filesystems with our other HPC systems. Each
 node has 180 GB local disk space for scratch mounted on `/tmp`. The jobs for the compute nodes are
 scheduled by the [Platform LSF](platform_lsf.md) batch system from the login nodes
 `atlas.hrsk.tu-dresden.de`.
@@ -86,8 +86,8 @@ user's job. Normally a job can be submitted with these data:
 
 #### LSF
 
-The batch system on Atlas is LSF. For general information on LSF, please follow
-[this link](platform_lsf.md).
+The batch system on Atlas is LSF; see also the
+[general information on LSF](platform_lsf.md).
 
 #### Submission of Parallel Jobs
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_venus.md b/doc.zih.tu-dresden.de/docs/archive/system_venus.md
index 2c0a1fe2b83b1c4e7d09f5e2f6495db8658cb7f9..56acf9b47081726c9662150f638ff430e099020c 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_venus.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_venus.md
@@ -19,9 +19,16 @@ the Linux operating system SLES 11 SP 3 with a kernel version 3.x.
 From our experience, most parallel applications benefit from using the additional hardware
 hyperthreads.
 
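+One way to make use of them is via standard Slurm options (a sketch; see the
+batch system notes below):
+
+```Bash
+srun --hint=multithread -n 16 ./a.out
+```
+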
-### File Systems
+### Filesystems
 
-Venus uses the same `home` file system as all our other HPC installations.
+Venus uses the same `/home` filesystem as all our other HPC installations.
 For computations, please use `/scratch`.
 
 ## Usage
@@ -77,8 +77,8 @@ nodes with dedicated resources for the user's job. Normally a job can be submitt
 - files for redirection of output and error messages,
 - executable and command line parameters.
 
-The batch system on Venus is Slurm. For general information on Slurm, please follow
-[this link](../jobs_and_resources/slurm.md).
+The batch system on Venus is Slurm; see also the
+[general information on Slurm](../jobs_and_resources/slurm.md).
 
 #### Submission of Parallel Jobs
 
@@ -92,10 +92,26 @@ On Venus, you can only submit jobs with a core number which is a multiple of 8 (
 srun -n 16 a.out
 ```
 
-**Please note:** There are different MPI libraries on Taurus and Venus,
+**Please note:** The MPI libraries on Venus differ from those on other ZIH systems,
 so you have to compile the binaries specifically for their target.
 
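+For example (a sketch; `mpicc` stands in for whichever MPI compiler wrapper
+is available on Venus):
+
+```Bash
+# Compile on Venus itself so the binary links against the local MPI library
+mpicc -O2 -o a.out app.c
+```
+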
-#### File Systems
+#### Filesystems
 
 - The large main memory on the system allows users to create RAM disks
   within their own jobs.
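+
+A hedged sketch of this (the standard Linux tmpfs under `/dev/shm` serves as
+a RAM disk; directory and file names are placeholders):
+
+```Bash
+mkdir -p /dev/shm/$USER/ramdisk
+cp input.dat /dev/shm/$USER/ramdisk/   # data now resides in main memory
+```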