diff --git a/Dockerfile b/Dockerfile
index 731e831c9b2fc1ff1068ae2b2a80c04bbf0039c7..f6bf9841524472c7af2522ce9cd641e9c5dbd824 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -4,9 +4,7 @@ FROM python:3.8-bullseye
 # Base #
 ########
 
-COPY ./ /src/
-
-RUN pip install -r /src/doc.zih.tu-dresden.de/requirements.txt
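+# Install the MkDocs documentation generator and the Material theme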
+RUN pip install "mkdocs>=1.1.2" "mkdocs-material>=7.1.0"
 
 ##########
 # Linter #
@@ -16,6 +14,6 @@ RUN apt update && apt install -y nodejs npm aspell
 
 RUN npm install -g markdownlint-cli markdown-link-check
 
-WORKDIR /src/doc.zih.tu-dresden.de
+WORKDIR /docs
 
 CMD ["mkdocs", "build", "--verbose", "--strict"]
diff --git a/doc.zih.tu-dresden.de/README.md b/doc.zih.tu-dresden.de/README.md
index e7de9cd2eed1f6f72183047886655d522846bc34..57cb9a23f94bc38d004049816873cd3105d618d6 100644
--- a/doc.zih.tu-dresden.de/README.md
+++ b/doc.zih.tu-dresden.de/README.md
@@ -40,13 +40,7 @@ Now, create a local clone of your fork
 
 #### Install Dependencies
 
-**TODO:** Description
-
-```Shell Session
-~ cd hpc-compendium/doc.zih.tu-dresden.de
-~ pip install -r requirements.txt
-```
-
+See [Installation with Docker](#preview-using-mkdocs-with-dockerfile).
+
 **TODO:** virtual environment
 **TODO:** What we need for markdownlinter and checks?
 
@@ -393,280 +387,3 @@ BigDataFrameworksApacheSparkApacheFlinkApacheHadoop.md is not included in nav
 pika.md is not included in nav
 specific_software.md is not included in nav
 ```
-
-### Pre-commit Git Hook
-
-You can automatically run checks whenever you try to commit a change. In this case, failing checks
-prevent commits (unless you use option `--no-verify`). This can be accomplished by adding a
-pre-commit hook to your local clone of the repository. The following code snippet shows how to do
-that:
-
-```bash
-cp doc.zih.tu-dresden.de/util/pre-commit .git/hooks/
-```
-
-!!! note
-    The pre-commit hook only works, if you can use docker without using `sudo`. If this is not
-    already the case, use the command `adduser $USER docker` to enable docker commands without
-    `sudo` for the current user. Restart the docker daemons afterwards.
-
-## Content Rules
-
-**Remark:** Avoid using tabs both in markdown files and in `mkdocs.yaml`. Type spaces instead.
-
-### New Page and Pages Structure
-
-The pages structure is defined in the configuration file [mkdocs.yaml](mkdocs.yml).
-
-```Shell Session
-docs/
-  - Home: index.md
-  - Application for HPC Login: application.md
-  - Request for Resources: req_resources.md
-  - Access to the Cluster: access.md
-  - Available Software and Usage:
-    - Overview: software/overview.md
-  ...
-```
-
-To add a new page to the documentation follow these two steps:
-
-1. Create a new markdown file under `docs/subdir/file_name.md` and put the documentation inside. The
-   sub directory and file name should follow the pattern `fancy_title_and_more.md`.
-1. Add `subdir/file_name.md` to the configuration file `mkdocs.yml` by updating the navigation
-   section.
-
-Make sure that the new page **is not floating**, i.e., it can be reached directly from the documentation
-structure.
-
-### Markdown
-
-1. Please keep things simple, i.e., avoid using fancy markdown dialects.
-    * [Cheat Sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
-    * [Style Guide](https://github.com/google/styleguide/blob/gh-pages/docguide/style.md)
-
-1. Do not add large binary files or high resolution images to the repository. See this valuable
-   document for [image optimization](https://web.dev/fast/#optimize-your-images).
-
-1. [Admonitions](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) may be
-actively used, especially for longer code examples, warnings, tips, important information that
-should be highlighted, etc. Code examples, longer than half screen height should collapsed
-(and indented):
-
-??? example
-    ```Bash
-    [...]
-    # very long example here
-    [...]
-    ```
-
-### Writing Style
-
-**TODO** Guide [Issue #14](#14)
-
-* Capitalize headings, e.g. *Exclusive Reservation of Hardware*
-* Give keywords in link texts, e.g. [Code Blocks](#code-blocks-and-syntax-highlighting) is more
-  descriptive than [this subsection](#code-blocks-and-syntax-highlighting)
-
-### Spelling and Technical Wording
-
-To provide a consistent and high quality documentation, and help users to find the right pages,
-there is a list of conventions w.r.t. spelling and technical wording.
-
-* Language settings: en_us
-* `I/O` not `IO`
-* `Slurm` not `SLURM`
-* `Filesystem` not `file system`
-* `ZIH system` and `ZIH systems` not `Taurus`, `HRSKII`, `our HPC systems`, etc.
-* `Workspace` not `work space`
-* avoid term `HPC-DA`
-* Partition names after the keyword *partition*: *partition `ml`* not *ML partition*, *ml
-  partition*, *`ml` partition*, *"ml" partition*, etc.
-
-### Code Blocks and Command Prompts
-
-Showing commands and sample output is an important part of all technical documentation. To make
-things as clear for readers as possible and provide a consistent documentation, some rules have to
-be followed.
-
-1. Use ticks to mark code blocks and commands, not italic font.
-1. Specify language for code blocks ([see below](#code-blocks-and-syntax-highlighting)).
-1. All code blocks and commands should be runnable from a login node or a node within a specific
-   partition (e.g., `ml`).
-1. It should be clear from the prompt, where the command is run (e.g. local machine, login node or
-   specific partition).
-
-#### Prompts
-
-We follow this rules regarding prompts:
-
-| Host/Partition         | Prompt           |
-|------------------------|------------------|
-| Login nodes            | `marie@login$`   |
-| Arbitrary compute node | `marie@compute$` |
-| `haswell` partition    | `marie@haswell$` |
-| `ml` partition         | `marie@ml$`      |
-| `alpha` partition      | `marie@alpha$`   |
-| `alpha` partition      | `marie@alpha$`   |
-| `romeo` partition      | `marie@romeo$`   |
-| `julia` partition      | `marie@julia$`   |
-| Localhost              | `marie@local$`   |
-
-*Remarks:*
-
-* **Always use a prompt**, even there is no output provided for the shown command.
-* All code blocks should use long parameter names (e.g. Slurm parameters), if available.
-* All code blocks which specify some general command templates, e.g. containing `<` and `>`
-  (see [Placeholders](#mark-placeholders)), should use `bash` for the code block. Additionally,
-  an example invocation, perhaps with output, should be given with the normal `console` code block.
-  See also [Code Block description below](#code-blocks-and-syntax-highlighting).
-* Using some magic, the prompt as well as the output is identified and will not be copied!
-* Stick to the [generic user name](#data-privacy-and-generic-user-name) `marie`.
-
-#### Code Blocks and Syntax Highlighting
-
-This project makes use of the extension
-[pymdownx.highlight](https://squidfunk.github.io/mkdocs-material/reference/code-blocks/) for syntax
-highlighting.  There is a complete list of supported
-[language short codes](https://pygments.org/docs/lexers/).
-
-For consistency, use the following short codes within this project:
-
-With the exception of command templates, use `console` for shell session and console:
-
-```` markdown
-```console
-marie@login$ ls
-foo
-bar
-```
-````
-
-Make sure that shell session and console code blocks are executable on the login nodes of HPC system.
-
-Command templates use [Placeholders](#mark-placeholders) to mark replaceable code parts. Command
-templates should give a general idea of invocation and thus, do not contain any output. Use a
-`bash` code block followed by an invocation example (with `console`):
-
-```` markdown
-```bash
-marie@local$ ssh -NL <local port>:<compute node>:<remote port> <zih login>@tauruslogin.hrsk.tu-dresden.de
-```
-
-```console
-marie@local$ ssh -NL 5901:172.24.146.46:5901 marie@tauruslogin.hrsk.tu-dresden.de
-```
-````
-
-Also use `bash` for shell scripts such as jobfiles:
-
-```` markdown
-```bash
-#!/bin/bash
-#SBATCH --nodes=1
-#SBATCH --time=01:00:00
-#SBATCH --output=slurm-%j.out
-
-module load foss
-
-srun a.out
-```
-````
-
-!!! important
-
-    Use long parameter names where possible to ease understanding.
-
-`python` for Python source code:
-
-```` markdown
-```python
-from time import gmtime, strftime
-print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
-```
-````
-
-`pycon` for Python console:
-
-```` markdown
-```pycon
->>> from time import gmtime, strftime
->>> print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
-2021-08-03 07:20:33
-```
-````
-
-Line numbers can be added via
-
-```` markdown
-```bash linenums="1"
-#!/bin/bash
-
-#SBATCH -N 1
-#SBATCH -n 23
-#SBATCH -t 02:10:00
-
-srun a.out
-```
-````
-
-_Result_:
-
-![lines](misc/lines.png)
-
-Specific Lines can be highlighted by using
-
-```` markdown
-```bash hl_lines="2 3"
-#!/bin/bash
-
-#SBATCH -N 1
-#SBATCH -n 23
-#SBATCH -t 02:10:00
-
-srun a.out
-```
-````
-
-_Result_:
-
-![lines](misc/highlight_lines.png)
-
-### Data Privacy and Generic User Name
-
-Where possible, replace login, project name and other private data with clearly arbitrary placeholders.
-E.g., use the generic login `marie` and the corresponding project name `p_marie`.
-
-```console
-marie@login$ ls -l
-drwxr-xr-x   3 marie p_marie      4096 Jan 24  2020 code
-drwxr-xr-x   3 marie p_marie      4096 Feb 12  2020 data
--rw-rw----   1 marie p_marie      4096 Jan 24  2020 readme.md
-```
-
-### Mark Omissions
-
-If showing only a snippet of a long output, omissions are marked with `[...]`.
-
-### Mark Placeholders
-
-Stick to the Unix rules on optional and required arguments, and selection of item sets:
-
-* `<required argument or value>`
-* `[optional argument or value]`
-* `{choice1|choice2|choice3}`
-
-## Graphics and Attachments
-
-All graphics and attachments are saved within `misc` directory of the respective sub directory in
-`docs`.
-
-The syntax to insert a graphic or attachment into a page is
-
-```Bash
-![PuTTY: Switch on X11](misc/putty2.jpg)
-{: align="center"}
-```
-
-The attribute `align` is optional. By default, graphics are left aligned. **Note:** It is crucial to
-have `{: align="center"}` on a new line.
diff --git a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
index f5492e7f35ff26e425bff9c7b246f7c0d4a29fb0..2be83c1f78668abb764586741a7de764b5baa112 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
@@ -51,6 +51,12 @@ should be highlighted, etc. Code examples, longer than half screen height should
 ## Writing Style
 
 * Capitalize headings, e.g. *Exclusive Reservation of Hardware*
+* Give keywords in link texts, e.g. [Code Blocks](#code-blocks-and-syntax-highlighting) is more
+  descriptive than [this subsection](#code-blocks-and-syntax-highlighting)
+* Use active over passive voice
+    * Write with confidence. This confidence should be reflected in the documentation, so that
+      readers trust and follow it.
+    * Example: `We recommend something` instead of `Something is recommended.`
 
 ## Spelling and Technical Wording
 
@@ -61,8 +67,17 @@ there is a list of conventions w.r.t. spelling and technical wording.
 * `I/O` not `IO`
 * `Slurm` not `SLURM`
 * `Filesystem` not `file system`
-* `ZIH system` and `ZIH systems` not `Taurus` etc. if possible
+* `ZIH system` and `ZIH systems` not `Taurus`, `HRSKII`, `our HPC systems`, etc.
 * `Workspace` not `work space`
+* Avoid the term `HPC-DA`
+* Partition names after the keyword *partition*: *partition `ml`* not *ML partition*, *ml
+  partition*, *`ml` partition*, *"ml" partition*, etc.
+
+### Long Options
+
+* Use long over short options, e.g. `srun --nodes=2 --ntasks-per-node=4 ...` is preferred over
+  `srun -N 2 -n 4 ...`
+* Use `module` over the short front-end `ml` in documentation and examples
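+
+For illustration, a minimal example following both rules (the module name is only exemplary):
+
+```console
+marie@login$ module load GCC/10.2.0
+marie@login$ srun --ntasks=1 --time=00:10:00 hostname
+```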
 
 ## Code Blocks and Command Prompts
 
@@ -114,7 +129,7 @@ For consistency, use the following short codes within this project:
 
 With the exception of command templates, use `console` for shell session and console:
 
-```` markdown
+````markdown
 ```console
 marie@login$ ls
 foo
@@ -128,7 +143,7 @@ Command templates use [Placeholders](#mark-placeholders) to mark replaceable cod
 templates should give a general idea of invocation and thus, do not contain any output. Use a
 `bash` code block followed by an invocation example (with `console`):
 
-```` markdown
+````markdown
 ```bash
 marie@local$ ssh -NL <local port>:<compute node>:<remote port> <zih login>@tauruslogin.hrsk.tu-dresden.de
 ```
@@ -140,7 +155,7 @@ marie@local$ ssh -NL 5901:172.24.146.46:5901 marie@tauruslogin.hrsk.tu-dresden.d
 
 Also use `bash` for shell scripts such as job files:
 
-```` markdown
+````markdown
 ```bash
 #!/bin/bash
 #SBATCH --nodes=1
@@ -159,7 +174,7 @@ srun a.out
 
 `python` for Python source code:
 
-```` markdown
+````markdown
 ```python
 from time import gmtime, strftime
 print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
@@ -168,7 +183,7 @@ print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
 
 `pycon` for Python console:
 
-```` markdown
+````markdown
 ```pycon
 >>> from time import gmtime, strftime
 >>> print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
@@ -178,7 +193,7 @@ print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
 
 Line numbers can be added via
 
-```` markdown
+````markdown
 ```bash linenums="1"
 #!/bin/bash
 
@@ -190,6 +205,10 @@ srun a.out
 ```
 ````
 
+_Result_:
+
+![lines](misc/lines.png)
+
 Specific Lines can be highlighted by using
 
 ```` markdown
@@ -204,6 +223,10 @@ srun a.out
 ```
 ````
 
+_Result_:
+
+![lines](misc/highlight_lines.png)
+
 ### Data Privacy and Generic User Name
 
 Where possible, replace login, project name and other private data with clearly arbitrary placeholders.
diff --git a/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md b/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
index d3b87d46d6f45af76665b49a74fb3ed7f580edcb..dd44fafa136d63ae80267226f70dc00563507ba3 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
@@ -86,7 +86,25 @@ To avoid a lot of retyping, use the following in your shell:
 alias wiki="docker run --name=hpc-compendium --rm -it -w /docs --mount src=$PWD/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c"
 ```
 
-You are now ready to use the different checks
+You are now ready to use the different checks. However, we suggest trying the pre-commit hook first.
+
+#### Pre-commit Git Hook
+
+We recommend running the checks automatically whenever you try to commit a change. In this case, failing
+checks prevent commits (unless you use option `--no-verify`). This can be accomplished by adding a
+pre-commit hook to your local clone of the repository. The following code snippet shows how to do
+that:
+
+```bash
+cp doc.zih.tu-dresden.de/util/pre-commit .git/hooks/
+```
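+
+If you ever need to commit without running the checks (not recommended), you can bypass the hook,
+for example:
+
+```console
+marie@local$ git commit --no-verify
+```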
+
+!!! note
+    The pre-commit hook only works if you can use docker without `sudo`. If this is not
+    already the case, use the command `adduser $USER docker` to enable docker commands without
+    `sudo` for the current user. Restart the docker daemon afterwards.
+
+Read on if you want to run a specific check.
 
 #### Linter
 
diff --git a/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md b/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
index 31105a5208932ff49ee86d939ed8faa744dad854..e0d91cccc3f534e0d7057b72f1d6479f8932b6aa 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
@@ -4,14 +4,21 @@
 
     Ink is better than the best memory.
 
+In principle, there are three ways to contribute to this documentation.
+
 ## Contribute via Issue
 
 Users can contribute to the documentation via the
 [GitLab issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
 For that, open an issue to report typos and missing documentation or request for more precise
-wording etc.  ZIH staff will get in touch with you to resolve the issue and improve the
+wording etc. ZIH staff will get in touch with you to resolve the issue and improve the
 documentation.
 
+??? tip "Create an issue in GitLab"
+
+    ![GIF showing how to create an issue in GitLab](misc/create_gitlab_issue.gif)
+    {: align="center"}
+
 !!! warning "HPC support"
 
     Non-documentation issues and requests need to be send as ticket to
@@ -20,8 +27,15 @@ documentation.
 ## Contribute via Web IDE
 
 GitLab offers a rich and versatile web interface to work with repositories. To fix typos and edit
-source files, just select the file of interest and click the `Edit` button. A text and commit
-editor are invoked: Do your changes, add a meaningful commit message and commit the changes.
+source files, follow these steps:
+
+1. Navigate to the repository at
+   [https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium)
+   and log in.
+1. Select the right branch.
+1. Select the file of interest in `doc.zih.tu-dresden.de/docs/...` and click the `Edit` button.
+1. A text and commit editor are invoked: Do your changes, add a meaningful commit message and commit
+   the changes.
 
 The more sophisticated integrated Web IDE is reached from the top level menu of the repository or
 by selecting any source file.
@@ -29,12 +43,9 @@ by selecting any source file.
 Other git services might have an equivalent web interface to interact with the repository. Please
 refer to the corresponding documentation for further information.
 
-<!--This option of contributing is only available for users of-->
-<!--[gitlab.hrz.tu-chemnitz.de](https://gitlab.hrz.tu-chemnitz.de). Furthermore, -->
-
 ## Contribute Using Git Locally
 
 For experienced Git users, we provide a Docker container that includes all checks of the CI engine
 used in the back-end. Using them should ensure that merge requests will not be blocked
 due to automatic checking.
-For details, see [Work Locally Using Containers](contribute_container.md).
+For details, refer to the page [Work Locally Using Containers](contribute_container.md).
diff --git a/doc.zih.tu-dresden.de/docs/contrib/misc/create_gitlab_issue.gif b/doc.zih.tu-dresden.de/docs/contrib/misc/create_gitlab_issue.gif
new file mode 100644
index 0000000000000000000000000000000000000000..cb4910897903283e43b21feadcdb2acf2f42c15e
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/contrib/misc/create_gitlab_issue.gif differ
diff --git a/doc.zih.tu-dresden.de/misc/highlight_lines.png b/doc.zih.tu-dresden.de/docs/contrib/misc/highlight_lines.png
similarity index 100%
rename from doc.zih.tu-dresden.de/misc/highlight_lines.png
rename to doc.zih.tu-dresden.de/docs/contrib/misc/highlight_lines.png
diff --git a/doc.zih.tu-dresden.de/misc/lines.png b/doc.zih.tu-dresden.de/docs/contrib/misc/lines.png
similarity index 100%
rename from doc.zih.tu-dresden.de/misc/lines.png
rename to doc.zih.tu-dresden.de/docs/contrib/misc/lines.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
index 13b4e5b127c7d1013b1868e823522599fbca55e2..a5bb1980e342b8f1c19ecb6b610a5d481cd98268 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
@@ -1,9 +1,9 @@
 # Batch System Slurm
 
-When log in to ZIH systems, you are placed on a login node. There you can manage your
+When logging in to ZIH systems, you are placed on a login node. There, you can manage your
 [data life cycle](../data_lifecycle/overview.md),
 [setup experiments](../data_lifecycle/experiments.md), and
-edit and prepare jobs. The login nodes are not suited for computational work!  From the login nodes,
+edit and prepare jobs. The login nodes are not suited for computational work! From the login nodes,
 you can interact with the batch system, e.g., submit and monitor your jobs.
 
 ??? note "Batch System"
@@ -32,7 +32,7 @@ ZIH uses the batch system Slurm for resource management and job scheduling.
 Just specify the resources you need in terms
 of cores, memory, and time and your Slurm will place your job on the system.
 
-This pages provides a brief overview on
+This page provides a brief overview of
 
 * [Slurm options](#options) to specify resource requirements,
 * how to submit [interactive](#interactive-jobs) and [batch jobs](#batch-jobs),
@@ -60,39 +60,39 @@ There are three basic Slurm commands for job submission and execution:
 Using `srun` directly on the shell will be blocking and launch an
 [interactive job](#interactive-jobs). Apart from short test runs, it is recommended to submit your
 jobs to Slurm for later execution by using [batch jobs](#batch-jobs). For that, you can conveniently
-put the parameters directly in a [job file](#job-files) which you can submit using `sbatch [options]
-<job file>`.
+put the parameters directly in a [job file](#job-files), which you can submit using `sbatch
+[options] <job file>`.
 
-During runtime, the environment variable `SLURM_JOB_ID` will be set to the id of your job. The job
+At runtime, the environment variable `SLURM_JOB_ID` is set to the id of your job. The job
 id is unique. The id allows you to [manage and control](#manage-and-control-jobs) your jobs.
 
 ## Options
 
-The following table holds the most important options for `srun/sbatch/salloc` to specify resource
+The following table contains the most important options for `srun/sbatch/salloc` to specify resource
 requirements and control communication.
 
 ??? tip "Options Table"
 
     | Slurm Option               | Description |
     |:---------------------------|:------------|
-    | `-n, --ntasks=<N>`         | number of (MPI) tasks (default: 1) |
-    | `-N, --nodes=<N>`          | number of nodes; there will be `--ntasks-per-node` processes started on each node |
-    | `--ntasks-per-node=<N>`    | number of tasks per allocated node to start (default: 1) |
-    | `-c, --cpus-per-task=<N>`  | number of CPUs per task; needed for multithreaded (e.g. OpenMP) jobs; typically `N` should be equal to `OMP_NUM_THREADS` |
-    | `-p, --partition=<name>`   | type of nodes where you want to execute your job (refer to [partitions](partitions_and_limits.md)) |
-    | `--mem-per-cpu=<size>`     | memory need per allocated CPU in MB |
-    | `-t, --time=<HH:MM:SS>`    | maximum runtime of the job |
-    | `--mail-user=<your email>` | get updates about the status of the jobs |
-    | `--mail-type=ALL`          | for what type of events you want to get a mail; valid options: `ALL`, `BEGIN`, `END`, `FAIL`, `REQUEUE` |
-    | `-J, --job-name=<name>`    | name of the job shown in the queue and in mails (cut after 24 chars) |
-    | `--no-requeue`             | disable requeueing of the job in case of node failure (default: enabled) |
-    | `--exclusive`              | exclusive usage of compute nodes; you will be charged for all CPUs/cores on the node |
-    | `-A, --account=<project>`  | charge resources used by this job to the specified project |
-    | `-o, --output=<filename>`  | file to save all normal output (stdout) (default: `slurm-%j.out`) |
-    | `-e, --error=<filename>`   | file to save all error output (stderr)  (default: `slurm-%j.out`) |
-    | `-a, --array=<arg>`        | submit an array job ([examples](slurm_examples.md#array-jobs)) |
-    | `-w <node1>,<node2>,...`   | restrict job to run on specific nodes only |
-    | `-x <node1>,<node2>,...`   | exclude specific nodes from job |
+    | `-n, --ntasks=<N>`         | Number of (MPI) tasks (default: 1) |
+    | `-N, --nodes=<N>`          | Number of nodes; there will be `--ntasks-per-node` processes started on each node |
+    | `--ntasks-per-node=<N>`    | Number of tasks per allocated node to start (default: 1) |
+    | `-c, --cpus-per-task=<N>`  | Number of CPUs per task; needed for multithreaded (e.g. OpenMP) jobs; typically `N` should be equal to `OMP_NUM_THREADS` |
+    | `-p, --partition=<name>`   | Type of nodes where you want to execute your job (refer to [partitions](partitions_and_limits.md)) |
+    | `--mem-per-cpu=<size>`     | Memory required per allocated CPU in MB |
+    | `-t, --time=<HH:MM:SS>`    | Maximum runtime of the job |
+    | `--mail-user=<your email>` | Get updates about the status of the jobs |
+    | `--mail-type=ALL`          | For what type of events you want to get a mail; valid options: `ALL`, `BEGIN`, `END`, `FAIL`, `REQUEUE` |
+    | `-J, --job-name=<name>`    | Name of the job shown in the queue and in mails (cut after 24 chars) |
+    | `--no-requeue`             | Disable requeueing of the job in case of node failure (default: enabled) |
+    | `--exclusive`              | Exclusive usage of compute nodes; you will be charged for all CPUs/cores on the node |
+    | `-A, --account=<project>`  | Charge resources used by this job to the specified project |
+    | `-o, --output=<filename>`  | File to save all normal output (stdout) (default: `slurm-%j.out`) |
+    | `-e, --error=<filename>`   | File to save all error output (stderr)  (default: `slurm-%j.out`) |
+    | `-a, --array=<arg>`        | Submit an array job ([examples](slurm_examples.md#array-jobs)) |
+    | `-w <node1>,<node2>,...`   | Restrict job to run on specific nodes only |
+    | `-x <node1>,<node2>,...`   | Exclude specific nodes from job |
 
 !!! note "Output and Error Files"
 
@@ -109,19 +109,19 @@ requirements and control communication.
 ### Host List
 
 If you want to place your job onto specific nodes, there are two options for doing this. Either use
-`-p, --partion=<name>` to specify a host group aka. [partition](partitions_and_limits.md) that fits
-your needs. Or, use `-w, --nodelist=<host1,host2,..>`) with a list of hosts that will work for you.
+`-p, --partition=<name>` to specify a host group, aka [partition](partitions_and_limits.md), that fits
+your needs. Or, use `-w, --nodelist=<host1,host2,..>` with a list of hosts that will work for you.
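+
+For example, a command template (with placeholders) restricting a job to specific hosts could look
+like this:
+
+```bash
+marie@login$ srun --nodelist=<host1>,<host2> --ntasks=2 <command>
+```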
 
 ## Interactive Jobs
 
 Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
-the login nodes. For longer interactive sessions you can allocate cores on the compute node with the
-command `salloc`. It takes the same options like `sbatch` to specify the required resources.
+the login nodes. For longer interactive sessions, you can allocate cores on the compute node with
+the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
 
 `salloc` returns a new shell on the node, where you submitted the job. You need to use the command
 `srun` in front of the following commands to have these commands executed on the allocated
 resources. If you allocate more than one task, please be aware that `srun` will run the command on
-each allocated task!
+each allocated task by default!
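+
+For illustration, allocating resources with `salloc` and running a command on them might look like
+this (a sketch; the resource values are arbitrary):
+
+```console
+marie@login$ salloc --ntasks=1 --cpus-per-task=4 --time=01:00:00 --mem-per-cpu=1700
+marie@login$ srun hostname
+```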
 
 The syntax for submitting a job is
 
@@ -132,16 +132,23 @@ marie@login$ srun [options] <command>
 An example of an interactive session looks like:
 
 ```console
-marie@login$ srun --pty -n 1 -c 4 --time=1:00:00 --mem-per-cpu=1700 bash
-marie@login$ srun: job 13598400 queued and waiting for resources
-marie@login$ srun: job 13598400 has been allocated resources
+marie@login$ srun --pty --ntasks=1 --cpus-per-task=4 --time=1:00:00 --mem-per-cpu=1700 bash -l
+srun: job 13598400 queued and waiting for resources
+srun: job 13598400 has been allocated resources
 marie@compute$ # Now, you can start interactive work with e.g. 4 cores
 ```
 
+!!! note "Using `module` commands"
+
+    The [module commands](../software/modules.md) are made available by sourcing the files
+    `/etc/profile` and `~/.bashrc`. This is done automatically by passing the parameter `-l` to your
+    shell, as shown in the example above. If you forgot to add `-l` when submitting the interactive
+    session, don't worry, you can also source these files manually later on.
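+
+    A minimal way to do so within the running session could be:
+
+    ```console
+    marie@compute$ source /etc/profile
+    marie@compute$ source ~/.bashrc
+    ```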
+
 !!! note "Partition `interactive`"
 
     A dedicated partition `interactive` is reserved for short jobs (< 8h) with not more than one job
-    per user. Please check the availability of nodes there with `sinfo -p interactive`.
+    per user. Please check the availability of nodes there with `sinfo --partition=interactive`.
 
 ### Interactive X11/GUI Jobs
 
@@ -176,10 +183,10 @@ Batch jobs are encapsulated within [job files](#job-files) and submitted to the
 environment settings and the commands for executing the application. Using batch jobs and job files
 has multiple advantages:
 
-* You can reproduce your experiments and work, because it's all steps are saved in a file.
+* You can reproduce your experiments and work, because all steps are saved in a file.
 * You can easily share your settings and experimental setup with colleagues.
-* Submit your job file to the scheduling system for later execution. In the meanwhile, you can grab
-  a coffee and proceed with other work (,e.g., start writing a paper).
+* You can submit your job file to the scheduling system for later execution. In the meantime, you can
+  grab a coffee and proceed with other work (e.g., start writing a paper).
 
 !!! hint "The syntax for submitting a job file to Slurm is"
 
@@ -208,7 +215,7 @@ srun ./application [options]          # Execute parallel application with srun
 ```
 
 The following two examples show the basic resource specifications for a pure OpenMP application and
-a pure MPI application, respectively. Within the section [Job Examples](slurm_examples.md) we
+a pure MPI application, respectively. Within the section [Job Examples](slurm_examples.md), we
 provide a comprehensive collection of job examples.
 
 ??? example "Job file OpenMP"
@@ -230,7 +237,7 @@ provide a comprehensive collection of job examples.
     ```
 
     * Submisson: `marie@login$ sbatch batch_script.sh`
-    * Run with fewer CPUs: `marie@login$ sbatch -c 14 batch_script.sh`
+    * Run with fewer CPUs: `marie@login$ sbatch --cpus-per-task=14 batch_script.sh`
 
 ??? example "Job file MPI"
 
@@ -248,7 +255,7 @@ provide a comprehensive collection of job examples.
     ```
 
     * Submisson: `marie@login$ sbatch batch_script.sh`
-    * Run with fewer MPI tasks: `marie@login$ sbatch --ntasks 14 batch_script.sh`
+    * Run with fewer MPI tasks: `marie@login$ sbatch --ntasks=14 batch_script.sh`
 
 ## Manage and Control Jobs
 
@@ -289,14 +296,14 @@ marie@login$ whypending <jobid>
 ### Editing Jobs
 
 Jobs that have not yet started can be altered. Using `scontrol update timelimit=4:00:00
-jobid=<jobid>` it is for example possible to modify the maximum runtime. `scontrol` understands many
-different options, please take a look at the [man page](https://slurm.schedmd.com/scontrol.html) for
-more details.
+jobid=<jobid>`, it is for example possible to modify the maximum runtime. `scontrol` understands
+many different options, please take a look at the
+[scontrol documentation](https://slurm.schedmd.com/scontrol.html) for more details.
 
 ### Canceling Jobs
 
 The command `scancel <jobid>` kills a single job and removes it from the queue. By using `scancel -u
-<username>` you can send a canceling signal to all of your jobs at once.
+<username>`, you can send a canceling signal to all of your jobs at once.
 
 ### Accounting
 
@@ -317,34 +324,34 @@ marie@login$ sacct
 [...]
 ```
 
-We'd like to point your attention to the following options gain insight in your jobs.
+We'd like to draw your attention to the following options to gain insight into your jobs.
 
 ??? example "Show specific job"
 
     ```console
-    marie@login$ sacct -j <JOBID>
+    marie@login$ sacct --jobs=<JOBID>
     ```
 
 ??? example "Show all fields for a specific job"
 
     ```console
-    marie@login$ sacct -j <JOBID> -o All
+    marie@login$ sacct --jobs=<JOBID> --format=All
     ```
 
 ??? example "Show specific fields"
 
     ```console
-    marie@login$ sacct -j <JOBID> -o JobName,MaxRSS,MaxVMSize,CPUTime,ConsumedEnergy
+    marie@login$ sacct --jobs=<JOBID> --format=JobName,MaxRSS,MaxVMSize,CPUTime,ConsumedEnergy
     ```
 
-The manual page (`man sacct`) and the [online reference](https://slurm.schedmd.com/sacct.html)
+The manual page (`man sacct`) and the [sacct online reference](https://slurm.schedmd.com/sacct.html)
 provide a comprehensive documentation regarding available fields and formats.
 
 !!! hint "Time span"
 
     By default, `sacct` only shows data of the last day. If you want to look further into the past
-    without specifying an explicit job id, you need to provide a start date via the `-S` option.
-    A certain end date is also possible via `-E`.
+    without specifying an explicit job id, you need to provide a start date via the option
+    `--starttime` (or short: `-S`). An end date can also be specified via `--endtime` (or `-E`).
 
 ??? example "Show all jobs since the beginning of year 2021"
 
@@ -356,7 +363,7 @@ provide a comprehensive documentation regarding available fields and formats.
 
 How to ask for a reservation is described in the section
 [reservations](overview.md#exclusive-reservation-of-hardware).
-After we agreed with your requirements, we will send you an e-mail with your reservation name. Then
+After we have agreed on your requirements, we will send you an e-mail with your reservation name. Then,
 you could see more information about your reservation with the following command:
 
 ```console
@@ -387,7 +394,7 @@ constraints, please refer to the [Slurm documentation](https://slurm.schedmd.com
 
 | Feature | Description                                                              |
 |:--------|:-------------------------------------------------------------------------|
-| DA      | subset of Haswell nodes with a high bandwidth to NVMe storage (island 6) |
+| DA      | Subset of Haswell nodes with a high bandwidth to NVMe storage (island 6) |
 
 #### Filesystem Features
 
diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index 98bbdfaa342d1a1a0277e06b6a5ca16b3e9ba10a..b3c6733bc0c7150eeee561ec450d33a7db27d54a 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -55,90 +55,91 @@ The Parameter Server holds the parameters and is responsible for updating
 the global state of the models.
 Each worker runs the training loop independently.
 
-#### Example
-
-In this case, we will go through an example with Multi Worker Mirrored Strategy.
-Multi-node training requires a `TF_CONFIG` environment variable to be set which will
-be different on each node.
-
-```console
-marie@compute$ TF_CONFIG='{"cluster": {"worker": ["10.1.10.58:12345", "10.1.10.250:12345"]}, "task": {"index": 0, "type": "worker"}}' python main.py
-```
-
-The `cluster` field describes how the cluster is set up (same on each node).
-Here, the cluster has two nodes referred to as workers.
-The `IP:port` information is listed in the `worker` array.
-The `task` field varies from node to node.
-It specifies the type and index of the node.
-In this case, the training job runs on worker 0, which is `10.1.10.58:12345`.
-We need to adapt this snippet for each node.
-The second node will have `'task': {'index': 1, 'type': 'worker'}`.
-
-With two modifications, we can parallelize the serial code:
-We need to initialize the distributed strategy:
-
-```python
-strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
-```
-
-And define the model under the strategy scope:
-
-```python
-with strategy.scope():
-    model = resnet.resnet56(img_input=img_input, classes=NUM_CLASSES)
-    model.compile(
-        optimizer=opt,
-        loss='sparse_categorical_crossentropy',
-        metrics=['sparse_categorical_accuracy'])
-model.fit(train_dataset,
-    epochs=NUM_EPOCHS)
-```
-
-To run distributed training, the training script needs to be copied to all nodes,
-in this case on two nodes.
-TensorFlow is available as a module.
-Check for the version.
-The `TF_CONFIG` environment variable can be set as a prefix to the command.
-Now, run the script on the partition `alpha` simultaneously on both nodes:
-
-```bash
-#!/bin/bash
-
-#SBATCH --job-name=distr
-#SBATCH --partition=alpha
-#SBATCH --output=%j.out
-#SBATCH --error=%j.err
-#SBATCH --mem=64000
-#SBATCH --nodes=2
-#SBATCH --ntasks=2
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=14
-#SBATCH --gres=gpu:1
-#SBATCH --time=01:00:00
-
-function print_nodelist {
+??? example "Multi Worker Mirrored Strategy"
+
+    In this case, we will go through an example with Multi Worker Mirrored Strategy.
+    Multi-node training requires a `TF_CONFIG` environment variable to be set which will
+    be different on each node.
+
+    ```console
+    marie@compute$ TF_CONFIG='{"cluster": {"worker": ["10.1.10.58:12345", "10.1.10.250:12345"]}, "task": {"index": 0, "type": "worker"}}' python main.py
+    ```
+
+    The `cluster` field describes how the cluster is set up (same on each node).
+    Here, the cluster has two nodes referred to as workers.
+    The `IP:port` information is listed in the `worker` array.
+    The `task` field varies from node to node.
+    It specifies the type and index of the node.
+    In this case, the training job runs on worker 0, which is `10.1.10.58:12345`.
+    We need to adapt this snippet for each node.
+    The second node will have `'task': {'index': 1, 'type': 'worker'}`.
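+
+    For example, the corresponding call on the second node would then be (a sketch):
+
+    ```console
+    marie@compute$ TF_CONFIG='{"cluster": {"worker": ["10.1.10.58:12345", "10.1.10.250:12345"]}, "task": {"index": 1, "type": "worker"}}' python main.py
+    ```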
+
+    With two modifications, we can parallelize the serial code:
+    We need to initialize the distributed strategy:
+
+    ```python
+    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
+    ```
+
+    And define the model under the strategy scope:
+
+    ```python
+    with strategy.scope():
+        model = resnet.resnet56(img_input=img_input, classes=NUM_CLASSES)
+        model.compile(
+            optimizer=opt,
+            loss='sparse_categorical_crossentropy',
+            metrics=['sparse_categorical_accuracy'])
+    model.fit(train_dataset,
+        epochs=NUM_EPOCHS)
+    ```
+
+    To run distributed training, the training script needs to be copied to all nodes,
+    in this case on two nodes.
+    TensorFlow is available as a module.
+    Check for the version.
+    The `TF_CONFIG` environment variable can be set as a prefix to the command.
+    Now, run the script on the partition `alpha` simultaneously on both nodes:
+
+    ```bash
+    #!/bin/bash
+
+    #SBATCH --job-name=distr
+    #SBATCH --partition=alpha
+    #SBATCH --output=%j.out
+    #SBATCH --error=%j.err
+    #SBATCH --mem=64000
+    #SBATCH --nodes=2
+    #SBATCH --ntasks=2
+    #SBATCH --ntasks-per-node=1
+    #SBATCH --cpus-per-task=14
+    #SBATCH --gres=gpu:1
+    #SBATCH --time=01:00:00
+
+    function print_nodelist {
         scontrol show hostname $SLURM_NODELIST
-}
-NODE_1=$(print_nodelist | awk '{print $1}' | sort -u | head -n 1)
-NODE_2=$(print_nodelist | awk '{print $1}' | sort -u | tail -n 1)
-IP_1=$(dig +short ${NODE_1}.taurus.hrsk.tu-dresden.de)
-IP_2=$(dig +short ${NODE_2}.taurus.hrsk.tu-dresden.de)
+    }
+    NODE_1=$(print_nodelist | awk '{print $1}' | sort -u | head -n 1)
+    NODE_2=$(print_nodelist | awk '{print $1}' | sort -u | tail -n 1)
+    IP_1=$(dig +short ${NODE_1}.taurus.hrsk.tu-dresden.de)
+    IP_2=$(dig +short ${NODE_2}.taurus.hrsk.tu-dresden.de)
 
-module load modenv/hiera
-module load modenv/hiera GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 TensorFlow/2.4.1
+    module load modenv/hiera
+    module load modenv/hiera GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 TensorFlow/2.4.1
 
-# On the first node
-TF_CONFIG='{"cluster": {"worker": ["'"${NODE_1}"':33562", "'"${NODE_2}"':33561"]}, "task": {"index": 0, "type": "worker"}}' srun -w ${NODE_1} -N 1 --ntasks=1 --gres=gpu:1 python main_ddl.py &
+    # On the first node
+    TF_CONFIG='{"cluster": {"worker": ["'"${NODE_1}"':33562", "'"${NODE_2}"':33561"]}, "task": {"index": 0, "type": "worker"}}' srun -w ${NODE_1} -N 1 --ntasks=1 --gres=gpu:1 python main_ddl.py &
 
-# On the second node
-TF_CONFIG='{"cluster": {"worker": ["'"${NODE_1}"':33562", "'"${NODE_2}"':33561"]}, "task": {"index": 1, "type": "worker"}}' srun -w ${NODE_2} -N 1 --ntasks=1 --gres=gpu:1 python main_ddl.py &
+    # On the second node
+    TF_CONFIG='{"cluster": {"worker": ["'"${NODE_1}"':33562", "'"${NODE_2}"':33561"]}, "task": {"index": 1, "type": "worker"}}' srun -w ${NODE_2} -N 1 --ntasks=1 --gres=gpu:1 python main_ddl.py &
 
-wait
-```
+    wait
+    ```
 
 ### Distributed PyTorch
 
 !!! note
+
     This section is under construction
 
 PyTorch provides multiple ways to achieve data parallelism to train the deep learning models
@@ -179,23 +180,21 @@ See: Use `nn.parallel.DistributedDataParallel` instead of multiprocessing or `nn
 Check the [page](https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead) and
 [Distributed Data Parallel](https://pytorch.org/docs/stable/notes/ddp.html#ddp).
 
-Examples:
+??? example "Parallel Model"
 
-1. The parallel model.
-The main aim of this model to show the way how to effectively implement your
-neural network on several GPUs.
-It includes a comparison of different kinds of models and tips to improve the performance
-of your model.
-**Necessary** parameters for running this model are **2 GPU** and 14 cores.
+    The main aim of this model is to show how to effectively implement your neural network
+    on multiple GPUs. It includes a comparison of different kinds of models and tips to improve the
+    performance of your model.
+    **Necessary** parameters for running this model are **2 GPUs** and 14 cores.
 
-(example_PyTorch_parallel.zip)
+    Download: [example_PyTorch_parallel.zip (4.2 KB)](misc/example_PyTorch_parallel.zip)
 
-Remember that for using [JupyterHub service](../access/jupyterhub.md) for PyTorch you need to
-create and activate a virtual environment (kernel) with loaded essential modules.
+    Remember that for using [JupyterHub service](../access/jupyterhub.md) for PyTorch, you need to
+    create and activate a virtual environment (kernel) with loaded essential modules.
 
-Run the example in the same way as the previous examples.
+    Run the example in the same way as the previous examples.
 
-#### Distributed data-parallel
+#### Distributed Data-Parallel
 
 [DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel)
 (DDP) implements data parallelism at the module level which can run across multiple machines.
@@ -206,21 +205,21 @@ synchronize gradients and buffers.
 
 Please also look at the [official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
 
-To use distributed data parallelism on ZIH systems, please make sure the `--ntasks-per-node`
-parameter is equal to the number of GPUs you use per node.
+To use distributed data parallelism on ZIH systems, please make sure the value of
+parameter `--ntasks-per-node=<N>` equals the number of GPUs you use per node.
 Also, it can be useful to increase `memory/cpu` parameters if you run larger models.
 Memory can be set up to:
 
 - `--mem=250G` and `--cpus-per-task=7` for the partition `ml`.
 - `--mem=60G` and `--cpus-per-task=6` for the partition `gpu2`.
 
-Keep in mind that only one memory parameter (`--mem-per-cpu=<MB>` or `--mem=<MB>`) can be specified
+Keep in mind that only one memory parameter (`--mem-per-cpu=<MB>` or `--mem=<MB>`) can be specified.
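+
+A minimal job file sketch for the partition `ml` (assuming, for illustration, two GPUs and thus two
+tasks per node) could look like this:
+
+```bash
+#!/bin/bash
+#SBATCH --partition=ml
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=2
+#SBATCH --gres=gpu:2
+#SBATCH --cpus-per-task=7
+#SBATCH --mem=250G
+#SBATCH --time=01:00:00
+
+srun python <your_program.py>
+```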
 
 ## External Distribution
 
 ### Horovod
 
-[Horovod](https://github.com/horovod/horovod) is the open source distributed training framework
+[Horovod](https://github.com/horovod/horovod) is the open-source distributed training framework
 for TensorFlow, Keras and PyTorch.
 It makes it easier to develop distributed deep learning projects and speeds them up.
 Horovod scales well to a large number of nodes and has a strong focus on efficient training on
@@ -235,7 +234,7 @@ the distributed code from TensorFlow for instance, with parameter servers.
 Horovod uses MPI and NCCL which gives in some cases better results than
 pure TensorFlow and PyTorch.
 
-#### Horovod as a module
+#### Horovod as a Module
 
 Horovod is available as a module with **TensorFlow** or **PyTorch** for
 **all** module environments.
@@ -260,19 +259,19 @@ marie@compute$ module load Horovod/0.19.5-fosscuda-2019b-TensorFlow-2.2.0-Python
 
 Or if you want to use Horovod on the partition `alpha`, you can load it with the dependencies:
 
-```bash
+```console
 marie@alpha$ module spider Horovod                         #Check available modules
 marie@alpha$ module load modenv/hiera  GCC/10.2.0  CUDA/11.1.1  OpenMPI/4.0.5 Horovod/0.21.1-TensorFlow-2.4.1
 ```
 
-#### Horovod installation
+#### Horovod Installation
 
 However, if it is necessary to use another version of Horovod, it is possible to install it
 manually. For that, you need to create a [virtual environment](python_virtual_environments.md) and
 load the dependencies (e.g. MPI).
 Installing TensorFlow can take a few hours and is not recommended.
 
-##### Install Horovod for TensorFlow with python and pip
+##### Install Horovod for TensorFlow with Python and Pip
 
 This example shows the installation of Horovod for TensorFlow.
 Adapt as required and refer to the [Horovod documentation](https://horovod.readthedocs.io/en/stable/install_include.html)
@@ -299,13 +298,12 @@ Available Tensor Operations:
     [ ] CCL
     [X] MPI
     [ ] Gloo
-
 ```
 
 If you want to use OpenMPI then specify `HOROVOD_GPU_ALLREDUCE=MPI`.
 To have better performance it is recommended to use NCCL instead of OpenMPI.
 
-##### Verify that Horovod works
+##### Verify Horovod Works
 
 ```pycon
 >>> import tensorflow
@@ -320,29 +318,30 @@ To have better performance it is recommended to use NCCL instead of OpenMPI.
 Hello from: 0
 ```
 
-#### Example
-
-Follow the steps in the [official examples](https://github.com/horovod/horovod/tree/master/examples)
-to parallelize your code.
-In Horovod, each GPU gets pinned to a process.
-You can easily start your job with the following bash script with four processes on two nodes:
-
-```bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --ntasks=4
-#SBATCH --ntasks-per-node=2
-#SBATCH --gres=gpu:2
-#SBATCH --partition=ml
-#SBATCH --mem=250G
-#SBATCH --time=01:00:00
-#SBATCH --output=run_horovod.out
-
-module load modenv/ml
-module load Horovod/0.19.5-fosscuda-2019b-TensorFlow-2.2.0-Python-3.7.4
-
-srun python <your_program.py>
-```
-
-Do not forget to specify the total number of tasks `--ntasks` and the number of tasks per node
-`--ntasks-per-node` which must match the number of GPUs per node.
+??? example
+
+    Follow the steps in the
+    [official examples](https://github.com/horovod/horovod/tree/master/examples)
+    to parallelize your code.
+    In Horovod, each GPU gets pinned to a process.
+    You can easily start your job with the following bash script with four processes on two nodes:
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --ntasks=4
+    #SBATCH --ntasks-per-node=2
+    #SBATCH --gres=gpu:2
+    #SBATCH --partition=ml
+    #SBATCH --mem=250G
+    #SBATCH --time=01:00:00
+    #SBATCH --output=run_horovod.out
+
+    module load modenv/ml
+    module load Horovod/0.19.5-fosscuda-2019b-TensorFlow-2.2.0-Python-3.7.4
+
+    srun python <your_program.py>
+    ```
+
+    Do not forget to specify the total number of tasks `--ntasks` and the number of tasks per node
+    `--ntasks-per-node`, which must match the number of GPUs per node.
diff --git a/Compendium_attachments/PyTorch/example_PyTorch_parallel.zip b/doc.zih.tu-dresden.de/docs/software/misc/example_PyTorch_parallel.zip
similarity index 100%
rename from Compendium_attachments/PyTorch/example_PyTorch_parallel.zip
rename to doc.zih.tu-dresden.de/docs/software/misc/example_PyTorch_parallel.zip
diff --git a/doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md b/doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md
index 0b03ddf927aeed68d8726797ed04db373d24b9b3..fbec2e51bc453cc17e2d131d7229c50ff90aa23f 100644
--- a/doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md
+++ b/doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md
@@ -12,12 +12,12 @@ it to ZIH systems for execution.
 **This does not work on the partition `ml`** as it uses the Power9 architecture which your
 workstation likely doesn't.
 
-For this we provide a Virtual Machine (VM) on the partition `ml` which allows users to gain root
-permissions in an isolated environment. The workflow to use this manually is described at
-[this page](virtual_machines.md) but is quite cumbersome.
+For this, we provide a Virtual Machine (VM) on the partition `ml` which allows users to gain root
+permissions in an isolated environment. The workflow to use this manually is described for
+[virtual machines](virtual_machines.md) but is quite cumbersome.
 
-To make this easier two programs are provided: `buildSingularityImage` and `startInVM` which do what
-they say. The latter is for more advanced use cases so you should be fine using
+To make this easier, two programs are provided: `buildSingularityImage` and `startInVM`, which do
+what they say. The latter is for more advanced use cases, so you should be fine using
 `buildSingularityImage`, see the following section.
 
 !!! note "SSH key without password"
@@ -28,43 +28,48 @@ they say. The latter is for more advanced use cases so you should be fine using
 **The recommended workflow** is to create and test a definition file locally. You usually start from
 a base Docker container. Those typically exist for different architectures but with a common name
 (e.g.  `ubuntu:18.04`). Singularity automatically uses the correct Docker container for your current
-architecture when building. So in most cases you can write your definition file, build it and test
+architecture when building. So, in most cases, you can write your definition file, build it and test
 it locally, then move it to ZIH systems and build it on Power9 (partition `ml`) without any further
 changes. However, sometimes Docker containers for different architectures have different suffixes,
 in which case you'd need to change that when moving to ZIH systems.
 
 ## Build a Singularity Container in a Job
 
-To build a Singularity container on ZIH systems simply run:
+To build a Singularity container for the Power9 architecture on ZIH systems, simply run:
 
 ```console
 marie@login$ buildSingularityImage --arch=power9 myContainer.sif myDefinition.def
 ```
 
-This command will submit a batch job and immediately return. Note that while Power9 is currently
-the only supported architecture, the parameter is still required. If you want it to block while the
+To build a Singularity image for the x86 architecture, run:
+
+```console
+marie@login$ buildSingularityImage --arch=x86 myContainer.sif myDefinition.def
+```
+
+These commands will submit a batch job and immediately return. If you want it to block while the
 image is built and see live output, add the option `--interactive`:
 
 ```console
 marie@login$ buildSingularityImage --arch=power9 --interactive myContainer.sif myDefinition.def
 ```
 
-There are more options available which can be shown by running `buildSingularityImage --help`. All
-have reasonable defaults.The most important ones are:
+There are more options available, which can be shown by running `buildSingularityImage --help`. All
+have reasonable defaults. The most important ones are:
 
 * `--time <time>`: Set a higher job time if the default time is not
   enough to build your image and your job is canceled before completing. The format is the same as
   for Slurm.
 * `--tmp-size=<size in GB>`: Set a size used for the temporary
-  location of the Singularity container. Basically the size of the extracted container.
+  location of the Singularity container, basically the size of the extracted container.
 * `--output=<file>`: Path to a file used for (log) output generated
   while building your container.
 * Various Singularity options are passed through. E.g.
   `--notest, --force, --update`. See, e.g., `singularity --help` for details.
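+
+For illustration, a possible invocation combining some of these options (the values are arbitrary)
+could be:
+
+```console
+marie@login$ buildSingularityImage --arch=power9 --time 08:00:00 --tmp-size=20 --output=build.log myContainer.sif myDefinition.def
+```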
 
-For **advanced users** it is also possible to manually request a job with a VM (`srun -p ml
+For **advanced users**, it is also possible to manually request a job with a VM (`srun --partition=ml
 --cloud=kvm ...`) and then use this script to build a Singularity container from within the job. In
-this case the `--arch` and other Slurm related parameters are not required. The advantage of using
+this case, the `--arch` and other Slurm-related parameters are not required. The advantage of using
 this script is that it automates the waiting for the VM and mounting of host directories into it
 (can also be done with `startInVM`) and creates a temporary directory usable with Singularity inside
 the VM controlled by the `--tmp-size` parameter.
@@ -73,21 +78,22 @@ the VM controlled by the `--tmp-size` parameter.
 
 **Read here if you have problems like "File not found".**
 
-As the build starts in a VM you may not have access to all your files.  It is usually bad practice
+As the build starts in a VM, you may not have access to all your files. It is usually bad practice
 to refer to local files from inside a definition file anyway as this reduces reproducibility.
-However common directories are available by default. For others, care must be taken. In short:
+However, common directories are available by default. For others, care must be taken. In short:
 
-* `/home/$USER`, `/scratch/$USER` are available and should be used `/scratch/\<group>` also works for
-* all groups the users is in `/projects/\<group>` similar, but is read-only! So don't use this to
-  store your generated container directly, but rather move it here afterwards
-* /tmp is the VM local temporary directory. All files put here will be lost!
+* `/home/$USER` and `/scratch/$USER` are available and should be used. `/scratch/<group>` also works
+  for all groups the user is in.
+* `/projects/<group>` is similar, but read-only! So don't use this to store your generated
+  container directly, but rather move it there afterwards.
+* `/tmp` is the VM local temporary directory. All files put here will be lost!
 
 If the current directory is inside (or equal to) one of the above (except `/tmp`), then relative paths
 for container and definition work as the script changes to the VM equivalent of the current
+directory. Otherwise, you need to use absolute paths. Using `~` in place of `$HOME` does work too.
+directory.  Otherwise, you need to use absolute paths. Using `~` in place of `$HOME` does work too.
 
-Under the hood, the filesystem of ZIH systems is mounted via SSHFS at `/host_data`, so if you need any
-other files they can be found there.
+Under the hood, the filesystem of ZIH systems is mounted via SSHFS at `/host_data`. So if you need any
+other files, they can be found there.
 
 There is also a new SSH key named `kvm` which is created by the scripts and authorized inside the VM
 to allow for password-less access to SSHFS. This is stored at `~/.ssh/kvm` and regenerated if it
@@ -98,26 +104,32 @@ needs to be re-generated on every script run.
 
 ## Start a Job in a VM
 
-Especially when developing a Singularity definition file it might be useful to get a shell directly
-on a VM. To do so simply run:
+Especially when developing a Singularity definition file, it might be useful to get a shell directly
+on a VM. To do so on the Power9 architecture, simply run:
 
 ```console
 startInVM --arch=power9
 ```
 
+To do so on the x86 architecture, run:
+
+```console
+startInVM --arch=x86
+```
+
 This will execute an `srun` command with the `--cloud=kvm` parameter, wait till the VM is ready,
 mount all folders (just like `buildSingularityImage`, see the Filesystem section above) and come
 back with a bash inside the VM. Inside that you are root, so you can directly execute `singularity
 build` commands.
 
-As usual more options can be shown by running `startInVM --help`, the most important one being
+As usual, more options can be shown by running `startInVM --help`, the most important one being
 `--time`.
 
 There are two special use cases for this script:
 
 1. Execute an arbitrary command inside the VM instead of getting a bash by appending the command to
-   the script. Example: `startInVM --arch=power9 singularity build \~/myContainer.sif  \~/myDefinition.de`
+   the script. Example: `startInVM --arch=power9 singularity build ~/myContainer.sif ~/myDefinition.def`
 1. Use the script in a job manually allocated via srun/sbatch. This will work the same as when
    running outside a job but will **not** start a new job. This is useful for using it inside batch
-   scripts, when you already have an allocation or need special arguments for the job system. Again
+   scripts, when you already have an allocation or need special arguments for the job system. Again,
    you can run an arbitrary command by passing it to the script.
diff --git a/doc.zih.tu-dresden.de/requirements.txt b/doc.zih.tu-dresden.de/requirements.txt
deleted file mode 100644
index 272b09c7c7ffb6b945eaa66e14e2e695f5502f17..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/requirements.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-# Documentation static site generator & deployment tool
-mkdocs>=1.1.2
-
-# Add custom theme if not inside a theme_dir
-# (https://github.com/mkdocs/mkdocs/wiki/MkDocs-Themes)
-mkdocs-material>=7.1.0
diff --git a/doc.zih.tu-dresden.de/util/check-bash-syntax.sh b/doc.zih.tu-dresden.de/util/check-bash-syntax.sh
index e5681413d14771e8fb144a2684161cf5e7c1edae..9f31effee3ebc3380af5ca892047aca6a9357139 100755
--- a/doc.zih.tu-dresden.de/util/check-bash-syntax.sh
+++ b/doc.zih.tu-dresden.de/util/check-bash-syntax.sh
@@ -47,12 +47,12 @@ branch="origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME:-preview}"
 
 if [ $all_files = true ]; then
   echo "Search in all bash files."
-  files=$(git ls-tree --full-tree -r --name-only HEAD $basedir/docs/ | grep .sh)
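+  # "|| true" prevents a non-zero exit status when grep finds no matching files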
+  files=`git ls-tree --full-tree -r --name-only HEAD $basedir/docs/ | grep .sh || true`
 elif [[ ! -z $file ]]; then
   files=$file
 else
   echo "Search in git-changed files."
-  files=`git diff --name-only "$(git merge-base HEAD "$branch")" | grep .sh`
+  files=`git diff --name-only "$(git merge-base HEAD "$branch")" | grep .sh || true`
 fi
 
 
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index c54073a4814792a22cd904a36266f46e4cfef91f..a34ccf8cbe586522b9a6c0ee7d8f201d030a4ae2 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -96,6 +96,7 @@ GitHub
 GitLab
 GitLab's
 glibc
+Gloo
 gnuplot
 gpu
 GPU
@@ -268,6 +269,7 @@ Rsync
 runnable
 runtime
 Runtime
+sacct
 salloc
 Sandybridge
 Saxonid