diff --git a/README.md b/README.md
index 05825be788b1d0e0d6436454e6aa0849d28d93c3..d3482f3ae680798e81cdd2ea7814eeadb4abe57d 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ within the CI/CD pipeline help to ensure a high quality documentation.
 ## Reporting Issues
 
 Issues concerning this documentation can reported via the GitLab
-[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/issues).
+[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
 Please check for any already existing issue before submitting your issue in order to avoid duplicate
 issues.
 
diff --git a/.markdownlintrc b/doc.zih.tu-dresden.de/.markdownlintrc
similarity index 100%
rename from .markdownlintrc
rename to doc.zih.tu-dresden.de/.markdownlintrc
diff --git a/doc.zih.tu-dresden.de/README.md b/doc.zih.tu-dresden.de/README.md
index 31344cece97859451158faa45a172ebcacea1752..1829a5bc54c26ce37f61f27410e45e8901488183 100644
--- a/doc.zih.tu-dresden.de/README.md
+++ b/doc.zih.tu-dresden.de/README.md
@@ -9,7 +9,7 @@ long describing complex steps, contributing is quite easy - trust us.
 ## Contribute via Issue
 
 Users can contribute to the documentation via the
-[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/issues).
+[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
 For that, open an issue to report typos and missing documentation or request for more precise
 wording etc.  ZIH staff will get in touch with you to resolve the issue and improve the
 documentation.
@@ -120,14 +120,20 @@ cd /PATH/TO/hpc-compendium
 docker build -t hpc-compendium .
 ```
 
+To avoid a lot of retyping, define the following alias in your shell while you are in the top-level
+directory of your local clone (`$PWD` is expanded when the alias is defined):
+
+```bash
+alias wiki="docker run --name=hpc-compendium --rm -it -w /docs --mount src=$PWD/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c"
+```
+
 If you want to see how it looks in your browser, you can use shell commands to serve
 the documentation:
 
 ```Bash
-docker run --name=hpc-compendium -p 8000:8000 --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c "mkdocs build --verbose && mkdocs serve -a 0.0.0.0:8000"
+wiki "mkdocs build --verbose && mkdocs serve -a 0.0.0.0:8000"
 ```
 
-You can view the documentation via [http://localhost:8000](http://localhost:8000) in your browser, now.
+You can now view the documentation at `http://localhost:8000` in your browser.
 
 If that does not work, check if you can get the URL for your browser's address
 bar from a different terminal window:
@@ -141,32 +147,32 @@ documentation.  If you want to check whether the markdown files are formatted
 properly, use the following command:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdownlint docs
+wiki 'markdownlint docs'
 ```
 
 To check whether there are links that point to a wrong target, use
 (this may take a while and gives a lot of output because it runs over all files):
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c "find docs -type f -name '*.md' | xargs -L1 markdown-link-check"
+wiki "find docs -type f -name '*.md' | xargs -L1 markdown-link-check"
 ```
 
-To check a single file, e. g. `doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md`, use:
+To check a single file, e.g. `doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md`, use:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdown-link-check docs/software/big_data_frameworks.md
+wiki 'markdown-link-check docs/software/big_data_frameworks_spark.md'
 ```
 
 For spell-checking a single file, use:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium ./util/check-spelling.sh <file>
+wiki 'util/check-spelling.sh <file>'
 ```
 
 For spell-checking all files, use:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium ./util/check-spelling.sh
+docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)",target=/docs,type=bind hpc-compendium ./doc.zih.tu-dresden.de/util/check-spelling.sh
 ```
 
 This outputs all words of all files that are unknown to the spell checker.
@@ -194,7 +200,7 @@ locally on the documentation. At first, you should add a remote pointing to the
 documentation.
 
 ```Shell Session
-~ git remote add upstream-zih git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git
+~ git remote add upstream-zih git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git
 ```
 
 Now, you have two remotes, namely *origin* and *upstream-zih*. The remote *origin* points to your fork,
@@ -204,8 +210,8 @@ whereas *upstream-zih* points to the original documentation repository at GitLab
 $ git remote -v
 origin  git@gitlab.hrz.tu-chemnitz.de:LOGIN/hpc-compendium.git (fetch)
 origin  git@gitlab.hrz.tu-chemnitz.de:LOGIN/hpc-compendium.git (push)
-upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git (fetch)
-upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git (push)
+upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git (fetch)
+upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git (push)
 ```
 
 Next, you should synchronize your `main` branch with the upstream.
@@ -237,7 +243,7 @@ new branch (a so-called feature branch) basing on the `main` branch and commit y
 
 The last command pushes the changes to your remote at branch `FEATUREBRANCH`. Now, it is time to
 incorporate the changes and improvements into the HPC Compendium. For this, create a
-[merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/merge_requests/new)
+[merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/merge_requests/new)
 to the `main` branch.
 
 ### Important Branches
@@ -248,8 +254,8 @@ There are two important branches in this repository:
   - Branch containing recent changes which will be soon merged to main branch (protected
     branch)
   - Served at [todo url](todo url) from TUD VPN
-- Main: Branch which is deployed at [doc.zih.tu-dresden.de](doc.zih.tu-dresden.de) holding the
-    current documentation (protected branch)
+- Main: Branch which is deployed at [https://doc.zih.tu-dresden.de](https://doc.zih.tu-dresden.de)
+    holding the current documentation (protected branch)
 
 If you are totally sure about your commit (e.g., fix a typo), it is only the following steps:
 
@@ -388,13 +394,29 @@ pika.md is not included in nav
 specific_software.md is not included in nav
 ```
 
+### Pre-commit Git Hook
+
+You can automatically run checks whenever you try to commit a change. In this case, failing checks
+prevent commits (unless you use option `--no-verify`). This can be accomplished by adding a
+pre-commit hook to your local clone of the repository. The following code snippet shows how to do
+that:
+
+```bash
+cp doc.zih.tu-dresden.de/util/pre-commit .git/hooks/
+```
+
+!!! note
+    The pre-commit hook only works if you can use docker without `sudo`. If this is not already
+    the case, use the command `adduser $USER docker` to enable docker commands without `sudo` for
+    the current user. Restart the docker daemon afterwards.
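+
+Once the hook is installed, the checks run automatically on every `git commit`. If you deliberately
+want to skip them for a single commit (e.g. an intermediate work-in-progress commit), you can pass
+the standard git option mentioned above:
+
+```bash
+git commit --no-verify -m "WIP: draft, checks skipped on purpose"
+```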
+
 ## Content Rules
 
 **Remark:** Avoid using tabs both in markdown files and in `mkdocs.yaml`. Type spaces instead.
 
 ### New Page and Pages Structure
 
-The pages structure is defined in the configuration file [mkdocs.yaml](doc.zih.tu-dresden.de/mkdocs.yml).
+The pages structure is defined in the configuration file [mkdocs.yaml](mkdocs.yml).
 
 ```Shell Session
 docs/
@@ -453,9 +475,11 @@ there is a list of conventions w.r.t. spelling and technical wording.
 * `I/O` not `IO`
 * `Slurm` not `SLURM`
 * `Filesystem` not `file system`
-* `ZIH system` and `ZIH systems` not `Taurus`, `HRSKII`, `our HPC systems` etc.
+* `ZIH system` and `ZIH systems` not `Taurus`, `HRSKII`, `our HPC systems`, etc.
 * `Workspace` not `work space`
 * avoid term `HPC-DA`
+* Partition names after the keyword *partition*: *partition `ml`* not *ML partition*, *ml
+  partition*, *`ml` partition*, *"ml" partition*, etc.
 
 ### Code Blocks and Command Prompts
 
diff --git a/doc.zih.tu-dresden.de/docs/access/jupyterhub.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
index 6c5d86618e8e105143cfc6ad24cd954a10ce354c..dcdd9363c8d406d7227b97abce91ad67298e9a67 100644
--- a/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
+++ b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
@@ -1,218 +1,183 @@
 # JupyterHub
 
-With our JupyterHub service we offer you now a quick and easy way to
-work with jupyter notebooks on Taurus.
+With our JupyterHub service we offer you a quick and easy way to work with Jupyter notebooks on ZIH
+systems. This page covers starting and stopping JupyterHub sessions, error handling and customizing
+the environment.
 
-Subpages:
-
--   [JupyterHub for Teaching (git-pull feature, quickstart links, direct
-    links to notebook files)](jupyterhub_for_teaching.md)
+We also provide comprehensive documentation on how to use
+[JupyterHub for Teaching (git-pull feature, quickstart links, direct links to notebook files)](jupyterhub_for_teaching.md).
 
 ## Disclaimer
 
-This service is provided "as-is", use at your own discretion. Please
-understand that JupyterHub is a complex software system of which we are
-not the developers and don't have any downstream support contracts for,
-so we merely offer an installation of it but cannot give extensive
-support in every case.
+!!! warning
+
+    The JupyterHub service is provided *as-is*; use it at your own discretion.
+
+Please understand that JupyterHub is a complex software system which we did not develop and for
+which we have no downstream support contracts. We merely offer an installation of it and cannot
+give extensive support in every case.
 
 ## Access
 
-<span style="color:red">**NOTE**</span> This service is only available for users with
-an active HPC project. See [here](../access/overview.md) how to apply for an HPC
-project.
+!!! note
+    This service is only available for users with an active HPC project.
+    See [here](../access/overview.md) how to apply for an HPC project.
 
-JupyterHub is available here:\
-<https://taurus.hrsk.tu-dresden.de/jupyter>
+JupyterHub is available at
+[https://taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter).
 
-## Start a session
+## Start a Session
 
-Start a new session by clicking on the **TODO ADD IMAGE** \<img alt="" height="24"
-src="%ATTACHURL%/start_my_server.png" /> button.
+Start a new session by clicking on the `Start my server` button.
 
 A form opens up where you can customize your session. Our simple form
 offers you the most important settings to start quickly.
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/simple_form.png">\<img alt="session form"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/simple_form.png>"
-style="border: 1px solid #888;" title="simple form" width="400" />\</a>
+![Simple form](misc/simple_form.png)
+{: align="center"}
 
 For advanced users we have an extended form where you can change many
 settings. You can:
 
--   modify Slurm parameters to your needs ( [more about
-    Slurm](../jobs_and_resources/slurm.md))
--   assign your session to a project or reservation
--   load modules from the [LMOD module
-    system](../software/runtime_environment.md)
--   choose a different standard environment (in preparation for future
-    software updates or testing additional features)
+- modify batch system parameters to your needs ([more about batch system Slurm](../jobs_and_resources/slurm.md))
+- assign your session to a project or reservation
+- load modules from the [module system](../software/runtime_environment.md)
+- choose a different standard environment (in preparation for future
+  software updates or testing additional features)
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/advanced_form_nov2019.png">\<img alt="session
-form"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/advanced_form_nov2019.png>"
-style="border: 1px solid #888;" title="advanced form" width="400"
-/>\</a>
+![Advanced form](misc/advanced_form.png)
+{: align="center"}
 
 You can save your own configurations as additional presets. Those are
 saved in your browser and are lost if you delete your browsing data. Use
 the import/export feature (available through the button) to save your
 presets in text files.
 
-Note: the [<span style="color:blue">**alpha**</span>]
-(https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/AlphaCentauri)
-partition is available only in the extended form.
+!!! info
+    The partition [alpha](https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/AlphaCentauri)
+    is available only in the extended form.
 
 ## Applications
 
-You can choose between JupyterLab or the classic notebook app.
+You can choose between JupyterLab and classic Jupyter notebooks as outlined in the following.
 
 ### JupyterLab
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/jupyterlab_app.png">\<img alt="jupyterlab app"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/jupyterlab_app.png>"
-style="border: 1px solid #888;" title="JupyterLab overview" width="400"
-/>\</a>
+![JupyterLab overview](misc/jupyterlab_overview.png)
+{: align="center"}
 
 The main workspace is used for multiple notebooks, consoles or
 terminals. Those documents are organized with tabs and a very versatile
 split screen feature. On the left side of the screen you can open
 several views:
 
--   file manager
--   controller for running kernels and terminals
--   overview of commands and settings
--   details about selected notebook cell
--   list of open tabs
+- file manager
+- controller for running kernels and terminals
+- overview of commands and settings
+- details about selected notebook cell
+- list of open tabs
 
-### Classic notebook
+### Classic Jupyter Notebook
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/jupyter_notebook_app_filebrowser.png">\<img
-alt="filebrowser in jupyter notebook server" width="400"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/jupyter_notebook_app_filebrowser.png>"
-style="border: 1px solid #888;" title="Classic notebook (file browser)"
-/>\</a>
+Initially your `home` directory is listed. You can open existing notebooks or files by navigating to
+the corresponding path and clicking on them.
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/jupyter_notebook_example_matplotlib.png">\<img
-alt="jupyter_notebook_example_matplotlib" width="400"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/jupyter_notebook_example_matplotlib.png>"
-style="border: 1px solid #888;" title="Classic notebook (matplotlib
-demo)" />\</a>
+![Jupyter notebook file browser](misc/jupyter_notebook_file_browser.png)
+{: align="center"}
 
-Initially you will get a list of your home directory. You can open
-existing notebooks or files by clicking on them.
+Above the table on the right side is the button `New` which lets you create new notebooks, files,
+directories or terminals.
 
-Above the table on the right side is the "New ⏷" button which lets you
-create new notebooks, files, directories or terminals.
+![Jupyter notebook example matplotlib](misc/jupyter_notebook_example_matplotlib.png)
+{: align="center"}
 
-## The notebook
+## Jupyter Notebooks in General
 
-In JupyterHub you can create scripts in notebooks.
-Notebooks are programs which are split in multiple logical code blocks.
-In between those code blocks you can insert text blocks for
-documentation and each block can be executed individually. Each notebook
-is paired with a kernel which runs the code. We currently offer one for
-Python, C++, MATLAB and R.
+In JupyterHub you can create scripts in notebooks. Notebooks are programs which are split into
+multiple logical code blocks. In between those code blocks you can insert text blocks for
+documentation, and each block can be executed individually. Each notebook is paired with a kernel
+running the code. We currently offer kernels for Python, C++, MATLAB and R.
 
-## Stop a session
+## Stop a Session
 
-It's good practise to stop your session once your work is done. This
-releases resources for other users and your quota is less charged. If
-you just log out or close the window your server continues running and
-will not stop until the Slurm job runtime hits the limit (usually 8
-hours).
+It is good practice to stop your session once your work is done. This releases resources for other
+users and less of your quota is charged. If you just log out or close the window, your server
+continues running and **will not stop** until the Slurm job runtime hits the limit (usually 8
+hours).
 
 At first you have to open the JupyterHub control panel.
 
-**JupyterLab**: Open the file menu and then click on Logout. You can
-also click on "Hub Control Panel" which opens the control panel in a new
+**JupyterLab**: Open the file menu and then click on `Logout`. You can
+also click on `Hub Control Panel` which opens the control panel in a new
 tab instead.
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/jupyterlab_logout.png">\<img alt="" height="400"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/jupyterlab_logout.png>"
-style="border: 1px solid #888;" title="JupyterLab logout button"/>\</a>
+![JupyterLab logout](misc/jupyterlab_logout.png)
+{: align="center"}
 
-**Classic notebook**: Click on the control panel button on the top right
+**Classic Jupyter notebook**: Click on the control panel button on the top right
 of your screen.
 
-**TODO ADD IMAGE** \<img alt="" src="%ATTACHURL%/notebook_app_control_panel_btn.png"
-style="border: 1px solid #888;" title="Classic notebook (control panel
-button)" />
-
-Now you are back on the JupyterHub page and you can stop your server by
-clicking on **TODO ADD IMAGE** \<img alt="" height="24"
-src="%ATTACHURL%/stop_my_server.png" title="Stop button" />.
+![Jupyter notebook control panel button](misc/jupyter_notebook_control_panel_button.png)
+{: align="center"}
 
-## Error handling
+Now you are back on the JupyterHub page and you can stop your server by clicking on
+![Stop my server](misc/stop_my_server.png)
+{: align="center"}
 
-We want to explain some errors that you might face sooner or later. If
-you need help open a ticket at HPC support.
+## Error Handling
 
-### Error while starting a session
+We want to explain some errors that you might face sooner or later.
+If you need help, open a ticket at [HPC support](mailto:hpcsupport@zih.tu-dresden.de).
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/error_batch_job_submission_failed.png">\<img
-alt="" width="400"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/error_batch_job_submission_failed.png>"
-style="border: 1px solid #888;" title="Error message: Batch job
-submission failed."/>\</a>
+### Error at Session Start
 
-This message often appears instantly if your Slurm parameters are not
-valid. Please check those settings against the available hardware.
-Useful pages for valid Slurm parameters:
+![Error batch job submission failed](misc/error_batch_job_submission_failed.png)
+{: align="center"}
 
--   [Slurm batch system (Taurus)] **TODO LINK** (../jobs_and_resources/SystemTaurus#Batch_System)
--   [General information how to use Slurm](../jobs_and_resources/slurm.md)
+This message appears instantly if your batch system parameters are not valid.
+Please check those settings against the available hardware.
+Useful pages for valid batch system parameters:
 
-### Error message in JupyterLab
+- [General information how to use Slurm](../jobs_and_resources/slurm.md)
+- [Partitions and limits](../jobs_and_resources/partitions_and_limits.md)
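+
+As a quick plausibility check from a login node, you can also list the partitions together with
+their time limits and node counts. This is only a sketch using the standard Slurm command `sinfo`;
+the output format may differ on ZIH systems:
+
+```console
+marie@login$ sinfo --summarize
+```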
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/jupyterlab_error_directory_not_found.png">\<img
-alt="" width="400"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/jupyterlab_error_directory_not_found.png>"
-style="border: 1px solid #888;" title="Error message: Directory not
-found"/>\</a>
+### Error Message in JupyterLab
 
-If the connection to your notebook server unexpectedly breaks you maybe
-will get this error message.
-Sometimes your notebook server might hit a Slurm or hardware limit and
-gets killed. Then usually the logfile of the corresponding Slurm job
-might contain useful information. These logfiles are located in your
-home directory and have the name "jupyter-session-\<jobid>.log".
+![JupyterLab error directory not found](misc/jupyterlab_error_directory_not_found.png)
+{: align="center"}
 
-------------------------------------------------------------------------
+If the connection to your notebook server unexpectedly breaks, you will get this error message.
+Sometimes your notebook server might hit a batch system or hardware limit and gets killed. In that
+case, the logfile of the corresponding batch job usually contains useful information. These
+logfiles are located in your `home` directory and have the name `jupyter-session-<jobid>.log`.
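+
+For instance, to have a look at the most recent session log (a minimal sketch assuming the default
+location in your `home` directory):
+
+```console
+marie@login$ ls -t ~/jupyter-session-*.log | head -n 1
+marie@login$ tail ~/jupyter-session-<jobid>.log
+```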
 
-## Advanced tips
+## Advanced Tips
 
-### Standard environments
+### Standard Environments
 
-The default python kernel uses conda environments based on the [Watson
-Machine Learning Community Edition (formerly
-PowerAI)](https://developer.ibm.com/linuxonpower/deep-learning-powerai/)
+The default Python kernel uses conda environments based on the
+[Watson Machine Learning Community Edition (formerly PowerAI)](https://developer.ibm.com/linuxonpower/deep-learning-powerai/)
 package suite. You can open a list with all included packages of the
 exact standard environment through the spawner form:
 
-**TODO ADD IMAGE** \<img alt="environment_package_list.png"
-src="%ATTACHURL%/environment_package_list.png" style="border: 1px solid
-\#888;" title="JupyterHub environment package list" />
+![Environment package list](misc/environment_package_list.png)
+{: align="center"}
 
-This list shows all packages of the currently selected conda
-environment. This depends on your settings for partition (cpu
-architecture) and standard environment.
+This list shows all packages of the currently selected conda environment. This depends on your
+settings for partition (CPU architecture) and standard environment.
 
 There are three standard environments:
 
--   production,
--   test,
--   python-env-python3.8.6.
+- production
+- test
+- python-env-python3.8.6
 
-**Python-env-python3.8.6**virtual environment can be used for all x86
-partitions(gpu2, alpha, etc). It gives the opportunity to create a user
-kernel with the help of a python environment.
+The **Python-env-python3.8.6** virtual environment can be used for all x86 partitions (`gpu2`,
+`alpha`, etc.). It gives you the opportunity to create a user kernel with the help of a Python
+environment.
 
-Here's a short list of some included software:
+Here is a short list of some included software:
 
-|            |           |        |
-|------------|-----------|--------|
 |            | generic\* | ml     |
+|------------|-----------|--------|
 | Python     | 3.6.10    | 3.6.10 |
 | R\*\*      | 3.6.2     | 3.6.0  |
 | WML CE     | 1.7.0     | 1.7.0  |
@@ -226,155 +191,122 @@ Here's a short list of some included software:
 
 \*\* R is loaded from the [module system](../software/runtime_environment.md)
 
-### Creating and using your own environment
+### Creating and Using a Custom Environment
 
-Interactive code interpreters which are used by Jupyter Notebooks are
-called kernels.
-Creating and using your own kernel has the benefit that you can install
-your own preferred python packages and use them in your notebooks.
+!!! info
 
-We currently have two different architectures at Taurus. Build your
-kernel environment on the **same architecture** that you want to use
+    Interactive code interpreters which are used by Jupyter notebooks are called *kernels*. Creating
+    and using your own kernel has the benefit that you can install your own preferred Python
+    packages and use them in your notebooks.
+
+We currently have two different CPU architectures on ZIH systems.
+Build your kernel environment on the **same architecture** that you want to use
 later on with the kernel. In the examples below we use the name
 "my-kernel" for our user kernel. We recommend to prefix your kernels
-with keywords like "intel", "ibm", "ml", "venv", "conda". This way you
-can later recognize easier how you built the kernel and on which
-hardware it will work.
-
-**Intel nodes** (e.g. haswell, gpu2):
+with keywords like `haswell`, `ml`, `romeo`, `venv`, `conda`. This way you
+can later more easily recognize how you built the kernel and on which hardware it will work.
 
-    srun --pty -n 1 -c 2 --mem-per-cpu 2583 -t 08:00:00 bash -l
+**Intel nodes** (e.g. partitions `haswell`, `gpu2`):
 
-If you don't need Sandy Bridge support for your kernel you can create
-your kernel on partition 'haswell'.
-
-**Power nodes** (ml partition):
-
-    srun --pty -p ml -n 1 -c 2 --mem-per-cpu 5772 -t 08:00:00 bash -l
-
-Create a virtual environment in your home directory. You can decide
-between python virtualenvs or conda environments.
+```console
+marie@login$ srun --pty --ntasks=1 --cpus-per-task=2 --mem-per-cpu=2541 --time=08:00:00 bash -l
+```
 
-<span class="twiki-macro RED"></span> **Note** <span
-class="twiki-macro ENDCOLOR"></span>: Please take in mind that Python
-venv is the preferred way to create a Python virtual environment.
+**Power nodes** (partition `ml`):
 
-#### Python virtualenv
+```console
+marie@login$ srun --pty --partition=ml --ntasks=1 --cpus-per-task=2 --mem-per-cpu=1443 --time=08:00:00 bash -l
+```
 
-```bash
-$ module load Python/3.8.6-GCCcore-10.2.0
+Create a virtual environment in your `home` directory. You can decide between Python virtualenvs or
+conda environments.
 
-$ mkdir user-kernel         #please use Workspaces!
+!!! note
+    Please keep in mind that Python venv is the preferred way to create a Python virtual environment.
 
-$ cd user-kernel
+#### Python Virtualenv
 
-$ virtualenv --system-site-packages my-kernel
+```console
+marie@compute$ module load Python/3.8.6-GCCcore-10.2.0
+marie@compute$ mkdir user-kernel # please use workspaces!
+marie@compute$ cd user-kernel
+marie@compute$ virtualenv --system-site-packages my-kernel
 Using base prefix '/sw/installed/Python/3.6.6-fosscuda-2018b'
 New python executable in .../user-kernel/my-kernel/bin/python
 Installing setuptools, pip, wheel...done.
-
-$ source my-kernel/bin/activate
-
-(my-kernel) $ pip install ipykernel
+marie@compute$ source my-kernel/bin/activate
+marie@compute$ pip install ipykernel
 Collecting ipykernel
-...
+[...]
 Successfully installed ... ipykernel-5.1.0 ipython-7.5.0 ...
-
-(my-kernel) $ pip install --upgrade pip
-
-(my-kernel) $ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
+marie@compute$ pip install --upgrade pip
+marie@compute$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
 Installed kernelspec my-kernel in .../.local/share/jupyter/kernels/my-kernel
-
-[now install additional packages for your notebooks]
-
-(my-kernel) $ deactivate
+marie@compute$ pip install [...] # now install additional packages for your notebooks
+marie@compute$ deactivate
 ```
 
-#### Conda environment
+#### Conda Environment
 
 Load the needed module for Intel nodes
 
-```
-module load Anaconda3
+```console
+marie@compute$ module load Anaconda3
 ```
 
-... or for IBM nodes (ml partition):
+... or for IBM nodes (partition `ml`):
 
-```
-module load PythonAnaconda
+```console
+marie@ml$ module load PythonAnaconda
 ```
 
-Continue with environment creation, package installation and kernel
-registration:
-
-```
-$ mkdir user-kernel         #please use Workspaces!
+Continue with environment creation, package installation and kernel registration:
 
-$ conda create --prefix /home/<USER>/user-kernel/my-kernel python=3.6
+```console
+marie@compute$ mkdir user-kernel # please use workspaces!
+marie@compute$ conda create --prefix /home/<USER>/user-kernel/my-kernel python=3.6
 Collecting package metadata: done
 Solving environment: done
 [...]
-
-$ conda activate /home/<USER>/user-kernel/my-kernel
-
-$ conda install ipykernel
+marie@compute$ conda activate /home/<USER>/user-kernel/my-kernel
+marie@compute$ conda install ipykernel
 Collecting package metadata: done
 Solving environment: done
 [...]
-
-$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
+marie@compute$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
 Installed kernelspec my-kernel in [...]
-
-[now install additional packages for your notebooks]
-
-$ conda deactivate
+marie@compute$ conda install [...] # now install additional packages for your notebooks
+marie@compute$ conda deactivate
 ```
 
 Now you can start a new session and your kernel should be available.
 
-**In JupyterLab**:
-
-Your kernels are listed on the launcher page:
+**JupyterLab**: Your kernels are listed on the launcher page:
 
-**TODO ADD IMAGE**\<a href="%ATTACHURL%/user-kernel_in_jupyterlab_launcher.png">\<img
-alt="jupyterlab_app.png" height="410"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/user-kernel_in_jupyterlab_launcher.png>"
-style="border: 1px solid #888;" title="JupyterLab kernel launcher
-list"/>\</a>
+![JupyterLab user kernel launcher](misc/jupyterlab_user_kernel_launcher.png)
+{: align="center"}
 
 You can switch kernels of existing notebooks in the menu:
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/jupyterlab_change_kernel.png">\<img
-alt="jupyterlab_app.png"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/jupyterlab_change_kernel.png>"
-style="border: 1px solid #888;" title="JupyterLab kernel switch"/>\</a>
-
-**In classic notebook app**:
+![JupyterLab change kernel](misc/jupyterlab_change_kernel.png)
+{: align="center"}
 
-Your kernel is listed in the New menu:
+**Classic Jupyter notebook**: Your kernel is listed in the `New` menu:
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/user-kernel_in_jupyter_notebook.png">\<img
-alt="jupyterlab_app.png"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/user-kernel_in_jupyter_notebook.png>"
-style="border: 1px solid #888;" title="Classic notebook (create notebook
-with new kernel)"/>\</a>
+![Jupyter notebook user kernel launcher](misc/jupyter_notebook_user_kernel_launcher.png)
+{: align="center"}
 
 You can switch kernels of existing notebooks in the kernel menu:
 
-**TODO ADD IMAGE** \<a href="%ATTACHURL%/switch_kernel_in_jupyter_notebook.png">\<img
-alt="jupyterlab_app.png"
-src="<https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/JupyterHub/switch_kernel_in_jupyter_notebook.png>"
-style="border: 1px solid #888;" title="Classic notebook (kernel
-switch)"/>\</a>
+![Jupyter notebook change kernel](misc/jupyter_notebook_change_kernel.png)
+{: align="center"}
 
-**Note**: Both python venv and conda virtual environments will be
-mention in the same list.
+!!! note
+    Both Python venv and conda virtual environments will be mentioned in the same list.
 
-### Loading modules
+### Loading Modules
 
-You have now the option to preload modules from the LMOD module
-system.
-Select multiple modules that will be preloaded before your notebook
-server starts. The list of available modules depends on the module
-environment you want to start the session in (scs5 or ml). The right
-module environment will be chosen by your selected partition.
+You now have the option to preload modules from the [module system](../software/modules.md).
+Select multiple modules that will be preloaded before your notebook server starts. The list of
+available modules depends on the module environment you want to start the session in (`scs5` or
+`ml`). The right module environment will be chosen by your selected partition.
diff --git a/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md
index 970a11898a6f2e93110d8b4f211ae9df9d883eed..92ad16d1325173c384c7472658239baca3e26157 100644
--- a/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md
+++ b/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md
@@ -14,11 +14,10 @@ Please be aware of the following notes:
 - Scheduled downtimes are announced by email. Please plan your courses accordingly.
 - Access to HPC resources is handled through projects. See your course as a project. Projects need
   to be registered beforehand (more info on the page [Access](../application/overview.md)).
-- Don't forget to **TODO ANCHOR**(add your users)
-  (ProjectManagement#manage_project_members_40dis_45_47enable_41) (eg. students or tutors) to
-your project.
-- It might be a good idea to **TODO ANCHOR**(request a
-  reservation)(Slurm#Reservations) of part of the compute resources for your project/course to
+- Don't forget to [add your users](../application/project_management.md#manage-project-members-dis-enable)
+  (e.g. students or tutors) to your project.
+- It might be a good idea to [request a reservation](../jobs_and_resources/overview.md#exclusive-reservation-of-hardware)
+  of part of the compute resources for your project/course to
   avoid unnecessary waiting times in the batch system queue.
 
 ## Clone a Repository With a Link
diff --git a/doc.zih.tu-dresden.de/docs/access/key_fingerprints.md b/doc.zih.tu-dresden.de/docs/access/key_fingerprints.md
index 1ef85b835a0a34ce379eb03e043abbe138540945..6be427f53bd2247ab94a7abfdad25abfa01742d4 100644
--- a/doc.zih.tu-dresden.de/docs/access/key_fingerprints.md
+++ b/doc.zih.tu-dresden.de/docs/access/key_fingerprints.md
@@ -1,7 +1,9 @@
 # SSH Key Fingerprints
 
-The key fingerprints of login and export nodes can occasionally change. This page holds up-to-date
-fingerprints.
+!!! hint
+
+    The key fingerprints of login and export nodes can occasionally change. This page holds
+    up-to-date fingerprints.
 
 ## Login Nodes
 
diff --git a/Compendium_attachments/JupyterHub/advanced_form_nov2019.png b/doc.zih.tu-dresden.de/docs/access/misc/advanced_form.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/advanced_form_nov2019.png
rename to doc.zih.tu-dresden.de/docs/access/misc/advanced_form.png
diff --git a/Compendium_attachments/JupyterHub/error_batch_job_submission_failed.png b/doc.zih.tu-dresden.de/docs/access/misc/error_batch_job_submission_failed.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/error_batch_job_submission_failed.png
rename to doc.zih.tu-dresden.de/docs/access/misc/error_batch_job_submission_failed.png
diff --git a/Compendium_attachments/JupyterHub/switch_kernel_in_jupyter_notebook.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_change_kernel.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/switch_kernel_in_jupyter_notebook.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_change_kernel.png
diff --git a/Compendium_attachments/JupyterHub/notebook_app_control_panel_btn.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_control_panel_button.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/notebook_app_control_panel_btn.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_control_panel_button.png
diff --git a/Compendium_attachments/JupyterHub/jupyter_notebook_example_matplotlib.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_example_matplotlib.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/jupyter_notebook_example_matplotlib.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_example_matplotlib.png
diff --git a/Compendium_attachments/JupyterHub/jupyter_notebook_app_filebrowser.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_file_browser.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/jupyter_notebook_app_filebrowser.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_file_browser.png
diff --git a/Compendium_attachments/JupyterHub/user-kernel_in_jupyter_notebook.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_user_kernel_launcher.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/user-kernel_in_jupyter_notebook.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyter_notebook_user_kernel_launcher.png
diff --git a/Compendium_attachments/JupyterHub/jupyterlab_change_kernel.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_change_kernel.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/jupyterlab_change_kernel.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_change_kernel.png
diff --git a/Compendium_attachments/JupyterHub/jupyterlab_error_directory_not_found.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_error_directory_not_found.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/jupyterlab_error_directory_not_found.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_error_directory_not_found.png
diff --git a/Compendium_attachments/JupyterHub/jupyterlab_logout.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_logout.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/jupyterlab_logout.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_logout.png
diff --git a/Compendium_attachments/JupyterHub/jupyterlab_app.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_overview.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/jupyterlab_app.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_overview.png
diff --git a/Compendium_attachments/JupyterHub/user-kernel_in_jupyterlab_launcher.png b/doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_user_kernel_launcher.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/user-kernel_in_jupyterlab_launcher.png
rename to doc.zih.tu-dresden.de/docs/access/misc/jupyterlab_user_kernel_launcher.png
diff --git a/Compendium_attachments/JupyterHub/simple_form.png b/doc.zih.tu-dresden.de/docs/access/misc/simple_form.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/simple_form.png
rename to doc.zih.tu-dresden.de/docs/access/misc/simple_form.png
diff --git a/Compendium_attachments/JupyterHub/start_my_server.png b/doc.zih.tu-dresden.de/docs/access/misc/start_my_server.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/start_my_server.png
rename to doc.zih.tu-dresden.de/docs/access/misc/start_my_server.png
diff --git a/Compendium_attachments/JupyterHub/stop_my_server.png b/doc.zih.tu-dresden.de/docs/access/misc/stop_my_server.png
similarity index 100%
rename from Compendium_attachments/JupyterHub/stop_my_server.png
rename to doc.zih.tu-dresden.de/docs/access/misc/stop_my_server.png
diff --git a/doc.zih.tu-dresden.de/docs/access/overview.md b/doc.zih.tu-dresden.de/docs/access/overview.md
index e324c4cad6a1fb2539f226e90c85d5cae3af6909..3600d8e69a05ad98201c161371164eda8d61cf41 100644
--- a/doc.zih.tu-dresden.de/docs/access/overview.md
+++ b/doc.zih.tu-dresden.de/docs/access/overview.md
@@ -1,21 +1,18 @@
-# Access to the Cluster
+# Access to ZIH Systems
 
-## SSH access
+There are several different ways to access ZIH systems depending on the intended usage:
 
-Important note: ssh to Taurus is only possible from inside TU Dresden Campus. Users from outside
-should use VPN
-([see here](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn)).
+* [SSH connection](ssh_login.md) is the classical way to connect to the login nodes and work from
+  the command line to set up experiments and manage batch jobs (see the minimal example below)
+* [Desktop Cloud Visualization](desktop_cloud_visualization.md) provides a virtual Linux desktop
+  with access to GPU resources for OpenGL 3D applications
+* [WebVNC service](graphical_applications_with_webvnc.md) allows better support for graphical
+  applications than SSH with X forwarding
+* [JupyterHub service](jupyterhub.md) offers a quick and easy way to work with Jupyter notebooks on
+  ZIH systems
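+
+As a minimal example for the first option, an SSH connection from your local machine to a login
+node looks like this (replace `<zih-login>` with your ZIH login; see [SSH connection](ssh_login.md)
+for all details and prerequisites):
+
+```console
+marie@local$ ssh <zih-login>@taurus.hrsk.tu-dresden.de
+```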
 
-The recommended way to connect to the HPC login servers directly via ssh:
+!!! hint
 
-```Bash
-ssh <zih-login>@taurus.hrsk.tu-dresden.de
-```
-
-Please put this command in the terminal and replace `zih-login` with your login that you received
-during the access procedure. Accept the host verifying and enter your password. You will be loaded
-by login nodes in your Taurus home directory.  This method requires two conditions: Linux OS,
-workstation within the campus network. For other options and details check the Login page.
-
-Useful links: [Access]**todo link**, [Project Request Form](../application/request_for_resources.md),
-[Terms Of Use]**todo link**
+    Prerequisite for accessing ZIH systems is an HPC project and a login. Please refer to the pages
+    within [Application for Login and Resources](../application/overview.md) for detailed
+    information.
diff --git a/doc.zih.tu-dresden.de/docs/access/security_restrictions.md b/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
index 25f6270410c4e35cee150019298fac6dd33cd01e..bcdc0f578c8e1c7674d5eb42395870636359729b 100644
--- a/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
+++ b/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
@@ -1,27 +1,27 @@
-# Security Restrictions on Taurus
+# Security Restrictions
 
-As a result of the security incident the German HPC sites in Gau Alliance are now adjusting their
-measurements to prevent infection and spreading of the malware.
+As a result of a security incident, the German HPC sites in the Gauß Alliance have adjusted their
+security measures to prevent infection and spreading of malware.
 
-The most important items for HPC systems at ZIH are:
+The most important items for ZIH systems are:
 
-- All users (who haven't done so recently) have to
+* All users (who haven't done so recently) have to
   [change their ZIH password](https://selfservice.zih.tu-dresden.de/l/index.php/pswd/change_zih_password).
-  **Login to Taurus is denied with an old password.**
-- All old (private and public) keys have been moved away.
-- All public ssh keys for Taurus have to
-  - be re-generated using only the ED25519 algorithm (`ssh-keygen -t ed25519`)
-  - **passphrase for the private key must not be empty**
-- Ideally, there should be no private key on Taurus except for local use.
-- Keys to other systems must be passphrase-protected!
-- **ssh to Taurus** is only possible from inside TU Dresden Campus
-  (login\[1,2\].zih.tu-dresden.de will be blacklisted). Users from outside can use VPN (see
+    * **Login to ZIH systems is denied with an old password.**
+* All old (private and public) keys have been moved away.
+* All public SSH keys for ZIH systems have to
+    * be re-generated using only the ED25519 algorithm (`ssh-keygen -t ed25519`)
+    * **be protected by a non-empty passphrase** (see the sketch below this list)
+* Ideally, there should be no private key on ZIH systems except for local use.
+* Keys to other systems must be passphrase-protected!
+* **SSH to ZIH systems** is only possible from inside the TU Dresden campus
+  (`login[1,2].zih.tu-dresden.de` will be blacklisted). Users from outside can use VPN (see
   [here](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn)).
-- **ssh from Taurus** is only possible inside TU Dresden Campus.
-  (Direct ssh access to other computing centers was the spreading vector of the recent incident.)
+* **SSH from ZIH systems** is only possible inside the TU Dresden campus.
+  (Direct SSH access to other computing centers was the spreading vector of the recent incident.)
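+
+A minimal sketch for generating a compliant key pair on your local machine (the target filename
+`id_ed25519_zih` is only an example; do not leave the passphrase prompt empty):
+
+```console
+marie@local$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_zih
+```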
 
-Data transfer is possible via the taurusexport nodes. We are working on a bandwidth-friendly
-solution.
+Data transfer is possible via the [export nodes](../data_transfer/export_nodes.md). We are working
+on a bandwidth-friendly solution.
 
 We understand that all this will change convenient workflows. If the measurements would render your
-work on Taurus completely impossible, please contact the HPC support.
+work on ZIH systems completely impossible, please [contact the HPC support](../support/support.md).
diff --git a/doc.zih.tu-dresden.de/docs/access/ssh_login.md b/doc.zih.tu-dresden.de/docs/access/ssh_login.md
index 6adaa8a32aa659495b715b79769b960f7a6f6934..5e67c5279f701405224078d7234517c642d3e726 100644
--- a/doc.zih.tu-dresden.de/docs/access/ssh_login.md
+++ b/doc.zih.tu-dresden.de/docs/access/ssh_login.md
@@ -4,9 +4,9 @@ For security reasons, ZIH systems are only accessible for hosts within the domai
 
 ## Virtual Private Network (VPN)
 
-To access HPC systems from outside the campus networks it's recommended to set up a VPN connection to
-enter the campus network. While active it allows the user to connect directly to the HPC login
-nodes.
+To access the ZIH systems from outside the campus networks it's recommended to set up a VPN
+connection to enter the campus network. While active, it allows the user to connect directly to the
+HPC login nodes.
 
 For more information on our VPN and how to set it up, please visit the corresponding
 [ZIH service catalogue page](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn).
@@ -58,7 +58,7 @@ marie@local$ ssh -XC <zih-login>@taurus.hrsk.tu-dresden.de
     Also consider to use a [DCV session](desktop_cloud_visualization.md) for remote desktop
     visualization at ZIH systems.
 
-### Password-less SSH
+### Password-Less SSH
 
 Of course, password-less SSH connecting is supported at ZIH. All public SSH keys for ZIH systems
 have to be generated following these rules:
@@ -79,8 +79,8 @@ Enter passphrase for key 'id-ed25519':
 
 We recommend one of the following applications:
 
-  * MobaXTerm: [homepage](https://mobaxterm.mobatek.net) | [ZIH Tutorial](misc/basic_usage_of_MobaXterm.pdf)
-  * PuTTY: [homepage](https://www.putty.org) | [ZIH Tutorial](misc/basic_usage_of_PuTTY.pdf)
+  * [MobaXTerm](https://mobaxterm.mobatek.net): [ZIH documentation](misc/basic_usage_of_MobaXterm.pdf)
+  * [PuTTY](https://www.putty.org): [ZIH documentation](misc/basic_usage_of_PuTTY.pdf)
   * OpenSSH Server: [docs](https://docs.microsoft.com/de-de/windows-server/administration/openssh/openssh_install_firstuse)
 
 The page [key fingerprints](key_fingerprints.md) holds the up-to-date fingerprints for the login
diff --git a/doc.zih.tu-dresden.de/docs/access/ssh_mit_putty.md b/doc.zih.tu-dresden.de/docs/access/ssh_mit_putty.md
deleted file mode 100644
index dd93865dbdb482dba0b5e87b68c2685e0ea11f39..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/access/ssh_mit_putty.md
+++ /dev/null
@@ -1,87 +0,0 @@
-# Prerequisites for Access to a Linux Cluster From a Windows Workstation
-
-To work at an HPC system at ZIH you need
-
-- a program that provides you a command shell (like
-  [PuTTY](http://www.chiark.greenend.org.uk/%7Esgtatham/putty/download.html)
-  or
-  [Secure Shell ssh3.2](http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih/dienste/datennetz_dienste/secure_shell/);
-  both free) (The putty.exe is only to download at the desktop. (No installation))
-
-and if you would like to use graphical software from the HPC system
-
-- an X-Server (like [X-Ming](http://www.straightrunning.com/XmingNotes/)
-  or [CygWin32](http://www.cygwin.com/cygwin/)
-
-at your local PC. Here, you can find installation descriptions for the X servers:
-[X-Ming Installation](misc/install-Xming.pdf)
-
-[CygWin Installation](misc/cygwin_doku_de.pdf)
-
-Please note: You have also to install additional fonts for X-Ming at your PC. (also to find at
-[this website](http://www.straightrunning.com/XmingNotes/).  If you would like transfer files
-between your PC and an HPC machine, you should also have
-
-- [WinSCP](http://winscp.net/eng/docs/lang:de>) (an SCP program is also included in the
-  "Secure Shell ssh3.2" software; see above)
-
-installed at your PC.
-
-We advice putty + Xming (+ WinSCP).
-
-Please note: If you use software with OpenGL (like abaqus), please install "Xming-mesa" instead of
-"Xmin".
-
-After installation you have to start always at first the X-server. At the bottom right corner you
-will get an new icon (a black X for X-Ming).  Now you can start putty.exe. A window will appear
-where you have to give the name of the computer and you have to switch ON the "X11 forwarding".
-(please look at the figures)
-
-![PuTTY: Name of HPC System](misc/putty1.jpg)
-{: align="center"}
-
-![PuTTY: Switch on X11](misc/putty2.jpg)
-{: align="center"}
-
-<!--\<img alt="" src="%PUBURL%/Compendium/Login/putty1.jpg" title="putty:-->
-<!--name of HPC-machine" width="300" /> \<img alt=""-->
-<!--src="%PUBURL%/Compendium/Login/putty2.jpg" title="putty: switch on X11"-->
-<!--width="300" /> \<br />-->
-
-Now you can *Open* the connection. You will get a window from the remote machine, where you can put
-your Linux commands. If you would like to use commercial software, please follow the next
-instructions about the modules.
-
-## Copy Files from the HRSK Machines to Your Local Machine
-
-Take the following steps if your Workstation has a Windows operating system. You need putty (see
-above) and your favorite SCP program, in this example WinSCP.
-
-* Make a connection to `login1.zih.tu-dresden.de`
-
-![Tunnel 1](misc/tunnel1.png)
-{: align="center"}
-
-* Setup SSH tunnel (data from your machine port 1222 will be directed to deimos port 22)
-
-![Tunnel 2](misc/tunnel2.png)
-{: align="center"}
-
-* After clicking on the "Add" button, the tunnel should look like that
-
-![Tunnel 3](misc/tunnel3.png)
-{: align="center"}
-
-- Click "Open" and enter your login and password (upon successful login, the tunnel will exist)
-
-![Tunnel 4](misc/tunnel4.png)
-{: align="center"}
-
-- Put the putty window in the background (leave it running) and open  WinSCP (or your favorite SCP
-  program), connect to localhost:1222
-
-![Tunnel 5](misc/tunnel5.png)
-{: align="center"}
-
-- After hitting "Login" and entering your username/password, you can
-  access your files on deimos.
diff --git a/Compendium_attachments/ProjectManagement/add_member.png b/doc.zih.tu-dresden.de/docs/application/misc/add_member.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/add_member.png
rename to doc.zih.tu-dresden.de/docs/application/misc/add_member.png
diff --git a/Compendium_attachments/ProjectManagement/external_login.png b/doc.zih.tu-dresden.de/docs/application/misc/external_login.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/external_login.png
rename to doc.zih.tu-dresden.de/docs/application/misc/external_login.png
diff --git a/Compendium_attachments/ProjectManagement/members.png b/doc.zih.tu-dresden.de/docs/application/misc/members.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/members.png
rename to doc.zih.tu-dresden.de/docs/application/misc/members.png
diff --git a/Compendium_attachments/ProjectManagement/overview.png b/doc.zih.tu-dresden.de/docs/application/misc/overview.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/overview.png
rename to doc.zih.tu-dresden.de/docs/application/misc/overview.png
diff --git a/Compendium_attachments/ProjectManagement/password.png b/doc.zih.tu-dresden.de/docs/application/misc/password.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/password.png
rename to doc.zih.tu-dresden.de/docs/application/misc/password.png
diff --git a/Compendium_attachments/ProjectManagement/project_details.png b/doc.zih.tu-dresden.de/docs/application/misc/project_details.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/project_details.png
rename to doc.zih.tu-dresden.de/docs/application/misc/project_details.png
diff --git a/Compendium_attachments/ProjectManagement/stats.png b/doc.zih.tu-dresden.de/docs/application/misc/stats.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/stats.png
rename to doc.zih.tu-dresden.de/docs/application/misc/stats.png
diff --git a/doc.zih.tu-dresden.de/docs/application/project_management.md b/doc.zih.tu-dresden.de/docs/application/project_management.md
index a69ef756d4b74fc35e7c5be014fc2b060ea0af5e..79e457cb2590d4109a160a8296b676c3384490d5 100644
--- a/doc.zih.tu-dresden.de/docs/application/project_management.md
+++ b/doc.zih.tu-dresden.de/docs/application/project_management.md
@@ -1,113 +1,104 @@
-# Project management
+# Project Management
 
-The HPC project leader has overall responsibility for the project and
-for all activities within his project on ZIH's HPC systems. In
-particular he shall:
+The HPC project leader has overall responsibility for the project and for all activities within the
+corresponding project on ZIH systems. In particular, the project leader shall:
 
--   add and remove users from the project,
--   update contact details of th eproject members,
--   monitor the resources his project,
--   inspect and store data of retiring users.
+* add and remove users from the project,
+* update contact details of the project members,
+* monitor the resources of the project,
+* inspect and store data of retiring users.
 
-For this he can appoint a *project administrator* with an HPC account to
-manage technical details.
+The project leader can appoint a *project administrator* with an HPC account to manage these
+technical details.
 
-The front-end to the HPC project database enables the project leader and
-the project administrator to
+The front-end to the HPC project database enables the project leader and the project administrator
+to
 
--   add and remove users from the project,
--   define a technical administrator,
--   view statistics (resource consumption),
--   file a new HPC proposal,
--   file results of the HPC project.
+* add and remove users from the project,
+* define a technical administrator,
+* view statistics (resource consumption),
+* file a new HPC proposal,
+* file results of the HPC project.
 
 ## Access
 
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="password" width="100">%ATTACHURLPATH%/external_login.png</span>
-
+![Login Screen>](misc/external_login.png "Login Screen"){loading=lazy width=300 style="float:right"}
 [Entry point to the project management system](https://hpcprojekte.zih.tu-dresden.de/managers)
-
 The project leaders of an ongoing project and their accredited admins
 are allowed to login to the system. In general each of these persons
 should possess a ZIH login at the Technical University of Dresden, with
 which it is possible to log on the homepage. In some cases, it may
 happen that a project leader of a foreign organization do not have a ZIH
 login. For this purpose, it is possible to set a local password:
-"[Passwort vergessen](https://hpcprojekte.zih.tu-dresden.de/managers/members/missingPassword)".
+"[Missing Password](https://hpcprojekte.zih.tu-dresden.de/managers/members/missingPassword)".
 
-<span class="twiki-macro IMAGE" type="frame" align="right" caption="password reset"
-width="100">%ATTACHURLPATH%/password.png</span>
+&nbsp;
+{: style="clear:right;"}
 
-On the 'Passwort vergessen' page, it is possible to reset the
-passwords of a 'non-ZIH-login'. For this you write your login, which
-usually corresponds to your email address, in the field and click on
-'zurcksetzen'. Within 10 minutes the system sends a signed e-mail from
-<hpcprojekte@zih.tu-dresden.de> to the registered e-mail address. this
-e-mail contains a link to reset the password.
+![Password Reset>](misc/password.png "Password Reset"){loading=lazy width=300 style="float:right"}
+On the 'Missing Password' page, it is possible to reset the password of a 'non-ZIH-login'. For
+this, you enter your login, which usually corresponds to your email address, in the field and click
+on 'reset'. Within 10 minutes the system sends a signed e-mail from <hpcprojekte@zih.tu-dresden.de>
+to the registered e-mail address. This e-mail contains a link to reset the password.
+
+&nbsp;
+{: style="clear:right;"}
 
 ## Projects
 
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="projects overview"
-width="100">%ATTACHURLPATH%/overview.png</span>
-
-\<div style="text-align: justify;"> After login you reach an overview
-that displays all available projects. In each of these projects are
-listed, you are either project leader or an assigned project
-administrator. From this list, you have the option to view the details
-of a project or make a following project request. The latter is only
-possible if a project has been approved and is active or was. In the
-upper right area you will find a red button to log out from the system.
-\</div> \<br style="clear: both;" /> \<br /> <span
-class="twiki-macro IMAGE" type="frame" align="right"
-caption="project details"
-width="100">%ATTACHURLPATH%/project_details.png</span> \<div
-style="text-align: justify;"> The project details provide information
-about the requested and allocated resources. The other tabs show the
-employee and the statistics about the project. \</div> \<br
-style="clear: both;" />
-
-### manage project members (dis-/enable)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="project members" width="100">%ATTACHURLPATH%/members.png</span>
-\<div style="text-align: justify;"> The project members can be managed
-under the tab 'employee' in the project details. This page gives an
-overview of all ZIH logins that are a member of a project and its
-status. If a project member marked in green, it can work on all
-authorized HPC machines when the project has been approved. If an
-employee is marked in red, this can have several causes:
-
--   he was manually disabled by project managers, project administrator
-    or an employee of the ZIH
--   he was disabled by the system because his ZIH login expired
--   his confirmation of the current hpc-terms is missing
-
-You can specify a user as an administrator. This user can then access
-the project managment system. Next, you can disable individual project
-members. This disabling is only a "request of disabling" and has a time
-delay of 5 minutes. An user can add or reactivate himself, with his
-zih-login, to a project via the link on the end of the page. To prevent
-misuse this link is valid for 2 weeks and will then be renewed
-automatically. \</div> \<br style="clear: both;" />
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="add member" width="100">%ATTACHURLPATH%/add_member.png</span>
-
-\<div style="text-align: justify;"> The link leads to a page where you
-can sign in to a Project by accepting the term of use. You need also an
-valid ZIH-Login. After this step it can take 1-1,5 h to transfer the
-login to all cluster nodes. \</div> \<br style="clear: both;" />
-
-### statistic
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="project statistic" width="100">%ATTACHURLPATH%/stats.png</span>
-
-\<div style="text-align: justify;"> The statistic is located under the
-tab 'Statistik' in the project details. The data will updated once a day
-an shows used CPU-time and used disk space of an project. Following
-projects shows also the data of the predecessor. \</div>
-
-\<br style="clear: both;" />
+![Project Overview>](misc/overview.png "Project Overview"){loading=lazy width=300 style="float:right"}
+After login, you reach an overview that displays all available projects in which you are either
+project leader or an assigned project administrator. From this list, you have the option to view
+the details of a project or to file a follow-up project request. The latter is only possible if a
+project has been approved and is or was active. In the upper right area, you will find a red button
+to log out from the system.
+
+&nbsp;
+{: style="clear:right;"}
+
+![Project Details>](misc/project_details.png "Project Details"){loading=lazy width=300 style="float:right"}
+The project details provide information about the requested and allocated resources. The other tabs
+show the project members and the statistics of the project.
+
+&nbsp;
+{: style="clear:right;"}
+
+### Manage Project Members (dis-/enable)
+
+![Project Members>](misc/members.png "Project Members"){loading=lazy width=300 style="float:right"}
+The project members can be managed under the tab 'employee' in the project details. This page gives
+an overview of all ZIH logins that are members of the project and their status. If a project member
+is marked in green, they can work on all authorized HPC machines once the project has been approved.
+If a project member is marked in red, this can have several causes:
+
+* the member was manually disabled by the project leader, the project administrator
+  or ZIH staff
+* the member was disabled by the system because their ZIH login expired
+* the confirmation of the current HPC terms of use is missing
+
+You can specify a user as an administrator. This user can then access the project management system.
+Furthermore, you can disable individual project members. This disabling is only a request to
+disable and takes effect after a delay of 5 minutes. A user can add or reactivate themselves, with
+their ZIH login, to a project via the link at the end of the page. To prevent misuse, this link is
+valid for two weeks and will then be renewed automatically.
+
+&nbsp;
+{: style="clear:right;"}
+
+![Add Member>](misc/add_member.png "Add Member"){loading=lazy width=300 style="float:right"}
+The link leads to a page where you can sign up for a project by accepting the terms of use. You
+also need a valid ZIH login. After this step, it can take 1 to 1.5 hours until the login is
+transferred to all cluster nodes.
+
+&nbsp;
+{: style="clear:right;"}
+
+### Statistic
+
+![Project Statistic>](misc/stats.png "Project Statistic"){loading=lazy width=300 style="float:right"}
+The statistics are located under the tab 'Statistic' in the project details. The data is updated
+once a day and shows the used CPU time and disk space of a project. Follow-up projects also show
+the data of their predecessor.
+
+&nbsp;
+{: style="clear:right;"}
diff --git a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
new file mode 100644
index 0000000000000000000000000000000000000000..8c2235f933fb41f5e590e880fdeb92ce6e950dfc
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
@@ -0,0 +1,158 @@
+# BeeGFS Filesystem
+
+!!! warning
+
+    This documentation page is outdated.
+    The up-to date documentation on BeeGFS can be found [here](../data_lifecycle/beegfs.md).
+
+**Prerequisites:** To work with the BeeGFS filesystem, you need a [login](../application/overview.md) to
+the ZIH systems and basic knowledge about Linux, mounting, and the batch system Slurm.
+
+The **aim** of this page is to introduce
+users to the BeeGFS filesystem - a high-performance parallel filesystem.
+
+## Mount Point
+
+Understanding of mounting and the concept of the mount point is important for using filesystems and
+object storage. A mount point is a directory (typically an empty one) in the currently accessible
+filesystem on which an additional filesystem is mounted (i.e., logically attached).  The default
+mount points for a system are the directories in which filesystems will be automatically mounted
+unless told by the user to do otherwise.  All partitions are attached to the system via a mount
+point. The mount point defines the place of a particular data set in the filesystem. Usually, all
+partitions are connected through the root partition. On this partition, which is indicated with the
+slash (/), directories are created.
+
+## BeeGFS Introduction
+
+[BeeGFS](https://www.beegfs.io/content/) is a parallel cluster filesystem. BeeGFS spreads data
+across multiple servers to aggregate capacity and performance of all servers to provide a highly
+scalable shared network filesystem with striped file contents. This is made possible by the
+separation of metadata and file contents.
+
+BeeGFS is a fast, flexible, and easy-to-manage storage solution. If the filesystem plays an
+important role for your use case, consider BeeGFS. It addresses everyone
+who needs large and/or fast file storage.
+
+## Create BeeGFS Filesystem
+
+To reserve nodes for creating a BeeGFS filesystem, you need to submit a
+[batch](../jobs_and_resources/slurm.md) job:
+
+```Bash
+#!/bin/bash
+#SBATCH -p nvme
+#SBATCH -N 4
+#SBATCH --exclusive
+#SBATCH --time=1-00:00:00
+#SBATCH --beegfs-create=yes
+
+srun sleep 1d  # sleep for one day
+
+## when finished writing, submit with:  sbatch <script_name>
+```
+
+Example output with job id:
+
+```Bash
+Submitted batch job 11047414   #Job id n.1
+```
+
+Check the status of the job with `squeue --user <username>`.
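+
+For example, with the generic user name `marie` (replace it with your own login):
+
+```console
+marie@login$ squeue --user marie
+```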
+
+## Mount BeeGFS Filesystem
+
+You can mount the BeeGFS filesystem on the partition `ml` (PowerPC architecture) or on the
+partition `haswell` (x86_64 architecture). You can find more information on the page about [partitions](../jobs_and_resources/partitions_and_limits.md).
+
+### Mount BeeGFS Filesystem on the Partition `ml`
+
+Job submission can be done with the command (use job id (n.1) from batch job used for creating
+BeeGFS system):
+
+```console
+srun -p ml --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash                #Job submission on ml nodes
+```
+
+Example output:
+
+```console
+srun: job 11054579 queued and waiting for resources         #Job id n.2
+srun: job 11054579 has been allocated resources
+```
+
+### Mount BeeGFS Filesystem on the Haswell Nodes (x86_64)
+
+Job submission can be done with the command (use job id (n.1) from batch
+job used for creating BeeGFS system):
+
+```console
+srun --constraint=DA --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash       #Job submission on the Haswell nodes
+```
+
+Example output:
+
+```console
+srun: job 11054580 queued and waiting for resources          #Job id n.2
+srun: job 11054580 has been allocated resources
+```
+
+## Working with BeeGFS Files for Both Types of Nodes
+
+Show contents of the previously created file, for example,
+`beegfs_11054579` (where 11054579 - job id **n.2** of srun job):
+
+```console
+cat .beegfs_11054579
+```
+
+Note: do not forget to change to your home directory, where the file is located.
+
+Example output:
+
+```Bash
+#!/bin/bash
+
+export BEEGFS_USER_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<your_id>"
+export BEEGFS_PROJECT_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<name of your project>"
+```
+
+Execute the content of the file:
+
+```console
+source .beegfs_11054579
+```
+
+Show content of user's BeeGFS directory with the command:
+
+```console
+ls -la ${BEEGFS_USER_DIR}
+```
+
+Example output:
+
+```console
+total 0
+drwx--S--- 2 <username> swtest  6 21. Jun 10:54 .
+drwxr-xr-x 4 root        root  36 21. Jun 10:54 ..
+```
+
+Show content of the user's project BeeGFS directory with the command:
+
+```console
+ls -la ${BEEGFS_PROJECT_DIR}
+```
+
+Example output:
+
+```console
+total 0
+drwxrws--T 2 root swtest  6 21. Jun 10:54 .
+drwxr-xr-x 4 root root   36 21. Jun 10:54 ..
+```
+
+!!! note
+
+    If you want to mount the BeeGFS filesystem on an x86 instead of an ML (power) node, you can
+    either choose the partition "interactive" or the partition `haswell64`, but for the partition
+    `haswell64` you have to add the parameter `--exclude=taurusi[4001-4104,5001-5612]` to your job.
+    This is necessary because the BeeGFS client is only installed on the 6000 island.
diff --git a/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md b/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
index 84e018b655f958ecb2d0a8d35982aad47a66adb2..2854bb2aeccb7d016e91dda4d9de6d717521bf46 100644
--- a/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
+++ b/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
@@ -1,44 +1,45 @@
-# Changes in the CXFS File System
+# Changes in the CXFS Filesystem
 
-With the ending support from SGI, the CXFS file system will be seperated
-from its tape library by the end of March, 2013.
+!!! warning
 
-This file system is currently mounted at
+    This page is outdated!
 
-- SGI Altix: `/fastfs/`
-- Atlas: `/hpc_fastfs/`
+With the ending support from SGI, the CXFS filesystem will be separated from its tape library by
+the end of March, 2013.
 
-We kindly ask our users to remove their large data from the file system.
+This filesystem is currently mounted at
+
+* SGI Altix: `/fastfs/`
+* Atlas: `/hpc_fastfs/`
+
+We kindly ask our users to remove their large data from the filesystem.
 Files worth keeping can be moved
 
-- to the new [Intermediate Archive](../data_lifecycle/intermediate_archive.md) (max storage
+* to the new [Intermediate Archive](../data_lifecycle/intermediate_archive.md) (max storage
     duration: 3 years) - see
     [MigrationHints](#migration-from-cxfs-to-the-intermediate-archive) below,
-- or to the [Log-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
+* or to the [Long-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
     metadata).
 
-To run the file system without support comes with the risk of losing
-data. So, please store away your results into the Intermediate Archive.
-`/fastfs` might on only be used for really temporary data, since we are
-not sure if we can fully guarantee the availability and the integrity of
-this file system, from then on.
+Running the filesystem without support comes with the risk of losing data. So, please store away
+your results into the Intermediate Archive. `/fastfs` may only be used for really temporary
+data, since we are not sure if we can fully guarantee the availability and the integrity of this
+filesystem from then on.
 
-With the new HRSK-II system comes a large scratch file system with appr.
-800 TB disk space. It will be made available for all running HPC systems
-in due time.
+With the new HRSK-II system comes a large scratch filesystem with approximately 800 TB disk space.
+It will be made available for all running HPC systems in due time.
 
 ## Migration from CXFS to the Intermediate Archive
 
 Data worth keeping shall be moved by the users to the directory
 `archive_migration`, which can be found in your project's and your
-personal `/fastfs` directories. (`/fastfs/my_login/archive_migration`,
-`/fastfs/my_project/archive_migration` )
+personal `/fastfs` directories:
 
-\<u>Attention:\</u> Exclusively use the command `mv`. Do **not** use
-`cp` or `rsync`, for they will store a second version of your files in
-the system.
+* `/fastfs/my_login/archive_migration`
+* `/fastfs/my_project/archive_migration`
 
-Please finish this by the end of January. Starting on Feb/18/2013, we
-will step by step transfer these directories to the new hardware.
+**Attention:** Exclusively use the command `mv`. Do **not** use `cp` or `rsync`, for they will store
+a second version of your files in the system.
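+
+For example, a results directory in your personal `/fastfs` directory (the directory name
+`my_results` is only an illustration) would be moved like this:
+
+```console
+mv /fastfs/my_login/my_results /fastfs/my_login/archive_migration/
+```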
 
-- Set DENYTOPICVIEW = WikiGuest
+Please finish this by the end of January. Starting on Feb/18/2013, we will transfer
+these directories step by step to the new hardware.
diff --git a/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md b/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
new file mode 100644
index 0000000000000000000000000000000000000000..0d50ecc6c8ec26c30fccaf7882abee6f2070d55b
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
@@ -0,0 +1,200 @@
+# Jupyter Installation
+
+Jupyter notebooks allow you to analyze data interactively using your web browser. One advantage of
+Jupyter is that code, documentation and visualization can be included in a single notebook, so that
+it forms a unit. Jupyter notebooks can be used for many tasks, such as data cleaning and
+transformation, numerical simulation, statistical modeling, data visualization and also machine
+learning.
+
+There are two general options on how to work with Jupyter notebooks on ZIH systems: remote Jupyter
+server and JupyterHub.
+
+These sections show how to set up and run a remote Jupyter server with GPUs within a Slurm job.
+Furthermore, the following sections explain which modules and packages you need for that.
+
+!!! note
+    On ZIH systems, there is a [JupyterHub](../access/jupyterhub.md), where you do not need the
+    manual server setup described below and can simply run your Jupyter notebook on HPC nodes. Keep
+    in mind, that, with JupyterHub, you can't work with some special instruments. However, general
+    data analytics tools are available.
+
+A remote Jupyter server offers more freedom regarding settings and approaches.
+
+## Preparation Phase (Optional)
+
+On ZIH system, start an interactive session for setting up the environment:
+
+```console
+marie@login$ srun --pty -n 1 --cpus-per-task=2 --time=2:00:00 --mem-per-cpu=2500 --x11=first bash -l -i
+```
+
+Create a new directory in your home directory, e.g. `Jupyter`:
+
+```console
+marie@compute$ mkdir Jupyter
+marie@compute$ cd Jupyter
+```
+
+There are two ways to run Anaconda. The easiest way is to load the Anaconda module. The second
+one is to download Anaconda into your home directory.
+
+1. Load Anaconda module (recommended):
+
+```console
+marie@compute$ module load modenv/scs5
+marie@compute$ module load Anaconda3
+```
+
+1. Download the latest Anaconda release (see example below), change the rights to make it an
+executable script and run the installation script:
+
+```console
+marie@compute$ wget https://repo.continuum.io/archive/Anaconda3-2019.03-Linux-x86_64.sh
+marie@compute$ chmod u+x Anaconda3-2019.03-Linux-x86_64.sh
+marie@compute$ ./Anaconda3-2019.03-Linux-x86_64.sh
+```
+
+(during installation you have to confirm the license agreement)
+
+The next step will install the Anaconda environment into the home
+directory (`/home/userxx/anaconda3`). Create a new Anaconda environment with the name `jnb`:
+
+```console
+marie@compute$ conda create --name jnb
+```
+
+## Set Environment Variables
+
+In the shell, activate the previously created Python environment (you can
+also deactivate it manually) and install Jupyter packages for this Python environment:
+
+```console
+marie@compute$ source activate jnb
+marie@compute$ conda install jupyter
+```
+
+If you need to adjust the configuration, you should create the configuration template first.
+Generate the configuration files for the Jupyter notebook server:
+
+```console
+marie@compute$ jupyter notebook --generate-config
+```
+
+Find the path of the configuration file, usually in the `.jupyter` directory in your home, e.g.
+`/home/<zih_user>/.jupyter/jupyter_notebook_config.py`.
+
+Set a password (choose an easy one for testing), which is needed later on to log into the server
+in the browser session:
+
+```console
+marie@compute$ jupyter notebook password
+Enter password:
+Verify password:
+```
+
+You get a message like this:
+
+```console
+[NotebookPasswordApp] Wrote *hashed password* to
+/home/<zih_user>/.jupyter/jupyter_notebook_config.json
+```
+
+In order to create a certificate for secure connections, you can create a self-signed
+certificate:
+
+```console
+marie@compute$ openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mykey.key -out mycert.pem
+```
+
+Fill in the form with reasonable values.
+
+Possible entries for your Jupyter configuration (`.jupyter/jupyter_notebook_config.py`):
+
+```python
+c.NotebookApp.certfile = u'<path-to-cert>/mycert.pem'
+c.NotebookApp.keyfile = u'<path-to-cert>/mykey.key'
+
+# set ip to '*' otherwise server is bound to localhost only
+c.NotebookApp.ip = '*'
+c.NotebookApp.open_browser = False
+
+# copy hashed password from the jupyter_notebook_config.json
+c.NotebookApp.password = u'<your hashed password here>'
+c.NotebookApp.port = 9999
+c.NotebookApp.allow_remote_access = True
+```
+
+!!! note
+    `<path-to-cert>` - path to key and certificate files, for example:
+    (`/home/<zih_user>/mycert.pem`)
+
+## Slurm Job File to Run the Jupyter Server on ZIH System with GPU (1x K80) (also works on K20)
+
+```bash
+#!/bin/bash -l
+#SBATCH --gres=gpu:1 # request GPU
+#SBATCH --partition=gpu2 # use partition GPU 2
+#SBATCH --output=notebook_output.txt
+#SBATCH --nodes=1
+#SBATCH --ntasks=1
+#SBATCH --time=02:30:00
+#SBATCH --mem=4000M
+#SBATCH -J "jupyter-notebook" # job-name
+#SBATCH -A <name_of_your_project>
+
+unset XDG_RUNTIME_DIR   # might be required when interactive instead of sbatch to avoid 'Permission denied error'
+srun jupyter notebook
+```
+
+Start the script above (e.g. saved with the name `jnotebook.slurm`) with the `sbatch` command:
+
+```console
+marie@login$ sbatch jnotebook.slurm
+```
+
+If you have questions about the sbatch script, see the page about [Slurm](../jobs_and_resources/slurm.md).
+
+Check the status and the **token** of the server with the command `tail notebook_output.txt`. It
+should look like this:
+
+```console
+https://(taurusi2092.taurus.hrsk.tu-dresden.de or 127.0.0.1):9999/
+```
+
+You can see the **server node's hostname** with the command `squeue --user <username>`.
+
+### Remote Connect to the Server
+
+There are two options on how to connect to the server:
+
+1. You can create an SSH tunnel if you have problems with the
+solution above (recommended). Open another terminal and configure the SSH
+tunnel (look up the connection values in the output file of the Slurm job):
+
+```console
+node=taurusi2092 # see the name of the node with: squeue --user <your_login>
+localport=8887 # local port on your computer
+remoteport=9999 # pay attention to the value. It should be the same value as in notebook_output.txt
+ssh -fNL ${localport}:${node}:${remoteport} <zih_user>@taurus.hrsk.tu-dresden.de # configure the ssh tunnel for connection to your remote server
+pgrep -f "ssh -fNL ${localport}" # verify that the tunnel is alive
+```
+
+2. On your client (local machine) you can now connect to the server. You need to know the **node's
+   hostname**, the **port** of the server and the **token** to log in (see the paragraph above).
+
+You can connect directly if you know the IP address (just ping the node's hostname while logged in
+on the ZIH system).
+
+```console
+# command on the remote terminal
+marie@compute$ host taurusi2092
+# copy the IP address from the output
+# paste the IP to your browser or call on the local terminal, e.g.:
+marie@local$ firefox https://<IP>:<PORT>  # https is important to use the SSL certificate
+```
+
+To log in to the Jupyter notebook site (`https://localhost:8887`), you have to enter the **token**.
+Now you can create and execute notebooks on the ZIH system with GPU support.
+
+!!! important
+    If you would like to use [JupyterHub](../access/jupyterhub.md) after using a manually
+    configured remote Jupyter server (example above), you need to rename the configuration file
+    (`/home/<zih_user>/.jupyter/jupyter_notebook_config.py`) to something else.
diff --git a/doc.zih.tu-dresden.de/docs/archive/overview.md b/doc.zih.tu-dresden.de/docs/archive/overview.md
index 42491b1f35af69931b4c1ef30765bc2567777394..7600ef01e81d7f623f616d28d70abbf73cb07ed2 100644
--- a/doc.zih.tu-dresden.de/docs/archive/overview.md
+++ b/doc.zih.tu-dresden.de/docs/archive/overview.md
@@ -1,5 +1,6 @@
 # Archive
 
-A warm welcome to the *archive*. You probably got here by following a link from within the compendium.
-The archive holds outdated documentation for future reference. Documentation in the archive, is not
-further updated.
+A warm welcome to the **archive**. You probably got here by following a link from within the compendium
+or on purpose.
+The archive holds outdated documentation for future reference.
+Hence, documentation in the archive is not updated any further.
diff --git a/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md b/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
index 3cc59e7beb48a69a2b939542b14fef28cf4047fc..839028f327e069e912f59ffb688ccd1f54b58a40 100644
--- a/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
+++ b/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
@@ -1,18 +1,15 @@
 # UNICORE access via REST API
 
-**%RED%The UNICORE support has been abandoned and so this way of access
-is no longer available.%ENDCOLOR%**
+!!! warning
 
-Most of the UNICORE features are also available using its REST API.
-
-This API is documented here:
-
-<https://sourceforge.net/p/unicore/wiki/REST_API/>
+    This page is outdated! The UNICORE support has been abandoned and so this way of access is no
+    longer available.
 
-Some useful examples of job submission via REST are available at:
-
-<https://sourceforge.net/p/unicore/wiki/REST_API_Examples/>
-
-The base address for the Taurus system at the ZIH is:
+Most of the UNICORE features are also available using its REST API.
 
-unicore.zih.tu-dresden.de:8080/TAURUS/rest/core
+* This API is documented here:
+    * [https://sourceforge.net/p/unicore/wiki/REST_API/](https://sourceforge.net/p/unicore/wiki/REST_API/)
+* Some useful examples of job submission via REST are available at:
+    * [https://sourceforge.net/p/unicore/wiki/REST_API_Examples/](https://sourceforge.net/p/unicore/wiki/REST_API_Examples/)
+* The base address for the system at the ZIH is:
+    * `unicore.zih.tu-dresden.de:8080/TAURUS/rest/core`
diff --git a/doc.zih.tu-dresden.de/docs/archive/vampir_trace.md b/doc.zih.tu-dresden.de/docs/archive/vampirtrace.md
similarity index 55%
rename from doc.zih.tu-dresden.de/docs/archive/vampir_trace.md
rename to doc.zih.tu-dresden.de/docs/archive/vampirtrace.md
index 76d267cf1d5eb7115dd26417b42638ca16e07040..15746b60035e4ec7999159693dcaa56ca5f54f9f 100644
--- a/doc.zih.tu-dresden.de/docs/archive/vampir_trace.md
+++ b/doc.zih.tu-dresden.de/docs/archive/vampirtrace.md
@@ -1,16 +1,21 @@
 # VampirTrace
 
-VampirTrace is a performance monitoring tool, that produces tracefiles
-during a program run. These tracefiles can be analyzed and visualized by
-the tool [Vampir] **todo** Vampir. Vampir Supports lots of features
-e.g.
-
--   MPI, OpenMP, pthreads, and hybrid programs
--   Manual source code instrumentation
--   Recording hardware counter by using PAPI library
--   Memory allocation tracing
--   I/O tracing
--   Function filtering and grouping
+!!! warning
+
+    As of 2014 VampirTrace is discontinued. This site only serves an archival purpose. The official
+    successor is the new Scalable Performance Measurement Infrastructure
+    [Score-P](../software/scorep.md).
+
+VampirTrace is a performance monitoring tool that produces tracefiles during a program run. These
+tracefiles can be analyzed and visualized by the tool [Vampir](../software/vampir.md). VampirTrace
+supports many features, e.g.
+
+- MPI, OpenMP, Pthreads, and hybrid programs
+- Manual source code instrumentation
+- Recording hardware counters by using the PAPI library
+- Memory allocation tracing
+- I/O tracing
+- Function filtering and grouping
 
 Only the basic usage is shown in this Wiki. For a comprehensive
 VampirTrace user manual refer to the
@@ -18,86 +23,77 @@ VampirTrace user manual refer to the
 
 Before using VampirTrace, set up the correct environment with
 
-```Bash
+```console
 module load vampirtrace
 ```
 
-To make measurements with VampirTrace, the user's application program
-needs to be instrumented, i.e., at specific important points
-(\`\`events'') VampirTrace measurement calls have to be activated. By
-default, VampirTrace handles this automatically. In order to enable
-instrumentation of function calls, MPI as well as OpenMP events, the
-user only needs to replace the compiler and linker commands with
-VampirTrace's wrappers. Following wrappers exist:
+To make measurements with VampirTrace, the user's application program needs to be instrumented,
+i.e., at specific important points (*events*) VampirTrace measurement calls have to be activated. By
+default, VampirTrace handles this automatically. In order to enable instrumentation of function
+calls, MPI as well as OpenMP events, the user only needs to replace the compiler and linker commands
+with VampirTrace's wrappers. The following wrappers exist:
 
-|                      |                             |
-|----------------------|-----------------------------|
 | Programming Language | VampirTrace Wrapper Command |
+|----------------------|-----------------------------|
 | C                    | `vtcc`                      |
 | C++                  | `vtcxx`                     |
 | Fortran 77           | `vtf77`                     |
 | Fortran 90           | `vtf90`                     |
 
-The following sections show some examples depending on the
-parallelization type of the program.
+The following sections show some examples depending on the parallelization type of the program.
 
-## Serial programs
+## Serial Programs
 
-Compiling serial code is the default behavior of the wrappers. Simply
-replace the compiler by VampirTrace's wrapper:
+Compiling serial code is the default behavior of the wrappers. Simply replace the compiler by
+VampirTrace's wrapper:
 
 |                      |                               |
 |----------------------|-------------------------------|
 | original             | `ifort a.f90 b.f90 -o myprog` |
 | with instrumentation | `vtf90 a.f90 b.f90 -o myprog` |
 
-This will instrument user functions (if supported by compiler) and link
-the VampirTrace library.
+This will instrument user functions (if supported by compiler) and link the VampirTrace library.
 
-## MPI parallel programs
+## MPI Parallel Programs
 
-If your MPI implementation uses MPI compilers (this is the case on
-Deimos), you need to tell VampirTrace's wrapper to use this compiler
-instead of the serial one:
+If your MPI implementation uses MPI compilers (this is the case on [Deimos](system_deimos.md)), you
+need to tell VampirTrace's wrapper to use this compiler instead of the serial one:
 
 |                      |                                      |
 |----------------------|--------------------------------------|
 | original             | `mpicc hello.c -o hello`             |
 | with instrumentation | `vtcc -vt:cc mpicc hello.c -o hello` |
 
-MPI implementations without own compilers (as on the Altix) require the
-user to link the MPI library manually. In this case, you simply replace
-the compiler by VampirTrace's compiler wrapper:
+MPI implementations without their own compilers (as on the [Altix](system_altix.md)) require the user to
+link the MPI library manually. In this case, you simply replace the compiler by VampirTrace's
+compiler wrapper:
 
 |                      |                               |
 |----------------------|-------------------------------|
 | original             | `icc hello.c -o hello -lmpi`  |
 | with instrumentation | `vtcc hello.c -o hello -lmpi` |
 
-If you want to instrument MPI events only (creates smaller trace files
-and less overhead) use the option `-vt:inst manual` to disable automatic
-instrumentation of user functions.
+If you want to instrument MPI events only (this creates smaller trace files and less overhead), use
+the option `-vt:inst manual` to disable automatic instrumentation of user functions.
 
-## OpenMP parallel programs
+## OpenMP Parallel Programs
 
-When VampirTrace detects OpenMP flags on the command line, OPARI is
-invoked for automatic source code instrumentation of OpenMP events:
+When VampirTrace detects OpenMP flags on the command line, OPARI is invoked for automatic source
+code instrumentation of OpenMP events:
 
 |                      |                            |
 |----------------------|----------------------------|
 | original             | `ifort -openmp pi.f -o pi` |
 | with instrumentation | `vtf77 -openmp pi.f -o pi` |
 
-## Hybrid MPI/OpenMP parallel programs
+## Hybrid MPI/OpenMP Parallel Programs
 
-With a combination of the above mentioned approaches, hybrid
-applications can be instrumented:
+With a combination of the above mentioned approaches, hybrid applications can be instrumented:
 
 |                      |                                                     |
 |----------------------|-----------------------------------------------------|
 | original             | `mpif90 -openmp hybrid.F90 -o hybrid`               |
 | with instrumentation | `vtf90 -vt:f90 mpif90 -openmp hybrid.F90 -o hybrid` |
 
-By default, running a VampirTrace instrumented application should result
-in a tracefile in the current working directory where the application
-was executed.
+By default, running a VampirTrace instrumented application should result in a tracefile in the
+current working directory where the application was executed.
diff --git a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
new file mode 100644
index 0000000000000000000000000000000000000000..f5492e7f35ff26e425bff9c7b246f7c0d4a29fb0
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
@@ -0,0 +1,244 @@
+# Content Rules
+
+**Remark:** Avoid using tabs both in markdown files and in `mkdocs.yaml`. Type spaces instead.
+
+## New Page and Pages Structure
+
+The pages structure is defined in the configuration file `mkdocs.yaml`:
+
+```yaml
+docs/
+  - Home: index.md
+  - Application for HPC Login: application.md
+  - Request for Resources: req_resources.md
+  - Access to the Cluster: access.md
+  - Available Software and Usage:
+    - Overview: software/overview.md
+  [...]
+```
+
+To add a new page to the documentation follow these two steps:
+
+1. Create a new markdown file under `docs/subdir/file_name.md` and put the documentation inside.
+The sub-directory and file name should follow the pattern `fancy_title_and_more.md`.
+1. Add `subdir/file_name.md` to the configuration file `mkdocs.yml` by updating the navigation
+   section.
+
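+For example, adding a hypothetical page `docs/software/fancy_tool.md` (the name is only an
+illustration) to the navigation section could look like this:
+
+```yaml
+  - Available Software and Usage:
+    - Overview: software/overview.md
+    - Fancy Tool: software/fancy_tool.md
+```
+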
+Make sure that the new page **is not floating**, i.e., it can be reached directly from
+the documentation structure.
+
+## Markdown
+
+1. Please keep things simple, i.e., avoid using fancy markdown dialects.
+    * [Cheat Sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
+    * [Style Guide](https://github.com/google/styleguide/blob/gh-pages/docguide/style.md)
+
+1. Do not add large binary files or high resolution images to the repository. See this valuable
+   document for [image optimization](https://web.dev/fast/#optimize-your-images).
+
+1. [Admonitions](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) may be
+actively used, especially for longer code examples, warnings, tips, important information that
+should be highlighted, etc. Code examples longer than half a screen height should be collapsed
+(and indented):
+
+??? example
+    ```Bash
+    [...]
+    # very long example here
+    [...]
+    ```
+
+## Writing Style
+
+* Capitalize headings, e.g. *Exclusive Reservation of Hardware*
+
+## Spelling and Technical Wording
+
+To provide a consistent and high quality documentation, and help users to find the right pages,
+there is a list of conventions w.r.t. spelling and technical wording.
+
+* Language settings: en_us
+* `I/O` not `IO`
+* `Slurm` not `SLURM`
+* `Filesystem` not `file system`
+* `ZIH system` and `ZIH systems` not `Taurus` etc. if possible
+* `Workspace` not `work space`
+
+## Code Blocks and Command Prompts
+
+Showing commands and sample output is an important part of all technical documentation. To make
+things as clear for readers as possible and provide a consistent documentation, some rules have to
+be followed.
+
+1. Use ticks to mark code blocks and commands, not italic font.
+1. Specify language for code blocks ([see below](#code-blocks-and-syntax-highlighting)).
+1. All code blocks and commands should be runnable from a login node or a node within a specific
+   partition (e.g., `ml`).
+1. It should be clear from the prompt, where the command is run (e.g. local machine, login node or
+   specific partition).
+
+### Prompts
+
+We follow these rules regarding prompts:
+
+| Host/Partition         | Prompt           |
+|------------------------|------------------|
+| Login nodes            | `marie@login$`   |
+| Arbitrary compute node | `marie@compute$` |
+| `haswell` partition    | `marie@haswell$` |
+| `ml` partition         | `marie@ml$`      |
+| `alpha` partition      | `marie@alpha$`   |
+| `romeo` partition      | `marie@romeo$`   |
+| `julia` partition      | `marie@julia$`   |
+| Localhost              | `marie@local$`   |
+
+*Remarks:*
+
+* **Always use a prompt**, even if there is no output provided for the shown command.
+* All code blocks should use long parameter names (e.g. Slurm parameters), if available.
+* All code blocks which specify some general command templates, e.g. containing `<` and `>`
+  (see [Placeholders](#mark-placeholders)), should use `bash` for the code block. Additionally,
+  an example invocation, perhaps with output, should be given with the normal `console` code block.
+  See also [Code Block description below](#code-blocks-and-syntax-highlighting).
+* Using some magic, the prompt as well as the output is identified and will not be copied!
+* Stick to the [generic user name](#data-privacy-and-generic-user-name) `marie`.
+
+### Code Blocks and Syntax Highlighting
+
+This project makes use of the extension
+[pymdownx.highlight](https://squidfunk.github.io/mkdocs-material/reference/code-blocks/) for syntax
+highlighting.  There is a complete list of supported
+[language short codes](https://pygments.org/docs/lexers/).
+
+For consistency, use the following short codes within this project:
+
+With the exception of command templates, use `console` for shell sessions and console output:
+
+```` markdown
+```console
+marie@login$ ls
+foo
+bar
+```
+````
+
+Make sure that shell session and console code blocks are executable on the login nodes of the HPC system.
+
+Command templates use [Placeholders](#mark-placeholders) to mark replaceable code parts. Command
+templates should give a general idea of invocation and thus, do not contain any output. Use a
+`bash` code block followed by an invocation example (with `console`):
+
+```` markdown
+```bash
+marie@local$ ssh -NL <local port>:<compute node>:<remote port> <zih login>@tauruslogin.hrsk.tu-dresden.de
+```
+
+```console
+marie@local$ ssh -NL 5901:172.24.146.46:5901 marie@tauruslogin.hrsk.tu-dresden.de
+```
+````
+
+Also use `bash` for shell scripts such as job files:
+
+```` markdown
+```bash
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --time=01:00:00
+#SBATCH --output=slurm-%j.out
+
+module load foss
+
+srun a.out
+```
+````
+
+!!! important
+
+    Use long parameter names where possible to ease understanding.
+
+`python` for Python source code:
+
+```` markdown
+```python
+from time import gmtime, strftime
+print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
+```
+````
+
+`pycon` for Python console:
+
+```` markdown
+```pycon
+>>> from time import gmtime, strftime
+>>> print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
+2021-08-03 07:20:33
+```
+````
+
+Line numbers can be added via
+
+```` markdown
+```bash linenums="1"
+#!/bin/bash
+
+#SBATCH -N 1
+#SBATCH -n 23
+#SBATCH -t 02:10:00
+
+srun a.out
+```
+````
+
+Specific Lines can be highlighted by using
+
+```` markdown
+```bash hl_lines="2 3"
+#!/bin/bash
+
+#SBATCH -N 1
+#SBATCH -n 23
+#SBATCH -t 02:10:00
+
+srun a.out
+```
+````
+
+### Data Privacy and Generic User Name
+
+Where possible, replace login, project name and other private data with clearly arbitrary placeholders.
+E.g., use the generic login `marie` and the corresponding project name `p_marie`.
+
+```console
+marie@login$ ls -l
+drwxr-xr-x   3 marie p_marie      4096 Jan 24  2020 code
+drwxr-xr-x   3 marie p_marie      4096 Feb 12  2020 data
+-rw-rw----   1 marie p_marie      4096 Jan 24  2020 readme.md
+```
+
+## Mark Omissions
+
+If showing only a snippet of a long output, omissions are marked with `[...]`.
+
+## Unix Rules
+
+Stick to the Unix rules on optional and required arguments, and selection of item sets:
+
+* `<required argument or value>`
+* `[optional argument or value]`
+* `{choice1|choice2|choice3}`
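+
+For example, a hypothetical command template (the tool name and options are only illustrative)
+following these conventions could look like this:
+
+```bash
+marie@login$ fancy_tool --input <input file> [--verbose] --mode {fast|accurate|debug}
+```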
+
+## Graphics and Attachments
+
+All graphics and attachments are saved within the `misc` directory of the respective subdirectory in
+`docs`.
+
+The syntax to insert a graphic or attachment into a page is
+
+```markdown
+![PuTTY: Switch on X11](misc/putty2.jpg)
+{: align="center"}
+```
+
+The attribute `align` is optional. By default, graphics are left aligned. **Note:** It is crucial to
+have `{: align="center"}` on a new line.
diff --git a/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md b/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
new file mode 100644
index 0000000000000000000000000000000000000000..c0d04ffcd04a2655b352c64c7442403e46df18d8
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
@@ -0,0 +1,132 @@
+# Contributing Using a Local Clone and a Docker Container
+
+## Git Procedure
+
+Please follow this standard Git procedure for working with a local clone:
+
+1. Change to a local (unencrypted) filesystem. (We have seen problems running the container on an
+ecryptfs filesystem. So you might want to use e.g. `/tmp` as the start directory.)
+1. Create a new directory, e.g. with `mkdir hpc-wiki`
+1. Change into the new directory, e.g. `cd hpc-wiki`
+1. Clone the Git repository:
+`git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git .` (don't forget the
+dot)
+1. Create a new feature branch for you to work in. Ideally, name it like the file you want to
+modify or the issue you want to work on, e.g.: `git checkout -b issue-174`. (If you are uncertain
+about the name of a file, please look into `mkdocs.yaml`.)
+1. Improve the documentation with your preferred editor, i.e. add new files and correct mistakes.
+Your changes will later be checked automatically by our CI pipeline.
+1. Use `git add <FILE>` to select your improvements for the next commit.
+1. Commit the changes with `git commit -m "<DESCRIPTION>"`. The description should be meaningful
+and describe your changes. If you work on an issue, please also add "Closes #174" (for issue 174).
+1. Push the local changes to the GitLab server, e.g. with `git push origin issue-174`.
+1. As an output you get a link to create a merge request against the preview branch.
+1. When the merge request is created, a continuous integration (CI) pipeline automatically checks
+your contributions.
+
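+A condensed sketch of this sequence could look like this (issue number, file name, and commit
+message are only placeholders):
+
+```console
+marie@local$ mkdir hpc-wiki
+marie@local$ cd hpc-wiki
+marie@local$ git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git .
+marie@local$ git checkout -b issue-174
+marie@local$ git add doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+marie@local$ git commit -m "Correct typo. Closes #174"
+marie@local$ git push origin issue-174
+```
+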
+You can find the details and commands to preview your changes and apply checks in the next section.
+
+## Preparation
+
+Assuming you already have a working Docker installation and have cloned the repository as mentioned
+above, a few more preparation steps are necessary. In summary, you need:
+
+* a working Docker installation
+* all necessary access/execution rights
+* a local clone of the repository in the directory `./hpc-wiki`
+
+Build the Docker image. This might take a while, but you only have to
+do it once in a while.
+
+```bash
+cd hpc-wiki
+docker build -t hpc-compendium .
+```
+
+## Working with the Docker Container
+
+Here is a suggestion of a workflow which might be suitable for you.
+
+### Start the Local Web Server
+
+The command to start the dockerized web server is:
+
+```bash
+docker run --name=hpc-compendium -p 8000:8000 --rm -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c "mkdocs build && mkdocs serve -a 0.0.0.0:8000"
+```
+
+You can view the documentation via [http://localhost:8000](http://localhost:8000) in your browser, now.
+
+!!! note
+
+    You can keep the local web server running in this shell to always have the opportunity to see
+    the result of your changes in the browser. Simply open another terminal window for other
+    commands.
+
+You can now update the contents in your preferred editor. The running container automatically takes
+care of file changes and rebuilds the documentation whenever you save a file.
+
+With the details described below, it will then be easy to follow the guidelines for local
+correctness checks before submitting your changes and requesting the merge.
+
+### Run the Proposed Checks Inside Container
+
+In our continuous integration (CI) pipeline, a merge request triggers the automated check of
+
+* correct links,
+* correct spelling,
+* correct text format.
+
+If one of them fails, the merge request will not be accepted. To prevent this, you can run these
+checks locally and adapt your files accordingly.
+
+To avoid a lot of retyping, use the following in your shell:
+
+```bash
+alias wiki="docker run --name=hpc-compendium --rm -it -w /docs --mount src=$PWD/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c"
+```
+
+You are now ready to use the different checks:
+
+#### Linter
+
+If you want to check whether the markdown files are formatted properly, use the following command:
+
+```bash
+wiki 'markdownlint docs'
+```
+
+#### Spell Checker
+
+For spell-checking a single file, e.g.
+`doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md`, use:
+
+```bash
+wiki 'util/check-spelling.sh docs/software/big_data_frameworks_spark.md'
+```
+
+For spell-checking all files, use:
+
+```bash
+wiki 'find docs -type f -name "*.md" | xargs -L1 util/check-spelling.sh'
+```
+
+This outputs all words of all files that are unknown to the spell checker.
+To let the spell checker "know" a word, append it to
+`doc.zih.tu-dresden.de/wordlist.aspell`.
+
+#### Link Checker
+
+To check a single file, e.g.
+`doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md`, use:
+
+```bash
+wiki 'markdown-link-check docs/software/big_data_frameworks_spark.md'
+```
+
+To check whether there are links that point to a wrong target, use
+(this may take a while and gives a lot of output because it runs over all files):
+
+```bash
+wiki 'find docs -type f -name "*.md" | xargs -L1 markdown-link-check'
+```
diff --git a/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md b/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
new file mode 100644
index 0000000000000000000000000000000000000000..31105a5208932ff49ee86d939ed8faa744dad854
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
@@ -0,0 +1,40 @@
+# How-To Contribute
+
+!!! cite "Chinese proverb"
+
+    Ink is better than the best memory.
+
+## Contribute via Issue
+
+Users can contribute to the documentation via the
+[GitLab issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
+For that, open an issue to report typos and missing documentation or request for more precise
+wording etc.  ZIH staff will get in touch with you to resolve the issue and improve the
+documentation.
+
+!!! warning "HPC support"
+
+    Non-documentation issues and requests need to be send as ticket to
+    [hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de).
+
+## Contribute via Web IDE
+
+GitLab offers a rich and versatile web interface to work with repositories. To fix typos and edit
+source files, just select the file of interest and click the `Edit` button. A text and commit
+editor are invoked: Make your changes, add a meaningful commit message and commit the changes.
+
+The more sophisticated integrated Web IDE is reached from the top level menu of the repository or
+by selecting any source file.
+
+Other git services might have an equivalent web interface to interact with the repository. Please
+refer to the corresponding documentation for further information.
+
+<!--This option of contributing is only available for users of-->
+<!--[gitlab.hrz.tu-chemnitz.de](https://gitlab.hrz.tu-chemnitz.de). Furthermore, -->
+
+## Contribute Using Git Locally
+
+For experienced Git users, we provide a Docker container that includes all checks of the CI engine
+used in the back-end. Using these checks locally should ensure that merge requests will not be blocked
+due to automatic checking.
+For details, see [Work Locally Using Containers](contribute_container.md).
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
deleted file mode 100644
index 14354286e9793d85f92f8456e733187cb826e854..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
+++ /dev/null
@@ -1,144 +0,0 @@
-# BeeGFS file system
-
-%RED%Note: This page is under construction. %ENDCOLOR%%RED%The pipeline
-will be changed soon%ENDCOLOR%
-
-**Prerequisites:** To work with Tensorflow you obviously need \<a
-href="Login" target="\_blank">access\</a> for the Taurus system and
-basic knowledge about Linux, mounting, SLURM system.
-
-**Aim** \<span style="font-size: 1em;"> of this page is to introduce
-users how to start working with the BeeGFS file\</span>\<span
-style="font-size: 1em;"> system - a high-performance parallel file
-system.\</span>
-
-## Mount point
-
-Understanding of mounting and the concept of the mount point is
-important for using file systems and object storage. A mount point is a
-directory (typically an empty one) in the currently accessible file
-system on which an additional file system is mounted (i.e., logically
-attached). \<span style="font-size: 1em;">The default mount points for a
-system are the directories in which file systems will be automatically
-mounted unless told by the user to do otherwise. \</span>\<span
-style="font-size: 1em;">All partitions are attached to the system via a
-mount point. The mount point defines the place of a particular data set
-in the file system. Usually, all partitions are connected through the
-root partition. On this partition, which is indicated with the slash
-(/), directories are created. \</span>
-
-## BeeGFS introduction
-
-\<span style="font-size: 1em;"> [BeeGFS](https://www.beegfs.io/content/)
-is the parallel cluster file system. \</span>\<span style="font-size:
-1em;">BeeGFS spreads data \</span>\<span style="font-size: 1em;">across
-multiple \</span>\<span style="font-size: 1em;">servers to aggregate
-\</span>\<span style="font-size: 1em;">capacity and \</span>\<span
-style="font-size: 1em;">performance of all \</span>\<span
-style="font-size: 1em;">servers to provide a highly scalable shared
-network file system with striped file contents. This is made possible by
-the separation of metadata and file contents. \</span>
-
-BeeGFS is fast, flexible, and easy to manage storage if for your issue
-filesystem plays an important role use BeeGFS. It addresses everyone,
-who needs large and/or fast file storage
-
-## Create BeeGFS file system
-
-To reserve nodes for creating BeeGFS file system you need to create a
-[batch](../jobs_and_resources/slurm.md) job
-
-    #!/bin/bash
-    #SBATCH -p nvme
-    #SBATCH -N 4
-    #SBATCH --exclusive
-    #SBATCH --time=1-00:00:00
-    #SBATCH --beegfs-create=yes
-
-    srun sleep 1d  # sleep for one day
-
-    ## when finished writing, submit with:  sbatch <script_name>
-
-Example output with job id:
-
-    Submitted batch job 11047414   #Job id n.1
-
-Check the status of the job with 'squeue -u \<username>'
-
-## Mount BeeGFS file system
-
-You can mount BeeGFS file system on the ML partition (ppc64
-architecture) or on the Haswell [partition](../jobs_and_resources/system_taurus.md) (x86_64
-architecture)
-
-### Mount BeeGFS file system on the ML
-
-Job submission can be done with the command (use job id (n.1) from batch
-job used for creating BeeGFS system):
-
-    srun -p ml --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash                #Job submission on ml nodes
-
-Example output:
-
-    srun: job 11054579 queued and waiting for resources         #Job id n.2
-    srun: job 11054579 has been allocated resources
-
-### Mount BeeGFS file system on the Haswell nodes (x86_64)
-
-Job submission can be done with the command (use job id (n.1) from batch
-job used for creating BeeGFS system):
-
-    srun --constrain=DA --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash       #Job submission on the Haswell nodes
-
-Example output:
-
-    srun: job 11054580 queued and waiting for resources          #Job id n.2
-    srun: job 11054580 has been allocated resources
-
-## Working with BeeGFS files for both types of nodes
-
-Show contents of the previously created file, for example,
-beegfs_11054579 (where 11054579 - job id **n.2** of srun job):
-
-    cat .beegfs_11054579
-
-Note: don't forget to go over to your home directory where the file
-located
-
-Example output:
-
-    #!/bin/bash
-
-    export BEEGFS_USER_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<your_id>"
-    export BEEGFS_PROJECT_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<name of your project>" 
-
-Execute the content of the file:
-
-    source .beegfs_11054579
-
-Show content of user's BeeGFS directory with the command:
-
-    ls -la ${BEEGFS_USER_DIR}
-
-Example output:
-
-    total 0
-    drwx--S--- 2 <username> swtest  6 21. Jun 10:54 .
-    drwxr-xr-x 4 root        root  36 21. Jun 10:54 ..
-
-Show content of the user's project BeeGFS directory with the command:
-
-    ls -la ${BEEGFS_PROJECT_DIR}
-
-Example output:
-
-    total 0
-    drwxrws--T 2 root swtest  6 21. Jun 10:54 .
-    drwxr-xr-x 4 root root   36 21. Jun 10:54 ..
-
-Note: If you want to mount the BeeGFS file system on an x86 instead of
-an ML (power) node, you can either choose the partition "interactive" or
-the partition "haswell64", but for the partition "haswell64" you have to
-add the parameter "--exclude=taurusi\[4001-4104,5001- 5612\]" to your
-job. This is necessary because the BeeGFS client is only installed on
-the 6000 island.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/beegfs.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/beegfs.md
new file mode 100644
index 0000000000000000000000000000000000000000..1e2460c3852ffc2a59c8f3a1b8f7c6fcc66b5efb
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/beegfs.md
@@ -0,0 +1,73 @@
+# BeeGFS
+
+Commands to work with the BeeGFS filesystem.
+
+## Capacity and Filesystem Health
+
+View storage and inode capacity and utilization for metadata and storage targets.
+
+```console
+marie@login$ beegfs-df -p /beegfs/global0
+```
+
+The `-p` parameter needs to be the mountpoint of the filesystem and is mandatory.
+
+List storage and inode capacity, reachability and consistency information of each storage target.
+
+```console
+marie@login$ beegfs-ctl --listtargets --nodetype=storage --spaceinfo --longnodes --state --mount=/beegfs/global0
+```
+
+To check the capacity of the metadata server, just toggle the `--nodetype` argument.
+
+```console
+marie@login$ beegfs-ctl --listtargets --nodetype=meta --spaceinfo --longnodes --state --mount=/beegfs/global0
+```
+
+## Striping
+
+Show the stripe information of a given file on the filesystem and on which storage target the
+file is stored.
+
+```console
+marie@login$ beegfs-ctl --getentryinfo /beegfs/global0/my-workspace/myfile --mount=/beegfs/global0
+```
+
+Set the stripe pattern for a directory. In BeeGFS, the stripe pattern will be inherited from a
+directory to its children.
+
+```console
+marie@login$ beegfs-ctl --setpattern --chunksize=1m --numtargets=16 /beegfs/global0/my-workspace/ --mount=/beegfs/global0
+```
+
+This will set the stripe pattern for `/beegfs/global0/my-workspace/` to a chunk size of 1 MiB
+distributed over 16 storage targets.
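+
+To verify the new pattern, you can, for example, reuse the `--getentryinfo` command from above on
+the directory itself:
+
+```console
+marie@login$ beegfs-ctl --getentryinfo /beegfs/global0/my-workspace/ --mount=/beegfs/global0
+```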
+
+Find files located on certain servers or targets. The following command searches for all files in
+the `my-workspace` directory that are stored on the storage targets with ID 4 or 30.
+
+```console
+marie@login$ beegfs-ctl --find /beegfs/global0/my-workspace/ --targetid=4 --targetid=30 --mount=/beegfs/global0
+```
+
+## Network
+
+View the network addresses of the filesystem servers.
+
+```console
+marie@login$ beegfs-ctl --listnodes --nodetype=meta --nicdetails --mount=/beegfs/global0
+marie@login$ beegfs-ctl --listnodes --nodetype=storage --nicdetails --mount=/beegfs/global0
+marie@login$ beegfs-ctl --listnodes --nodetype=client --nicdetails --mount=/beegfs/global0
+```
+
+Display the connections the client is actually using:
+
+```console
+marie@login$ beegfs-net
+```
+
+Display the possible connectivity of the services:
+
+```console
+marie@login$ beegfs-check-servers -p /beegfs/global0
+```
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index 5365ac4f3cfed5c4bc6a7051802bfbfe1eb7b17d..4174e2b46c0ff69b3fd6d9a12b0cf626e296bd88 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -1,46 +1,23 @@
 # Overview
 
-As soon as you have access to ZIH systems you have to manage your data. Several file systems are
-available. Each file system serves for special purpose according to their respective capacity,
+As soon as you have access to ZIH systems, you have to manage your data. Several filesystems are
+available. Each filesystem serves for special purpose according to their respective capacity,
 performance and permanence.
 
 ## Work Directories
 
-| File system | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
+| Filesystem  | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
 |:------------|:------------------|:---------|:-------------|:-------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via [Workspaces](workspaces.md). Not made for billions of files!                                                                                   |
 | `Lustre`    | `/lustre/ssd`     | 40 TB    | global       | No     | Only accessible via [Workspaces](workspaces.md). For small I/O operations                                                                                          |
-| `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via [Workspaces](workspaces.md). Fastest available file system, only for large parallel applications running with millions of small I/O operations |
+| `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via [Workspaces](workspaces.md). Fastest available filesystem, only for large parallel applications running with millions of small I/O operations |
 | `ext4`      | `/tmp`            | 95 GB    | local        | No     | is cleaned up after the job automatically  |
 
-## Warm Archive
-
-!!! warning
-    This is under construction. The functionality is not there, yet.
-
-The warm archive is intended a storage space for the duration of a running HPC-DA project. It can
-NOT substitute a long-term archive. It consists of 20 storage nodes with a net capacity of 10 PB.
-Within Taurus (including the HPC-DA nodes), the management software "Quobyte" enables access via
-
-- native quobyte client - read-only from compute nodes, read-write
-  from login and nvme nodes
-- S3 - read-write from all nodes,
-- Cinder (from OpenStack cluster).
-
-For external access, you can use:
-
-- S3 to `<bucket>.s3.taurusexport.hrsk.tu-dresden.de`
-- or normal file transfer via our taurusexport nodes (see [DataManagement](overview.md)).
-
-An HPC-DA project can apply for storage space in the warm archive. This is limited in capacity and
-duration.
-TODO
-
-## Recommendations for File System Usage
+## Recommendations for Filesystem Usage
 
 To work as efficient as possible, consider the following points
 
-- Save source code etc. in `/home` or /projects/...
+- Save source code etc. in `/home` or `/projects/...`
 - Store checkpoints and other temporary data in `/scratch/ws/...`
 - Compilation in `/dev/shm` or `/tmp`
 
@@ -50,102 +27,30 @@ Getting high I/O-bandwidth
 - Use many processes (writing in the same file at the same time is possible)
 - Use large I/O transfer blocks
 
-## Cheat Sheet for Debugging File System Issues
+## Cheat Sheet for Debugging Filesystem Issues
 
-Every Taurus-User should normally be able to perform the following commands to get some intel about
+Users can select from the following commands to get some idea about
 their data.
 
 ### General
 
-For the first view, you can easily use the "df-command".
-
-```Bash
-df
-```
-
-Alternatively, you can use the "findmnt"-command, which is also able to perform an `df` by adding the
-"-D"-parameter.
-
-```Bash
-findmnt -D
-```
-
-Optional you can use the `-t`-parameter to specify the fs-type or the `-o`-parameter to alter the
-output.
-
-We do **not recommend** the usage of the "du"-command for this purpose.  It is able to cause issues
-for other users, while reading data from the filesystem.
-
-### BeeGFS
-
-Commands to work with the BeeGFS file system.
-
-#### Capacity and file system health
-
-View storage and inode capacity and utilization for metadata and storage targets.
-
-```Bash
-beegfs-df -p /beegfs/global0
-```
-
-The `-p` parameter needs to be the mountpoint of the file system and is mandatory.
-
-List storage and inode capacity, reachability and consistency information of each storage target.
-
-```Bash
-beegfs-ctl --listtargets --nodetype=storage --spaceinfo --longnodes --state --mount=/beegfs/global0
-```
-
-To check the capacity of the metadata server just toggle the `--nodetype` argument.
-
-```Bash
-beegfs-ctl --listtargets --nodetype=meta --spaceinfo --longnodes --state --mount=/beegfs/global0
-```
-
-#### Striping
-
-View the stripe information of a given file on the file system and shows on which storage target the
-file is stored.
-
-```Bash
-beegfs-ctl --getentryinfo /beegfs/global0/my-workspace/myfile --mount=/beegfs/global0
-```
-
-Set the stripe pattern for an directory. In BeeGFS the stripe pattern will be inherited form a
-directory to its children.
-
-```Bash
-beegfs-ctl --setpattern --chunksize=1m --numtargets=16 /beegfs/global0/my-workspace/ --mount=/beegfs/global0
-```
-
-This will set the stripe pattern for `/beegfs/global0/path/to/mydir/` to a chunksize of 1M
-distributed over 16 storage targets.
+For a first overview, you can use the command `df`.
 
-Find files located on certain server or targets. The following command searches all files that are
-stored on the storage targets with id 4 or 30 and my-workspace directory.
-
-```Bash
-beegfs-ctl --find /beegfs/global0/my-workspace/ --targetid=4 --targetid=30 --mount=/beegfs/global0
+```console
+marie@login$ df
 ```
 
-#### Network
-
-View the network addresses of the file system servers.
+Alternatively, you can use the command `findmnt`, which is also able to report space usage
+by adding the parameter `-D`:
 
-```Bash
-beegfs-ctl --listnodes --nodetype=meta --nicdetails --mount=/beegfs/global0
-beegfs-ctl --listnodes --nodetype=storage --nicdetails --mount=/beegfs/global0
-beegfs-ctl --listnodes --nodetype=client --nicdetails --mount=/beegfs/global0
+```console
+marie@login$ findmnt -D
 ```
 
-Display connections the client is actually using
+Optionally, you can use the parameter `-t` to specify the filesystem type or the parameter `-o` to
+alter the output.
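+
+For example, the following sketch limits the report to Lustre mounts and picks specific output
+columns (adjust the filesystem type and columns to your needs):
+
+```console
+marie@login$ findmnt -D -t lustre
+marie@login$ findmnt -t lustre -o TARGET,FSTYPE,SIZE,AVAIL,USE%
+```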
 
-```Bash
-beegfs-net
-```
+!!! important
 
-Display possible connectivity of the services
-
-```Bash
-beegfs-check-servers -p /beegfs/global0
-```
+    Do **not** use the `du`-command for this purpose. It can cause issues
+    for other users while it reads data from the filesystem.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
deleted file mode 100644
index 998699215481e1318a3b5aa036eac8b56fa7d94e..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# HPC Storage Changes 2019
-
-## Hardware changes require new approach**
-
-\<font face="Open Sans, sans-serif">At the moment we are preparing to
-remove our old hardware from 2013. This comes with a shrinking of our
-/scratch from 5 to 4 PB. At the same time we have now our "warm archive"
-operational for HPC with a capacity of 5 PB for now. \</font>
-
-\<font face="Open Sans, sans-serif">The tool concept of "workspaces" is
-common in a large number of HPC centers. The idea is to allocate a
-workspace directory in a certain storage system - connected with an
-expiry date. After a grace period the data is deleted automatically. The
-validity of a workspace can be extended twice. \</font>
-
-## \<font face="Open Sans, sans-serif"> **How to use workspaces?** \</font>
-
-\<font face="Open Sans, sans-serif">We have prepared a few examples at
-<https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/WorkSpaces>\</font>
-
--   \<p>\<font face="Open Sans, sans-serif">For transient data, allocate
-    a workspace, run your job, remove data, and release the workspace
-    from with\</font>\<font face="Open Sans, sans-serif">i\</font>\<font
-    face="Open Sans, sans-serif">n your job file.\</font>\</p>
--   \<p>\<font face="Open Sans, sans-serif">If you are working on a set
-    of data for weeks you might use workspaces in scratch and share them
-    with your groups by setting the file access attributes.\</font>\</p>
--   \<p>\<font face="Open Sans, sans-serif">For \</font>\<font
-    face="Open Sans, sans-serif">mid-term storage (max 3 years), use our
-    "warm archive" which is large but slow. It is available read-only on
-    the compute hosts and read-write an login and export nodes. To move
-    in your data, you might want to use the
-    [datamover nodes](../data_transfer/data_mover.md).\</font>\</p>
-
-## \<font face="Open Sans, sans-serif">Moving Data from /scratch and /lustre/ssd to your workspaces\</font>
-
-We are now mounting /lustre/ssd and /scratch read-only on the compute
-nodes. As soon as the non-workspace /scratch directories are mounted
-read-only on the login nodes as well, you won't be able to remove your
-old data from there in the usual way. So you will have to use the
-DataMover commands and ideally just move your data to your prepared
-workspace:
-
-```Shell Session
-dtmv /scratch/p_myproject/some_data /scratch/ws/myuser-mynewworkspace
-#or:
-dtmv /scratch/p_myproject/some_data /warm_archive/ws/myuser-mynewworkspace
-```
-
-Obsolete data can also be deleted like this:
-
-```Shell Session
-dtrm -rf /scratch/p_myproject/some_old_data
-```
-
-**%RED%At the end of the year we will delete all data on /scratch and
-/lsuter/ssd outside the workspaces.%ENDCOLOR%**
-
-## Data life cycle management
-
-\<font face="Open Sans, sans-serif">Please be aware: \</font>\<font
-face="Open Sans, sans-serif">Data in workspaces will be deleted
-automatically after the grace period.\</font>\<font face="Open Sans,
-sans-serif"> This is especially true for the warm archive. If you want
-to keep your data for a longer time please use our options for
-[long-term storage](preservation_research_data.md).\</font>
-
-\<font face="Open Sans, sans-serif">To \</font>\<font face="Open Sans,
-sans-serif">help you with that, you can attach your email address for
-notification or simply create an ICAL entry for your calendar
-(tu-dresden.de mailboxes only). \</font>
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
index 2d20726755cf07c9d4a4f9f87d3ae4d2b5825dbc..6aee19dd87cf1f9bcf589c2950ca11e5b99b1b65 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
@@ -1,8 +1,8 @@
 # Intermediate Archive
 
-With the "Intermediate Archive", ZIH is closing the gap between a normal disk-based file system and
-[Longterm Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical file
-system with disks for buffering and tapes for storing research data.
+With the "Intermediate Archive", ZIH is closing the gap between a normal disk-based filesystem and
+[Longterm Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical
+filesystem with disks for buffering and tapes for storing research data.
 
 Its intended use is the storage of research data for a maximal duration of 3 years. For storing the
 data after exceeding this time, the user has to supply essential metadata and migrate the files to
@@ -12,34 +12,33 @@ files.
 Some more information:
 
 - Maximum file size in the archive is 500 GB (split up your files, see
-  [Datamover](../data_transfer/data_mover.md))
+  [Datamover](../data_transfer/datamover.md))
 - Data will be stored in two copies on tape.
-- The bandwidth to this data is very limited. Hence, this file system
+- The bandwidth to this data is very limited. Hence, this filesystem
   must not be used directly as input or output for HPC jobs.
 
-## How to access the "Intermediate Archive"
+## Access the Intermediate Archive
 
 For storing and restoring your data in/from the "Intermediate Archive" you can use the tool
-[Datamover](../data_transfer/data_mover.md). To use the DataMover you have to login to Taurus
-(taurus.hrsk.tu-dresden.de).
+[Datamover](../data_transfer/datamover.md). To use the DataMover you have to login to ZIH systems.
 
-### Store data
+### Store Data
 
-```Shell Session
-dtcp -r /<directory> /archiv/<project or user>/<directory> # or
-dtrsync -av /<directory> /archiv/<project or user>/<directory>
+```console
+marie@login$ dtcp -r /<directory> /archiv/<project or user>/<directory> # or
+marie@login$ dtrsync -av /<directory> /archiv/<project or user>/<directory>
 ```
 
-### Restore data
+### Restore Data
 
-```Shell Session
-dtcp -r /archiv/<project or user>/<directory> /<directory> # or
-dtrsync -av /archiv/<project or user>/<directory> /<directory>
+```console
+marie@login$ dtcp -r /archiv/<project or user>/<directory> /<directory> # or
+marie@login$ dtrsync -av /archiv/<project or user>/<directory> /<directory>
 ```
 
 ### Examples
 
-```Shell Session
-dtcp -r /scratch/rotscher/results /archiv/rotscher/ # or
-dtrsync -av /scratch/rotscher/results /archiv/rotscher/results
+```console
+marie@login$ dtcp -r /scratch/rotscher/results /archiv/rotscher/ # or
+marie@login$ dtrsync -av /scratch/rotscher/results /archiv/rotscher/results
 ```
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
index 891808543974bb9ad92ed9897762f0d6d66bdbe2..d08a5d5f59490a8236fb6710b28d24d9a01fcfe6 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
@@ -1,11 +1,11 @@
-# Lustre File System(s)
+# Lustre Filesystems
 
 ## Large Files in /scratch
 
 The data containers in [Lustre](https://www.lustre.org) are called object storage targets (OST). The
 capacity of one OST is about 21 TB. All files are striped over a certain number of these OSTs. For
 small and medium files, the default number is 2. As soon as a file grows above ~1 TB it makes sense
-to spread it over a higher number of OSTs, e.g. 16. Once the file system is used >75%, the average
+to spread it over a higher number of OSTs, e.g. 16. Once the filesystem is used >75%, the average
 space per OST is only 5 GB. So, it is essential to split your larger files so that the chunks can be
 saved!
 
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md
index e1b5fca65e562a243590c8fb55f92242b2265b4a..bdbaa5a1523ec2fc06150195e18764cf14b618ef 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md
@@ -4,13 +4,13 @@ Correct organization of the structure of an HPC project is a straightforward way
 work of the whole team. There have to be rules and regulations that every member should follow. The
 uniformity of the project can be achieved by taking into account and setting up correctly
 
-  * the same **set of software** (modules, compiler, packages, libraries, etc),
-  * a defined **data life cycle management** including the same **data storage** or set of them,
-  * and **access rights** to project data.
+* the same **set of software** (modules, compiler, packages, libraries, etc),
+* a defined **data life cycle management** including the same **data storage** or set of them,
+* and **access rights** to project data.
 
 The used set of software within an HPC project can be management with environments on different
 levels either defined by [modules](../software/modules.md), [containers](../software/containers.md)
-or by [Python virtual environments](../software/python.md).
+or by [Python virtual environments](../software/python_virtual_environments.md).
 In the following, a brief overview on relevant topics w.r.t. data life cycle management is provided.
 
 ## Data Storage and Management
@@ -19,27 +19,27 @@ The main concept of working with data on ZIH systems bases on [Workspaces](works
 properly:
 
   * use a `/home` directory for the limited amount of personal data, simple examples and the results
-    of calculations. The home directory is not a working directory! However, `/home` file system is
+    of calculations. The home directory is not a working directory! However, the `/home` filesystem is
     [backed up](#backup) using snapshots;
-  * use `workspaces` as a place for working data (i.e. datasets); Recommendations of choosing the
+  * use `workspaces` as a place for working data (i.e. data sets); Recommendations of choosing the
     correct storage system for workspace presented below.
 
-### Taxonomy of File Systems
+### Taxonomy of Filesystems
 
 It is important to design your data workflow according to characteristics, like I/O footprint
 (bandwidth/IOPS) of the application, size of the data, (number of files,) and duration of the
-storage to efficiently use the provided storage and file systems.
-The page [file systems](file_systems.md) holds a comprehensive documentation on the different file
-systems.
+storage to efficiently use the provided storage and filesystems.
+The page [filesystems](file_systems.md) holds a comprehensive documentation on the different
+filesystems.
 <!--In general, the mechanisms of
 so-called--> <!--[Workspaces](workspaces.md) are compulsory for all HPC users to store data for a
 defined duration ---> <!--depending on the requirements and the storage system this time span might
 range from days to a few--> <!--years.-->
-<!--- [HPC file systems](file_systems.md)-->
+<!--- [HPC filesystems](file_systems.md)-->
 <!--- [Intermediate Archive](intermediate_archive.md)-->
 <!--- [Special data containers] **todo** Special data containers (was no valid link in old compendium)-->
-<!--- [Move data between file systems](../data_transfer/data_mover.md)-->
-<!--- [Move data to/from ZIH's file systems](../data_transfer/export_nodes.md)-->
+<!--- [Move data between filesystems](../data_transfer/data_mover.md)-->
+<!--- [Move data to/from ZIH's filesystems](../data_transfer/export_nodes.md)-->
 <!--- [Longterm Preservation for ResearchData](preservation_research_data.md)-->
 
 !!! hint "Recommendations to choose of storage system"
@@ -48,7 +48,7 @@ range from days to a few--> <!--years.-->
       [warm_archive](file_systems.md#warm_archive) can be used.
       (Note that this is mounted **read-only** on the compute nodes).
     * For a series of calculations that works on the same data please use a `scratch` based [workspace](workspaces.md).
-    * **SSD**, in its turn, is the fastest available file system made only for large parallel
+    * **SSD**, in its turn, is the fastest available filesystem made only for large parallel
       applications running with millions of small I/O (input, output operations).
     * If the batch job needs a directory for temporary data then **SSD** is a good choice as well.
       The data can be deleted afterwards.
@@ -60,17 +60,17 @@ otherwise it could vanish. The core data of your project should be [backed up](#
 ### Backup
 
 The backup is a crucial part of any project. Organize it at the beginning of the project. The
-backup mechanism on ZIH systems covers **only** the `/home` and `/projects` file systems. Backed up
+backup mechanism on ZIH systems covers **only** the `/home` and `/projects` filesystems. Backed up
 files can be restored directly by the users. Details can be found
 [here](file_systems.md#backup-and-snapshots-of-the-file-system).
 
 !!! warning
 
-    If you accidentally delete your data in the "no backup" file systems it **can not be restored**!
+    If you accidentally delete your data in the "no backup" filesystems it **can not be restored**!
 
 ### Folder Structure and Organizing Data
 
-Organizing of living data using the file system helps for consistency and structuredness of the
+Organizing living data using the filesystem helps with the consistency of the
 project. We recommend following the rules for your work regarding:
 
   * Organizing the data: Never change the original data; Automatize the organizing the data; Clearly
@@ -81,7 +81,7 @@ project. We recommend following the rules for your work regarding:
     don’t replace documentation and metadata; Use standards of your discipline; Make rules for your
     project, document and keep them (See the [README recommendations]**todo link** below)
 
-This is the example of an organisation (hierarchical) for the folder structure. Use it as a visual
+This is an example of a (hierarchical) organization of the folder structure. Use it as a visual
 illustration of the above:
 
 ![Organizing_Data-using_file_systems.png](misc/Organizing_Data-using_file_systems.png)
@@ -130,7 +130,7 @@ you don’t need throughout its life cycle.
 
 <!--## Software Packages-->
 
-<!--As was written before the module concept is the basic concept for using software on Taurus.-->
+<!--As was written before the module concept is the basic concept for using software on ZIH systems.-->
 <!--Uniformity of the project has to be achieved by using the same set of software on different levels.-->
 <!--It could be done by using environments. There are two types of environments should be distinguished:-->
 <!--runtime environment (the project level, use scripts to load [modules]**todo link**), Python virtual-->
@@ -144,16 +144,16 @@ you don’t need throughout its life cycle.
 
 <!--### Python Virtual Environment-->
 
-<!--If you are working with the Python then it is crucial to use the virtual environment on Taurus. The-->
+<!--If you are working with the Python then it is crucial to use the virtual environment on ZIH systems. The-->
 <!--main purpose of Python virtual environments (don't mess with the software environment for modules)-->
 <!--is to create an isolated environment for Python projects (self-contained directory tree that-->
 <!--contains a Python installation for a particular version of Python, plus a number of additional-->
 <!--packages).-->
 
 <!--**Vitualenv (venv)** is a standard Python tool to create isolated Python environments. We-->
-<!--recommend using venv to work with Tensorflow and Pytorch on Taurus. It has been integrated into the-->
+<!--recommend using venv to work with Tensorflow and Pytorch on ZIH systems. It has been integrated into the-->
 <!--standard library under the [venv module]**todo link**. **Conda** is the second way to use a virtual-->
-<!--environment on the Taurus. Conda is an open-source package management system and environment-->
+<!--environment on the ZIH systems. Conda is an open-source package management system and environment-->
 <!--management system from the Anaconda.-->
 
 <!--[Detailed information]**todo link** about using the virtual environment.-->
@@ -168,9 +168,9 @@ you don’t need throughout its life cycle.
 
 The concept of **permissions** and **ownership** is crucial in Linux. See the
 [HPC-introduction]**todo link** slides for the understanding of the main concept. Standard Linux
-changing permission command (i.e `chmod`) valid for Taurus as well. The **group** access level
+commands for changing permissions (i.e., `chmod`) are valid on ZIH systems as well. The **group** access level
 contains members of your project group. Be careful with 'write' permission and never allow to change
 the original data.
 
-Useful links: [Data Management]**todo link**, [File Systems]**todo link**, [Get Started with
-HPC-DA]**todo link**, [Project Management]**todo link**, [Preservation research data[**todo link**
+Useful links: [Data Management]**todo link**, [Filesystems]**todo link**,
+[Project Management]**todo link**, [Preservation research data]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
index 98e64e7f56c81b811e5455d785239a40d340ced5..14d7fc3e5e74819d568410340825934cb55d9960 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
@@ -1,69 +1,96 @@
-# Permanent File Systems
+# Permanent Filesystems
 
-## Global /home File System
+!!! hint
+
+    Do not use permanent filesystems as work directories:
+
+    - Even temporary files are kept in the snapshots and in the backup tapes over a long time,
+      senselessly filling the disks.
+    - By the sheer number and volume of work files, they may keep the backup from working efficiently.
+
+## Global /home Filesystem
+
+Each user has 50 GiB in a `/home` directory independent of the granted capacity for the project.
+The home directory is mounted with read-write permissions on all nodes of the ZIH system.
 
-Each user has 50 GB in a `/home` directory independent of the granted capacity for the project.
 Hints for the usage of the global home directory:
 
-- Do not use your `/home` as work directory: Frequent changes (like temporary output from a
-  running job) would fill snapshots and backups (see below).
 - If you need distinct `.bashrc` files for each machine, you should
   create separate files for them, named `.bashrc_<machine_name>`
-- Further, you may use private module files to simplify the process of
-  loading the right installation directories, see
-  **todo link: private modules - AnchorPrivateModule**.
 
-## Global /projects File System
+If a user exceeds her/his quota (total size OR total number of files) she/he cannot
+submit jobs into the batch system. Running jobs are not affected.
+
+!!! note
+
+    We have no feasible way to get the contribution of
+    a single user to a project's disk usage.
+
+## Global /projects Filesystem
 
 For project data, we have a global project directory, that allows better collaboration between the
-members of an HPC project. However, for compute nodes /projects is mounted as read-only, because it
-is not a filesystem for parallel I/O.
-
-## Backup and Snapshots of the File System
-
-- Backup is **only** available in the `/home` and the `/projects` file systems!
-- Files are backed up using snapshots of the NFS server and can be restored by the user
-- A changed file can always be recovered as it was at the time of the snapshot
-- Snapshots are taken:
-  - From Monday through Saturday between 06:00 and 18:00 every two hours and kept for one day
-    (7 snapshots)
-  - From Monday through Saturday at 23:30 and kept for two weeks (12 snapshots)
-  - Every Sunday st 23:45 and kept for 26 weeks
-- To restore a previous version of a file:
-  - Go into the directory of the file you want to restore
-  - Run `cd .snapshot` (this subdirectory exists in every directory on the `/home` file system
-    although it is not visible with `ls -a`)
-  - In the .snapshot-directory are all available snapshots listed
-  - Just `cd` into the directory of the point in time you wish to restore and copy the file you
-    wish to restore to where you want it
-  - **Attention** The `.snapshot` directory is not only hidden from normal view (`ls -a`), it is
-    also embedded in a different directory structure. An `ls ../..` will not list the directory
-    where you came from. Thus, we recommend to copy the file from the location where it
-    originally resided:
-    `pwd /home/username/directory_a % cp .snapshot/timestamp/lostfile lostfile.backup`
-- `/home` and `/projects/` are definitely NOT made as a work directory:
-  since all files are kept in the snapshots and in the backup tapes over a long time, they
-  - Senseless fill the disks and
-  - Prevent the backup process by their sheer number and volume from working efficiently.
-
-## Group Quotas for the File System
-
-The quotas of the home file system are meant to help the users to keep in touch with their data.
+members of an HPC project.
+Typically, all members of the project have read/write access to that directory.
+It can only be written to on the login and export nodes.
+
+!!! note
+
+    On compute nodes, `/projects` is mounted as read-only, because it must not be used as a
+    work directory or for heavy I/O.
+
+## Snapshots
+
+A changed file can always be recovered as it was at the time of the snapshot.
+These snapshots are taken (subject to change):
+
+- from Monday through Saturday between 06:00 and 18:00 every two hours and kept for one day
+  (7 snapshots)
+- from Monday through Saturday at 23:30 and kept for two weeks (12 snapshots)
+- every Sunday at 23:45 and kept for 26 weeks.
+
+To restore a previous version of a file:
+
+1. Go to the parent directory of the file you want to restore.
+1. Run `cd .snapshot` (this subdirectory exists in every directory on the `/home` filesystem
+  although it is not visible with `ls -a`).
+1. List the snapshots with `ls -l`.
+1. Just `cd` into the directory of the point in time you wish to restore and copy the file you
+  wish to restore to where you want it.
+
+!!! note
+
+    The `.snapshot` directory is embedded in a different directory structure. An `ls ../..` will not
+    show the directory where you came from. Thus, for your `cp`, you should *use an absolute path*
+    as destination.
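+
+A minimal restore session could look like the following sketch; the directory, the file name, and
+the snapshot timestamp are placeholders:
+
+```console
+marie@login$ cd /home/marie/results
+marie@login$ cd .snapshot
+marie@login$ ls -l
+marie@login$ cd <timestamp>
+marie@login$ cp myfile /home/marie/results/myfile.backup
+```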
+
+## Backup
+
+Just for the eventuality of a major filesystem crash, we keep tape-based backups of our
+permanent filesystems for 180 days.
+
+## Quotas
+
+The quotas of the permanent filesystem are meant to help users to keep only data that is necessary.
 Especially in HPC, it happens that millions of temporary files are created within hours. This is the
-main reason for performance degradation of the file system. If a project exceeds its quota (total
-size OR total number of files) it cannot submit jobs into the batch system. The following commands
-can be used for monitoring:
+main reason for performance degradation of the filesystem.
+
+!!! note
+
+    If a project or home quota is exceeded (total size OR total number of files),
+    job submission is forbidden. Running jobs are not affected.
+
+The following commands can be used for monitoring:
 
-- `showquota` shows your projects' usage of the file system.
-- `quota -s -f /home` shows the user's usage of the file system.
+- `showquota` shows your projects' usage of the filesystem.
+- `quota -s -f /home` shows the user's usage of the filesystem.
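+
+For example, a quick check of the project and personal usage might look like this (output omitted):
+
+```console
+marie@login$ showquota
+marie@login$ quota -s -f /home
+```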
 
-In case a project is above it's limits please ...
+In case a quota is exceeded:
 
-- Remove core dumps, temporary data
-- Talk with your colleagues to identify the hotspots,
-- Check your workflow and use /tmp or the scratch file systems for temporary files
+- Remove core dumps and temporary data
+- Talk with your colleagues to identify unused or unnecessarily stored data
+- Check your workflow and use `/tmp` or the scratch filesystems for temporary files
 - *Systematically* handle your important data:
-  - For later use (weeks...months) at the HPC systems, build tar
-    archives with meaningful names or IDs and store e.g. them in an
-    [archive](intermediate_archive.md).
-  - Refer to the hints for [long term preservation for research data](preservation_research_data.md)
+    - For later use (weeks to months) on the ZIH systems, build compressed tar
+      archives with meaningful names or IDs and store them, e.g., in a workspace in the
+      [warm archive](warm_archive.md) or in an [archive](intermediate_archive.md)
+    - Refer to the hints for [long term preservation for research data](preservation_research_data.md)
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
index 24665aa573549b6290fae90523450c98fc9d9240..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
@@ -1,56 +0,0 @@
-# Quotas for the home file system
-
-The quotas of the home file system are meant to help the users to keep in touch with their data.
-Especially in HPC, millions of temporary files can be created within hours. We have identified this
-as a main reason for performance degradation of the HOME file system. To stay in operation with out
-HPC systems we regrettably have to fall back to this unpopular technique.
-
-Based on a balance between the allotted disk space and the usage over the time, reasonable quotas
-(mostly above current used space) for the projects have been defined. The will be activated by the
-end of April 2012.
-
-If a project exceeds its quota (total size OR total number of files) it cannot submit jobs into the
-batch system. Running jobs are not affected.  The following commands can be used for monitoring:
-
--   `quota -s -g` shows the file system usage of all groups the user is
-    a member of.
--   `showquota` displays a more convenient output. Use `showquota -h` to
-    read about its usage. It is not yet available on all machines but we
-    are working on it.
-
-**Please mark:** We have no quotas for the single accounts, but for the
-project as a whole. There is no feasible way to get the contribution of
-a single user to a project's disk usage.
-
-## Alternatives
-
-In case a project is above its limits, please
-
--   remove core dumps, temporary data,
--   talk with your colleagues to identify the hotspots,
--   check your workflow and use /fastfs for temporary files,
--   *systematically* handle your important data:
-    -   for later use (weeks...months) at the HPC systems, build tar
-        archives with meaningful names or IDs and store them in the
-        [DMF system](#AnchorDataMigration). Avoid using this system
-        (`/hpc_fastfs`) for files < 1 MB!
-    -   refer to the hints for
-        [long term preservation for research data](../data_lifecycle/preservation_research_data.md).
-
-## No Alternatives
-
-The current situation is this:
-
--   `/home` provides about 50 TB of disk space for all systems. Rapidly
-    changing files (temporary data) decrease the size of usable disk
-    space since we keep all files in multiple snapshots for 26 weeks. If
-    the *number* of files comes into the range of a million the backup
-    has problems handling them.
--   The work file system for the clusters is `/fastfs`. Here, we have 60
-    TB disk space (without backup). This is the file system of choice
-    for temporary data.
--   About 180 projects have to share our resources, so it makes no sense
-    at all to simply move the data from `/home` to `/fastfs` or to
-    `/hpc_fastfs`.
-
-In case of problems don't hesitate to ask for support.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
new file mode 100644
index 0000000000000000000000000000000000000000..01c6e319ea575ca971cd52bc7c9dca3f5fd85ff3
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
@@ -0,0 +1,30 @@
+# Warm Archive
+
+The warm archive is intended as a storage space for the duration of a running HPC project.
+It does **not** substitute a long-term archive, though.
+
+This storage is best suited for large files (like `tgz`s of input data or intermediate results).
+
+The hardware consists of 20 storage nodes with a net capacity of 10 PiB on spinning disks.
+We have seen a total data rate of 50 GiB/s under benchmark conditions.
+
+A project can apply for storage space in the warm archive.
+This is limited in capacity and duration.
+
+## Access
+
+### As Filesystem
+
+On ZIH systems, users can access the warm archive via [workspaces](workspaces.md).
+Although the lifetime is considerably long, please be aware that the data will be
+deleted as soon as the user's login expires.
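+
+Creating such a workspace works with the usual workspace tools. A minimal sketch, assuming the
+filesystem name `warm_archive` and the `ws_allocate <name> <duration>` call described on the
+[workspaces](workspaces.md) page (the workspace name is a placeholder):
+
+```console
+marie@login$ ws_allocate -F warm_archive my-archive-data 365
+```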
+
+!!! attention
+
+    These workspaces can **only** be written to from the login or export nodes.
+    On all compute nodes, the warm archive is mounted read-only.
+
+### S3
+
+A limited S3 functionality is available.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
index 8443727ab896a13da8d76684e3524c1e21cca936..f5e217de6b34e861004b54de3fb4d6cb5004a2ce 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
@@ -1,7 +1,7 @@
 # Workspaces
 
 Storage systems differ in terms of capacity, streaming bandwidth, IOPS rate, etc. Price and
-efficiency don't allow to have it all in one. That is why fast parallel file systems at ZIH have
+efficiency don't allow us to have it all in one system. That is why fast parallel filesystems at ZIH have
 restrictions with regards to **age of files** and [quota](quotas.md). The mechanism of workspaces
 enables users to better manage their HPC data.
 <!--Workspaces are primarily login-related.-->
@@ -19,16 +19,16 @@ times.
 
 !!! tip
 
-    Use the faster file systems if you need to write temporary data in your computations, and use
-    the capacity oriented file systems if you only need to read data for your computations. Please
+    Use the faster filesystems if you need to write temporary data in your computations, and use
+    the capacity oriented filesystems if you only need to read data for your computations. Please
     keep track of your data and move it to a capacity oriented filesystem after the end of your
     computations.
 
 ## Workspace Management
 
-### List Available File Systems
+### List Available Filesystems
 
-To list all available file systems for using workspaces use:
+To list all available filesystems for using workspaces, use:
 
 ```bash
 zih$ ws_find -l
@@ -87,7 +87,7 @@ Options:
     remaining time in days: 90
     ```
 
-This will create a workspace with the name `test-workspace` on the `/scratch` file system for 90
+This will create a workspace with the name `test-workspace` on the `/scratch` filesystem for 90
 days with an email reminder for 7 days before the expiration.
 
 !!! Note
@@ -97,15 +97,15 @@ days with an email reminder for 7 days before the expiration.
 
 ### Extention of a Workspace
 
-The lifetime of a workspace is finite. Different file systems (storage systems) have different
-maximum durations. A workspace can be extended multiple times, depending on the file system.
+The lifetime of a workspace is finite. Different filesystems (storage systems) have different
+maximum durations. A workspace can be extended multiple times, depending on the filesystem.
 
 | Storage system (use with parameter -F ) | Duration, days | Extensions | Remarks |
 |:------------------------------------------:|:----------:|:-------:|:---------------------------------------------------------------------------------------:|
-| `ssd`                                       | 30 | 10 | High-IOPS file system (`/lustre/ssd`) on SSDs.                                          |
-| `beegfs`                                     | 30 | 2 | High-IOPS file system (`/lustre/ssd`) onNVMes.                                          |
-| `scratch`                                    | 100 | 2 | Scratch file system (/scratch) with high streaming bandwidth, based on spinning disks |
-| `warm_archive`                               | 365 | 2 | Capacity file system based on spinning disks                                          |
+| `ssd`                                       | 30 | 10 | High-IOPS filesystem (`/lustre/ssd`) on SSDs.                                          |
+| `beegfs`                                     | 30 | 2 | High-IOPS filesystem (`/beegfs/global0`) on NVMes.                                      |
+| `scratch`                                    | 100 | 2 | Scratch filesystem (`/scratch`) with high streaming bandwidth, based on spinning disks |
+| `warm_archive`                               | 365 | 2 | Capacity filesystem based on spinning disks                                          |
 
 To extend your workspace use the following command:
 
@@ -128,9 +128,9 @@ my-workspace 40`, it will now expire in 40 days **not** 130 days.
 ### Deletion of a Workspace
 
 To delete a workspace use the `ws_release` command. It is mandatory to specify the name of the
-workspace and the file system in which it is located:
+workspace and the filesystem in which it is located:
 
-`ws_release -F <file system> <workspace name>`
+`ws_release -F <filesystem> <workspace name>`
 
 ### Restoring Expired Workspaces
 
@@ -141,7 +141,7 @@ warm_archive: 2 months), you can still restore your data into an existing worksp
 
     When you release a workspace **by hand**, it will not receive a grace period and be
     **permanently deleted** the **next day**. The advantage of this design is that you can create
-    and release workspaces inside jobs and not swamp the file system with data no one needs anymore
+    and release workspaces inside jobs and not swamp the filesystem with data no one needs anymore
     in the hidden directories (when workspaces are in the grace period).
 
 Use:
@@ -162,7 +162,7 @@ username prefix and timestamp suffix (otherwise, it cannot be uniquely identifie
 workspace, on the other hand, must be given with just its short name, as listed by `ws_list`,
 without the username prefix.
 
-Both workspaces must be on the same file system. The data from the old workspace will be moved into
+Both workspaces must be on the same filesystem. The data from the old workspace will be moved into
 a directory in the new workspace with the name of the old one. This means a fresh workspace works as
 well as a workspace that already contains data.
 
@@ -282,5 +282,5 @@ Avoid "iso" codepages!
 **Q**: I am getting the error `Error: target workspace does not exist!`  when trying to restore my
 workspace.
 
-**A**: The workspace you want to restore into is either not on the same file system or you used the
+**A**: The workspace you want to restore into is either not on the same filesystem or you used the
 wrong name. Use only the short name that is listed after `id:` when using `ws_list`
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/data_mover.md b/doc.zih.tu-dresden.de/docs/data_transfer/data_mover.md
deleted file mode 100644
index 856af9f3080969f29ac71c7bc8bf6b8c79c45a60..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/data_transfer/data_mover.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Transferring files between HPC systems
-
-We provide a special data transfer machine providing the global file
-systems of each ZIH HPC system. This machine is not accessible through
-SSH as it is dedicated to data transfers. To move or copy files from one
-file system to another file system you have to use the following
-commands:
-
--   **dtcp**, **dtls, dtmv**, **dtrm, dtrsync**, **dttar**
-
-These commands submit a job to the data transfer machines performing the
-selected command. Except the following options their syntax is the same
-than the shell command without **dt** prefix (cp, ls, mv, rm, rsync,
-tar).
-
-Additional options:
-
-|                   |                                                                               |
-|-------------------|-------------------------------------------------------------------------------|
-| --account=ACCOUNT | Assign data transfer job to specified account.                                |
-| --blocking        | Do not return until the data transfer job is complete. (default for **dtls**) |
-| --time=TIME       | Job time limit (default 18h).                                                 |
-
--   **dtinfo**, **dtqueue**, **dtq**, **dtcancel**
-
-**dtinfo** shows information about the nodes of the data transfer
-machine (like sinfo). **dtqueue** and **dtq** shows all the data
-transfer jobs that belong to you (like squeue -u $USER). **dtcancel**
-signals data transfer jobs (like scancel).
-
-To identify the mount points of the different HPC file systems on the
-data transfer machine, please use **dtinfo**. It shows an output like
-this (attention, the mount points can change without an update on this
-web page) :
-
-| HPC system         | Local directory  | Directory on data transfer machine |
-|:-------------------|:-----------------|:-----------------------------------|
-| Taurus, Venus      | /scratch/ws      | /scratch/ws                        |
-|                    | /ssd/ws          | /ssd/ws                            |
-|                    | /warm_archive/ws | /warm_archive/ws                   |
-|                    | /home            | /home                              |
-|                    | /projects        | /projects                          |
-| **Archive**        |                  | /archiv                            |
-| **Group Storages** |                  | /grp/\<group storage>              |
-
-## How to copy your data from an old scratch (Atlas, Triton, Venus) to our new scratch (Taurus)
-
-You can use our tool called Datamover to copy your data from A to B.
-
-    dtcp -r /scratch/<project or user>/<directory> /projects/<project or user>/<directory> # or
-    dtrsync -a /scratch/<project or user>/<directory> /lustre/ssd/<project or user>/<directory>
-
-Options for dtrsync:
-
-    -a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)
-
-    -r, --recursive             recurse into directories
-    -l, --links                 copy symlinks as symlinks
-    -p, --perms                 preserve permissions
-    -t, --times                 preserve modification times
-    -g, --group                 preserve group
-    -o, --owner                 preserve owner (super-user only)
-    -D                          same as --devices --specials
-
-Example:
-
-    dtcp -r /scratch/rotscher/results /luste/ssd/rotscher/ # or
-    new: dtrsync -a /scratch/rotscher/results /home/rotscher/results
-
-## Examples on how to use data transfer commands:
-
-Copying data from Taurus' /scratch to Taurus' /projects
-
-    % dtcp -r /scratch/jurenz/results/ /home/jurenz/
-
-Moving data from Venus' /sratch to Taurus' /luste/ssd
-
-    % dtmv /scratch/jurenz/results/ /lustre/ssd/jurenz/results
-
-TGZ data from Taurus' /scratch to the Archive
-
-    % dttar -czf /archiv/jurenz/taurus_results_20140523.tgz /scratch/jurenz/results
-
-**%RED%Note:<span class="twiki-macro ENDCOLOR"></span>**Please do not
-generate files in the archive much larger that 500 GB.
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md b/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md
new file mode 100644
index 0000000000000000000000000000000000000000..41333949cb352630294ccb3a2ffac7ea65d980e6
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md
@@ -0,0 +1,69 @@
+# Transferring Files Between ZIH Systems
+
+With the **datamover**, we provide a special data transfer machine for transferring data at the best
+transfer speed between the filesystems of ZIH systems. The datamover machine is not accessible
+through SSH as it is dedicated to data transfers. To move or copy files from one filesystem to
+another filesystem, you have to use the following commands:
+
+- `dtcp`, `dtls`, `dtmv`, `dtrm`, `dtrsync`, `dttar`, and `dtwget`
+
+These commands submit a [batch job](../jobs_and_resources/slurm.md) to the data transfer machines
+performing the selected command. Apart from the following options, their syntax is the very same as
+that of the well-known shell commands without the prefix *dt*.
+
+| Additional Option   | Description                                                                   |
+|---------------------|-------------------------------------------------------------------------------|
+| `--account=ACCOUNT` | Assign data transfer job to specified account.                                |
+| `--blocking       ` | Do not return until the data transfer job is complete. (default for `dtls`)   |
+| `--time=TIME      ` | Job time limit (default: 18 h).                                               |
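+
+For instance, to copy a directory and wait until the transfer job has finished, you might invoke
+something like this sketch (the paths are placeholders reused from the examples below):
+
+```console
+marie@login$ dtcp --blocking -r /beegfs/global0/ws/marie-workdata/results /projects/p_marie/
+```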
+
+## Managing Transfer Jobs
+
+There are the commands `dtinfo`, `dtqueue`, `dtq`, and `dtcancel` to manage your transfer commands
+and jobs.
+
+* `dtinfo` shows information about the nodes of the data transfer machine (like `sinfo`).
+* `dtqueue` and `dtq` show all your data transfer jobs (like `squeue -u $USER`).
+* `dtcancel` signals data transfer jobs (like `scancel`).
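+
+A typical sequence to inspect and cancel a transfer might look like this (the job ID is a
+placeholder):
+
+```console
+marie@login$ dtq
+marie@login$ dtcancel <jobid>
+```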
+
+To identify the mount points of the different filesystems on the data transfer machine, use
+`dtinfo`. It shows an output like this:
+
+| ZIH system         | Local directory      | Directory on data transfer machine |
+|:-------------------|:---------------------|:-----------------------------------|
+| Taurus             | `/scratch/ws`        | `/scratch/ws`                      |
+|                    | `/ssd/ws`            | `/ssd/ws`                          |
+|                    | `/beegfs/global0/ws` | `/beegfs/global0/ws`               |
+|                    | `/warm_archive/ws`   | `/warm_archive/ws`                 |
+|                    | `/home`              | `/home`                            |
+|                    | `/projects`          | `/projects`                        |
+| **Archive**        |                      | `/archive`                         |
+| **Group storage**  |                      | `/grp/<group storage>`             |
+
+## Usage of Datamover
+
+!!! example "Copying data from `/beegfs/global0` to `/projects` filesystem."
+
+    ```console
+    marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /projects/p_marie/.
+    ```
+
+!!! example "Moving data from `/beegfs/global0` to `/warm_archive` filesystem."
+
+    ```console
+    marie@login$ dtmv /beegfs/global0/ws/marie-workdata/results /warm_archive/ws/marie-archive/.
+    ```
+
+!!! example "Archive data from `/beegfs/global0` to `/archiv` filesystem."
+
+    ```console
+    marie@login$ dttar -czf /archiv/p_marie/results.tgz /beegfs/global0/ws/marie-workdata/results
+    ```
+
+!!! warning
+    Do not generate files in the `/archiv` filesystem much larger than 500 GB!
+
+!!! note
+    The [warm archive](../data_lifecycle/warm_archive.md) and the `projects` filesystem are not
+    writable from within batch jobs.
+    However, you can store the data in the `warm_archive` using the datamover.
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/export_nodes.md b/doc.zih.tu-dresden.de/docs/data_transfer/export_nodes.md
index ccd1e87e7c47ad3beae89caef3620d9f32d108ba..9ccba626f713f7be9aa72488866fbc34776cc5ee 100644
--- a/doc.zih.tu-dresden.de/docs/data_transfer/export_nodes.md
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/export_nodes.md
@@ -1,146 +1,135 @@
-# Move data to/from ZIH's File Systems
+# Export Nodes: Transfer Data to/from ZIH's Filesystems
 
-## Export Nodes
-
-To copy large data to/from the HPC machines, the Taurus export nodes should be used. While it is
+To copy large data to/from ZIH systems, the so-called **export nodes** should be used. While it is
 possible to transfer small files directly via the login nodes, they are not intended to be used that
-way and there exists a CPU time limit on the login nodes, killing each process that takes up too
-much CPU time, which also affects file-copy processes if the copied files are very large. The export
-nodes have a better uplink (10GBit/s) and are generally the preferred way to transfer your data.
-Note that you cannot log in via ssh to the export nodes, but only use scp, rsync or sftp on them.
+way. Furthermore, longer transfers will hit the CPU time limit on the login nodes, i.e. the process
+gets killed. The **export nodes** have a better uplink (10 GBit/s) allowing for higher bandwidth. Note
+that you cannot log in via SSH to the export nodes, but only use `scp`, `rsync` or `sftp` on them.
 
-They are reachable under the hostname: **taurusexport.hrsk.tu-dresden.de** (or
-taurusexport3.hrsk.tu-dresden.de, taurusexport4.hrsk.tu-dresden.de).
+The export nodes are reachable under the hostname `taurusexport.hrsk.tu-dresden.de` (or
+`taurusexport3.hrsk.tu-dresden.de` and `taurusexport4.hrsk.tu-dresden.de`).
 
-## Access from Linux Machine
+## Access From Linux
 
-There are three possibilities to exchange data between your local machine (lm) and the hpc machines
-(hm), which are explained in the following abstract in more detail.
+There are at least three tools to exchange data between your local workstation and ZIH systems. All
+are explained in the following sections in more detail.
 
 ### SCP
 
-Type following commands in the terminal when you are in the directory of
-the local machine.
+The tool [`scp`](https://www.man7.org/linux/man-pages/man1/scp.1.html)
+(OpenSSH secure file copy) copies files between hosts on a network. To copy all files
+in a directory, the option `-r` has to be specified.
 
-#### Copy data from lm to hm
-
-```Bash
-# Copy file
-scp <file> <zih-user>@<machine>:<target-location>
-# Copy directory
-scp -r <directory> <zih-user>@<machine>:<target-location>
-```
+??? example "Example: Copy a file from your workstation to ZIH systems"
 
-#### Copy data from hm to lm
+    ```console
+    marie@local$ scp <file> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
 
-```Bash
-# Copy file
-scp <zih-user>@<machine>:<file> <target-location>
-# Copy directory
-scp -r <zih-user>@<machine>:<directory> <target-location>
-```
+    # Add -r to copy whole directory
+    marie@local$ scp -r <directory> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
+    ```
 
-Example:
+??? example "Example: Copy a file from ZIH systems to your workstation"
 
-```Bash
-scp helloworld.txt mustermann@taurusexport.hrsk.tu-dresden.de:~/.
-```
+    ```console
+    marie@login$ scp <zih-user>@taurusexport.hrsk.tu-dresden.de:<file> <target-location>
 
-Additional information: <http://www.computerhope.com/unix/scp.htm>
+    # Add -r to copy whole directory
+    marie@login$ scp -r <zih-user>@taurusexport.hrsk.tu-dresden.de:<directory> <target-location>
+    ```
 
 ### SFTP
 
-Is a virtual command line, which you could access with the following
-line:
+The tool [`sftp`](https://man7.org/linux/man-pages/man1/sftp.1.html) (OpenSSH secure file transfer)
+is a file transfer program, which performs all operations over an encrypted SSH transport. It may
+use compression to increase performance.
+
+`sftp` is basically a virtual command line, which you can access and exit as follows.
 
-```Bash
+```console
 # Enter virtual command line
-sftp <zih-user>@<machine>
+marie@local$ sftp <zih-user>@taurusexport.hrsk.tu-dresden.de
 # Exit virtual command line
-sftp> exit 
+sftp> exit
 # or
 sftp> <Ctrl+D>
 ```
 
-After that you have access to the filesystem on the hpc machine and you
-can use the same commands as on your local machine, e.g. ls, cd, pwd and
-many more. If you would access to your local machine from this virtual
-command line, then you have to put the letter l (local machine) before
-the command, e.g. lls, lcd or lpwd.
+After that, you have access to the filesystem on ZIH systems and can use the same commands as on
+your local workstation, e.g., `ls`, `cd`, `pwd` etc. If you want to access your local workstation
+from this virtual command line, you have to prefix the command with the letter `l`
+(`l`ocal), e.g., `lls`, `lcd` or `lpwd`.
 
-#### Copy data from lm to hm
+??? example "Example: Copy a file from your workstation to ZIH systems"
 
-```Bash
-# Copy file
-sftp> put <file>
-# Copy directory
-sftp> put -r <directory>
-```
+    ```console
+    marie@local$ sftp <zih-user>@taurusexport.hrsk.tu-dresden.de
+    # Copy file
+    sftp> put <file>
+    # Copy directory
+    sftp> put -r <directory>
+    ```
 
-#### Copy data from hm to lm
+??? example "Example: Copy a file from ZIH systems to your local workstation"
 
-```Bash
-# Copy file
-sftp> get <file>
-# Copy directory
-sftp> get -r <directory>
-```
+    ```console
+    marie@local$ sftp <zih-user>@taurusexport.hrsk.tu-dresden.de
+    # Copy file
+    sftp> get <file>
+    # Copy directory
+    sftp> get -r <directory>
+    ```
 
-Example:
+### Rsync
 
-```Bash
-sftp> get helloworld.txt
-```
-
-Additional information: http://www.computerhope.com/unix/sftp.htm
-
-### RSYNC
+[`Rsync`](https://man7.org/linux/man-pages/man1/rsync.1.html) is a fast and extraordinarily
+versatile file copying tool. It can copy locally, to/from another host over any remote shell, or
+to/from a remote `rsync` daemon. It is famous for its delta-transfer algorithm, which reduces the
+amount of data sent over the network by sending only the differences between the source files and
+the existing files in the destination.
 
 Type following commands in the terminal when you are in the directory of
 the local machine.
 
-#### Copy data from lm to hm
-
-```Bash
-# Copy file
-rsync <file> <zih-user>@<machine>:<target-location>
-# Copy directory
-rsync -r <directory> <zih-user>@<machine>:<target-location>
-```
+??? example "Example: Copy a file from your workstation to ZIH systems"
 
-#### Copy data from hm to lm
+    ```console
+    # Copy file
+    marie@local$ rsync <file> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
+    # Copy directory
+    marie@local$ rsync -r <directory> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
+    ```
 
-```Bash
-# Copy file
-rsync <zih-user>@<machine>:<file> <target-location>
-# Copy directory
-rsync -r <zih-user>@<machine>:<directory> <target-location>
-```
-
-Example:
-
-```Bash
-rsync helloworld.txt mustermann@taurusexport.hrsk.tu-dresden.de:~/.
-```
+??? example "Example: Copy a file from ZIH systems to your local workstation"
 
-Additional information: http://www.computerhope.com/unix/rsync.htm
+    ```console
+    # Copy file
+    marie@local$ rsync <zih-user>@taurusexport.hrsk.tu-dresden.de:<file> <target-location>
+    # Copy directory
+    marie@local$ rsync -r <zih-user>@taurusexport.hrsk.tu-dresden.de:<directory> <target-location>
+    ```
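+
+Beyond plain copies, `rsync` offers many options. For illustration, the following sketch combines
+archive mode (`-a`), compression (`-z`), verbose output (`-v`) and a progress report; the
+directory name is a placeholder.
+
+```console
+marie@local$ rsync -avz --progress <directory> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
+```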
 
-## Access from Windows machine
+## Access From Windows
 
 First you have to install [WinSCP](http://winscp.net/eng/download.php).
 
 Then you have to execute the WinSCP application and configure some
 option as described below.
 
-<span class="twiki-macro IMAGE" size="600">WinSCP_001_new.PNG</span>
+![Login - WinSCP](misc/WinSCP_001_new.PNG)
+{: align="center"}
 
-<span class="twiki-macro IMAGE" size="600">WinSCP_002_new.PNG</span>
+![Save session as site](misc/WinSCP_002_new.PNG)
+{: align="center"}
 
-<span class="twiki-macro IMAGE" size="600">WinSCP_003_new.PNG</span>
+![Login - WinSCP click Login](misc/WinSCP_003_new.PNG)
+{: align="center"}
 
-<span class="twiki-macro IMAGE" size="600">WinSCP_004_new.PNG</span>
+![Enter password and click OK](misc/WinSCP_004_new.PNG)
+{: align="center"}
 
-After your connection succeeded, you can copy files from your local
-machine to the hpc machine and the other way around.
+After your connection succeeded, you can copy files from your local workstation to ZIH systems and
+the other way around.
 
-<span class="twiki-macro IMAGE" size="600">WinSCP_005_new.PNG</span>
+![WinSCP document explorer](misc/WinSCP_005_new.PNG)
+{: align="center"}
diff --git a/Compendium_attachments/ExportNodes/WinSCP_001_new.PNG b/doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_001_new.PNG
similarity index 100%
rename from Compendium_attachments/ExportNodes/WinSCP_001_new.PNG
rename to doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_001_new.PNG
diff --git a/Compendium_attachments/ExportNodes/WinSCP_002_new.PNG b/doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_002_new.PNG
similarity index 100%
rename from Compendium_attachments/ExportNodes/WinSCP_002_new.PNG
rename to doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_002_new.PNG
diff --git a/Compendium_attachments/ExportNodes/WinSCP_003_new.PNG b/doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_003_new.PNG
similarity index 100%
rename from Compendium_attachments/ExportNodes/WinSCP_003_new.PNG
rename to doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_003_new.PNG
diff --git a/Compendium_attachments/ExportNodes/WinSCP_004_new.PNG b/doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_004_new.PNG
similarity index 100%
rename from Compendium_attachments/ExportNodes/WinSCP_004_new.PNG
rename to doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_004_new.PNG
diff --git a/Compendium_attachments/ExportNodes/WinSCP_005_new.PNG b/doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_005_new.PNG
similarity index 100%
rename from Compendium_attachments/ExportNodes/WinSCP_005_new.PNG
rename to doc.zih.tu-dresden.de/docs/data_transfer/misc/WinSCP_005_new.PNG
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
index 3f92972f39b320aef5b824e5a7146a2d25e5a503..095fa14a96d514f6daea6b8edc8850651ba5f367 100644
--- a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
@@ -1,37 +1,22 @@
 # Transfer of Data
 
-## Moving data to/from the HPC Machines
-
-To copy data to/from the HPC machines, the Taurus export nodes should be used as a preferred way.
-There are three possibilities to exchanging data between your local machine (lm) and the HPC
-machines (hm): SCP, RSYNC, SFTP. Type following commands in the terminal of the local machine. The
-SCP command was used for the following example.  Copy data from lm to hm
-
-```Bash
-# Copy file from your local machine. For example: scp helloworld.txt mustermann@taurusexport.hrsk.tu-dresden.de:/scratch/ws/mastermann-Macine_learning_project/
-scp <file> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
-
-scp -r <directory> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>          #Copy directory from your local machine.
-```
-
-Copy data from hm to lm
-
-```Bash
-# Copy file. For example: scp mustermann@taurusexport.hrsk.tu-dresden.de:/scratch/ws/mastermann-Macine_learning_project/helloworld.txt /home/mustermann/Downloads
-scp <zih-user>@taurusexport.hrsk.tu-dresden.de:<file> <target-location>
-
-scp -r <zih-user>@taurusexport.hrsk.tu-dresden.de:<directory> <target-location>          #Copy directory
-```
-
-## Moving data inside the HPC machines: Datamover
-
-The best way to transfer data inside the Taurus is the datamover. It is the special data transfer
-machine provides the best data speed. To load, move, copy etc. files from one file system to another
-file system, you have to use commands with dt prefix, such as: dtcp, dtwget, dtmv, dtrm, dtrsync,
-dttar, dtls. These commands submit a job to the data transfer machines that execute the selected
-command. Except for the 'dt' prefix, their syntax is the same as the shell command without the 'dt'.
-
-Keep in mind: The warm_archive is not writable for jobs. However, you can store the data in the warm
-archive with the datamover.
-
-Useful links: [Data Mover]**todo link**, [Export Nodes]**todo link**
+## Moving Data to/from ZIH Systems
+
+There are at least three tools to exchange data between your local workstation and ZIH systems:
+`scp`, `rsync`, and `sftp`. Please refer to the offline or online man pages of
+[scp](https://www.man7.org/linux/man-pages/man1/scp.1.html),
+[rsync](https://man7.org/linux/man-pages/man1/rsync.1.html), and
+[sftp](https://man7.org/linux/man-pages/man1/sftp.1.html) for detailed information.
+
+No matter which tool you prefer, it is crucial that the **export nodes** are used as the preferred
+way to copy data to/from ZIH systems. Please refer to the documentation on
+[export nodes](export_nodes.md) for further reference and examples.
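+
+For a quick start, copying a single file with `scp` via an export node might look as follows;
+file name and target location are placeholders.
+
+```console
+marie@local$ scp <file> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>
+```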
+
+## Moving Data Inside ZIH Systems: Datamover
+
+The recommended way for data transfer inside ZIH Systems is the **datamover**. It is a special
+data transfer machine that provides the best transfer speed. To load, move, copy etc. files from one
+filesystem to another filesystem, you have to use commands prefixed with `dt`: `dtcp`, `dtwget`,
+`dtmv`, `dtrm`, `dtrsync`, `dttar`, `dtls`. These commands submit a job to the data transfer
+machines that execute the selected command. Please refer to the detailed documentation regarding the
+[datamover](datamover.md).
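+
+For illustration, copying a directory from the `scratch` filesystem to the `warm_archive` with the
+datamover might look as follows; both paths are placeholders.
+
+```console
+marie@login$ dtcp -r /scratch/ws/<workspace>/<directory> /warm_archive/ws/<workspace>/
+```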
diff --git a/doc.zih.tu-dresden.de/docs/index.md b/doc.zih.tu-dresden.de/docs/index.md
index cc174e052a72bf6258ce4844749690ae28d7a46c..24d3907def65508bc521a0fd3109b9792c76f19b 100644
--- a/doc.zih.tu-dresden.de/docs/index.md
+++ b/doc.zih.tu-dresden.de/docs/index.md
@@ -1,48 +1,30 @@
-# ZIH HPC Compendium
+# ZIH HPC Documentation
 
-Dear HPC users,
+This is the documentation of the HPC systems and services provided at
+[TU Dresden/ZIH](https://tu-dresden.de/zih/). This documentation is work in progress, since we try
+to incorporate more information with increasing experience and with every question you ask us. The
+HPC team invites you to take part in the improvement of these pages by correcting or adding useful
+information.
 
-due to restrictions coming from data security and software incompatibilities the old
-"HPC Compendium" is now reachable only from inside TU Dresden campus (or via VPN).
+## Contribution
 
-Internal users should be redirected automatically.
+Issues concerning this documentation can be reported via the GitLab
+[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
+Please check for any already existing issue before submitting your issue in order to avoid duplicate
+issues.
 
-We apologize for this severe action, but we are in the middle of the preparation for a wiki
-relaunch, so we do not want to redirect resources to fix technical/security issues for a system
-that will last only a few weeks.
+Contributions from the user side are highly welcome. Please refer to
+the detailed [documentation](contrib/howto_contribute.md) to get started.
 
-Thank you for your understanding,
+**Reminder:** Non-documentation issues and requests need to be sent as a ticket to
+[hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de).
 
-your HPC Support Team ZIH
+---
 
-## What is new?
+---
 
-The desire for a new technical documentation is driven by two major aspects:
+## News
 
-1. Clear and user-oriented structure of the content
-1. Usage of modern tools for technical documentation
+**2021-10-05** Offline-maintenance (black building test)
 
-The HPC Compendium provided knowledge and help for many years. It grew with every new hardware
-installation and ZIH stuff tried its best to keep it up to date. But, to be honest, it has become
-quite messy, and housekeeping it was a nightmare.
-
-The new structure is designed with the schedule for an HPC project in mind. This will ease the start
-for new HPC users, as well speedup searching information w.r.t. a specific topic for advanced users.
-
-We decided against a classical wiki software. Instead, we write the documentation in markdown and
-make use of the static site generator [mkdocs](https://www.mkdocs.org/) to create static html files
-from this markdown files. All configuration, layout and content files are managed within a git
-repository. The generated static html files, i.e, the documentation you are now reading, is deployed
-to a web server.
-
-The workflow is flexible, allows a high level of automation, and is quite easy to maintain.
-
-From a technical point, our new documentation system is highly inspired by
-[OLFC User Documentation](https://docs.olcf.ornl.gov/) as well as
-[NERSC Technical Documentation](https://nersc.gitlab.io/).
-
-## Contribute
-
-Contributions are highly welcome. Please refere to
-[README.md](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/blob/main/doc.zih.tu-dresden.de/README.md)
-file of this project.
+**2021-09-29** Introduction to HPC at ZIH ([slides](misc/HPC-Introduction.pdf))
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
index 5324f550e30e66b6ec6830cf7fddbb921b0dbdbf..ca813dbe4b627f2ac74b33163f285c6caa93348b 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
@@ -1,13 +1,13 @@
-# Alpha Centauri - Multi-GPU sub-cluster
+# Alpha Centauri - Multi-GPU Sub-Cluster
 
-The sub-cluster "AlphaCentauri" had been installed for AI-related computations (ScaDS.AI).
+The sub-cluster "Alpha Centauri" has been installed for AI-related computations (ScaDS.AI).
 It has 34 nodes, each with:
 
-- 8 x NVIDIA A100-SXM4 (40 GB RAM)
-- 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz with multithreading enabled
-- 1 TB RAM 3.5 TB `/tmp` local NVMe device
-- Hostnames: `taurusi[8001-8034]`
-- Slurm partition `alpha` for batch jobs and `alpha-interactive` for interactive jobs
+* 8 x NVIDIA A100-SXM4 (40 GB RAM)
+* 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz with multi-threading enabled
+* 1 TB RAM, 3.5 TB `/tmp` local NVMe device
+* Hostnames: `taurusi[8001-8034]`
+* Slurm partition `alpha` for batch jobs and `alpha-interactive` for interactive jobs
 
 !!! note
 
@@ -19,12 +19,12 @@ It has 34 nodes, each with:
 ### Modules
 
 The easiest way is using the [module system](../software/modules.md).
-The software for the `alpha` partition is available in `modenv/hiera` module environment.
+The software for the partition alpha is available in the `modenv/hiera` module environment.
 
 To check the available modules for `modenv/hiera`, use the command
 
-```bash
-module spider <module_name>
+```console
+marie@alpha$ module spider <module_name>
 ```
 
 For example, to check whether PyTorch is available in version 1.7.1:
@@ -95,11 +95,11 @@ Successfully installed torchvision-0.10.0
 
 ### JupyterHub
 
-[JupyterHub](../access/jupyterhub.md) can be used to run Jupyter notebooks on AlphaCentauri
+[JupyterHub](../access/jupyterhub.md) can be used to run Jupyter notebooks on the Alpha Centauri
 sub-cluster. As a starting configuration, a "GPU (NVIDIA Ampere A100)" preset can be used
 in the advanced form. In order to use the latest software, it is recommended to choose
 `fosscuda-2020b` as a standard environment. Already installed modules from `modenv/hiera`
-can be pre-loaded in "Preload modules (modules load):" field.
+can be preloaded in the "Preload modules (modules load):" field.
 
 ### Containers
 
@@ -109,6 +109,6 @@ Detailed information about containers can be found [here](../software/containers
 Nvidia
 [NGC](https://developer.nvidia.com/blog/how-to-run-ngc-deep-learning-containers-with-singularity/)
 containers can be used as an effective solution for machine learning related tasks. (Downloading
-containers requires registration).  Nvidia-prepared containers with software solutions for specific
+containers requires registration). Nvidia-prepared containers with software solutions for specific
 scientific problems can simplify the deployment of deep learning workloads on HPC. NGC containers
 have shown consistent performance compared to directly run code.
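+
+A minimal sketch of pulling and running an NGC container with Singularity is shown below; it
+assumes Singularity is available as described in the containers documentation, and the image and
+tag are only examples.
+
+```console
+# Pull a PyTorch container image from NGC (image and tag are examples)
+marie@alpha$ singularity pull docker://nvcr.io/nvidia/pytorch:21.08-py3
+# Run the container with GPU support
+marie@alpha$ singularity run --nv pytorch_21.08-py3.sif
+```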
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
deleted file mode 100644
index 06e9be7e7a8ab5efa0ae1272ba6159ac50310e0b..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# Batch Systems
-
-Applications on an HPC system can not be run on the login node. They have to be submitted to compute
-nodes with dedicated resources for user jobs. Normally a job can be submitted with these data:
-
-- number of CPU cores,
-- requested CPU cores have to belong on one node (OpenMP programs) or
-  can distributed (MPI),
-- memory per process,
-- maximum wall clock time (after reaching this limit the process is
-  killed automatically),
-- files for redirection of output and error messages,
-- executable and command line parameters.
-
-Depending on the batch system the syntax differs slightly:
-
-- [Slurm](../jobs_and_resources/slurm.md) (taurus, venus)
-
-If you are confused by the different batch systems, you may want to enjoy this [batch system
-commands translation table](http://slurm.schedmd.com/rosetta.pdf).
-
-**Comment:** Please keep in mind that for a large runtime a computation may not reach its end. Try
-to create shorter runs (4...8 hours) and use checkpointing.  Here is an extreme example from
-literature for the waste of large computing resources due to missing checkpoints:
-
-*Earth was a supercomputer constructed to find the question to the answer to the Life, the Universe,
-and Everything by a race of hyper-intelligent pan-dimensional beings. Unfortunately 10 million years
-later, and five minutes before the program had run to completion, the Earth was destroyed by
-Vogons.* (Adams, D. The Hitchhikers Guide Through the Galaxy)
-
-## Exclusive Reservation of Hardware
-
-If you need for some special reasons, e.g., for benchmarking, a project or paper deadline, parts of
-our machines exclusively, we offer the opportunity to request and reserve these parts for your
-project.
-
-Please send your request **7 working days** before the reservation should start (as that's our
-maximum time limit for jobs and it is therefore not guaranteed that resources are available on
-shorter notice) with the following information to the [HPC
-support](mailto:hpcsupport@zih.tu-dresden.de?subject=Request%20for%20a%20exclusive%20reservation%20of%20hardware&body=Dear%20HPC%20support%2C%0A%0AI%20have%20the%20following%20request%20for%20a%20exclusive%20reservation%20of%20hardware%3A%0A%0AProject%3A%0AReservation%20owner%3A%0ASystem%3A%0AHardware%20requirements%3A%0ATime%20window%3A%20%3C%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%20-%20%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%3E%0AReason%3A):
-
-- `Project:` *\<Which project will be credited for the reservation?>*
-- `Reservation owner:` *\<Who should be able to run jobs on the
-  reservation? I.e., name of an individual user or a group of users
-  within the specified project.>*
-- `System:` *\<Which machine should be used?>*
-- `Hardware requirements:` *\<How many nodes and cores do you need? Do
-  you have special requirements, e.g., minimum on main memory,
-  equipped with a graphic card, special placement within the network
-  topology?>*
-- `Time window:` *\<Begin and end of the reservation in the form
-  year:month:dayThour:minute:second e.g.: 2020-05-21T09:00:00>*
-- `Reason:` *\<Reason for the reservation.>*
-
-**Please note** that your project CPU hour budget will be credited for the reserved hardware even if
-you don't use it.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
index 4e8bde8c6e43ab765135f3199525a09820abf8d1..4677a625300c59a04160389f4cf9a3bf975018c8 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
@@ -1,45 +1,76 @@
 # Binding and Distribution of Tasks
 
+Slurm provides several binding strategies to place and bind the tasks and/or threads of your job
+to cores, sockets and nodes.
+
+!!! note
+
+    Keep in mind that the distribution method might have a direct impact on the execution time of
+    your application. The manipulation of the distribution can either speed up or slow down your
+    application.
+
 ## General
 
-To specify a pattern the commands `--cpu_bind=<cores|sockets>` and
-`--distribution=<block | cyclic>` are needed. cpu_bind defines the resolution in which the tasks
-will be allocated. While --distribution determinates the order in which the tasks will be allocated
-to the cpus.  Keep in mind that the allocation pattern also depends on your specification.
+To specify a pattern, the options `--cpu_bind=<cores|sockets>` and `--distribution=<block|cyclic>`
+are needed. The option `--cpu_bind` defines the resolution in which the tasks will be allocated,
+while `--distribution` determines the order in which the tasks will be allocated to the CPUs. Keep
+in mind that the allocation pattern also depends on your specification.
 
-```Bash
-#!/bin/bash 
-#SBATCH --nodes=2                        # request 2 nodes 
-#SBATCH --cpus-per-task=4                # use 4 cores per task 
-#SBATCH --tasks-per-node=4               # allocate 4 tasks per node - 2 per socket 
+!!! example "Explicitly specify binding and distribution"
 
-srun --ntasks 8 --cpus-per-task 4 --cpu_bind=cores --distribution=block:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2                        # request 2 nodes
+    #SBATCH --cpus-per-task=4                # use 4 cores per task
+    #SBATCH --tasks-per-node=4               # allocate 4 tasks per node - 2 per socket
+
+    srun --ntasks 8 --cpus-per-task 4 --cpu_bind=cores --distribution=block:block ./application
+    ```
 
 In the following sections there are some selected examples of the combinations between `--cpu_bind`
 and `--distribution` for different job types.
 
+## OpenMP Strategies
+
+The illustration below shows the default binding of a pure OpenMP job on a single node with 16 CPUs
+on which 16 threads are allocated.
+
+```bash
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --tasks-per-node=1
+#SBATCH --cpus-per-task=16
+
+export OMP_NUM_THREADS=16
+
+srun --ntasks 1 --cpus-per-task $OMP_NUM_THREADS ./application
+```
+
+![OpenMP](misc/openmp.png)
+{: align="center"}
+
 ## MPI Strategies
 
-### Default Binding and Dsitribution Pattern
+### Default Binding and Distribution Pattern
 
-The default binding uses --cpu_bind=cores in combination with --distribution=block:cyclic. The
-default (as well as block:cyclic) allocation method will fill up one node after another, while
+The default binding uses `--cpu_bind=cores` in combination with `--distribution=block:cyclic`. The
+default (as well as `block:cyclic`) allocation method will fill up one node after another, while
 filling socket one and two in alternation. This results in only even ranks on the first socket of
 each node and odd ranks on the second socket of each node.
 
-\<img alt="" src="data:;base64,<base64-encoded PNG image data omitted>" />
+![Default distribution](misc/mpi_default.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Default binding and default distribution"
 
-srun --ntasks 32 ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 ./application
+    ```
 
 ### Core Bound
 
@@ -50,18 +81,19 @@ application.
 
 This method allocates the tasks linearly to the cores.
 
-\<img alt="" src="data:;base64,<base64-encoded PNG image data omitted>" />
+![block:block distribution](misc/mpi_block_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to cores and block:block distribution"
 
-srun --ntasks 32 --cpu_bind=cores --distribution=block:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=cores --distribution=block:block ./application
+    ```
 
 #### Distribution: cyclic:cyclic
 
@@ -71,18 +103,19 @@ then the first socket of the second node until one task is placed on
 every first socket of every node. After that it will place a task on
 every second socket of every node and so on.
 
-\<img alt="" src="data:;base64,<base64-encoded PNG image data omitted>" />
+![cyclic:cyclic distribution](misc/mpi_cyclic_cyclic.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to cores and cyclic:cyclic distribution"
 
-srun --ntasks 32 --cpu_bind=cores --distribution=cyclic:cyclic
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=cores --distribution=cyclic:cyclic ./application
+    ```
 
 #### Distribution: cyclic:block
 
@@ -90,104 +123,108 @@ The cyclic:block distribution will allocate the tasks of your job in
 alternation on node level, starting with the first node and filling the sockets
 linearly.
 
-\<img alt=""
-src="<data:;base64,iVBORw0KGgoAAAANSUhEUgAAAw4AAADeCAIAAAAb9sCoAAAABmJLR0QA/wD/AP+gvaeTAAAe3klEQVR4nO3de3BU9f3/8bMhJEA2ISGbCyQkZAMJxlsREQhSWrFQtdWacJHBAnZASdUIsSLORECZqQqjDkOnlIpWM0wTFcF2xjp0DAEG1LEURFEDmBjCJdkkht1kk2yu5/fHme5vv9l8dj+7m8vZ5Pn4i5w973M+57Wf/fDO2WUxqKqqAAAAoC8hQz0AAAAA/aJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEAoNpNhgMPTXOAAEHVVVh3oIPmC9AkayQNYr7ioBAAAIBXRXSRNcv1kCCFzw3qFhvQJGmsDXK+4qAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqAQAACNEqjVw/+9nPDAbDp59+6tySmJj44Ycfyh/hyy+/NBqN8vsXFRVlZ2dHREQkJib6MFAAI97gr1cbN27MysoaN25cSkrKpk2bOjo6fBguhhdapREtNjb2mWeeGbTTmUymDRs2bNu2bdDOCGDYGOT1ym6379279/LlyyUlJSUlJVu3bh20U0NvaJVGtLVr11ZUVHzwwQfuD9XU1CxdujQ+Pj45OfmJJ55obW3Vtl++fHnx4sXR0dE33XTTyZMnnfs3NTXl5eVNnjw5Li7uoYceamhocD/mvffeu2zZssmTJw/Q5QAYxgZ5vXrjjTfmz58fGxubnZ39yCOPuJZjpKFVGtGMRuO2bduee+65zs7OXg/l5uaOHj26oqLi1KlTp0+fLigo0LYvXbo0OTm5trb2X//611/+8hfn/itXrrRYLGfOnKmurh4/fvyaNWsG7SoAjARDuF6dOHFi5syZ/Xo1CCpqAAI/AobQggULtm/f3tnZOX369N27d6uqmpCQcOjQIVVVy8vLFUWpq6vT9iwtLR0zZkx3d3d5ebnBYGhsbNS2FxUVRUREqKpaWVlpMBic+9tsNoPBYLVa+zxvcXFxQkLCQF8dBlQwvvaDccxwGqr1SlXVLVu2pKWlNTQ0DOgFYuAE/toPHezWDDoTGhr68ssvr1u3btWqVc6NV65ciYiIiIuL0340m80Oh6OhoeHKlSuxsbExMTHa9mnTpml/qKqqMhgMs2bNch5h/PjxV69eHT9+/GBdB4Dhb/DXqxdffHH//v1lZWWxsbEDdVXQPVolKA888MCrr7768ssvO7ckJye3tLTU19drq09VVVV4eLjJZEpKSrJare3t7eHh4Yqi1NbWavunpKQYDIazZ8/SGwEYUIO5Xm3evPngwYPHjh1LTk4esAtCEOCzSlAURdm5c+euXbuam5u1HzMyMubMmVNQUGC32y0WS2Fh4erVq0NCQqZPnz5jxozXX39dUZT29vZdu3Zp+6enpy9atGjt2rU1NTWKotTX1x84cMD9LN3d3Q6HQ/ucgcPhaG9vH6TLAzCMDM56lZ+ff/DgwcOHD5tMJofDwZcFjGS0SlAURZk9e/Z9993n/GcjBoPhwIEDra2taWlpM2bMuOWWW1577TXtoffff7+0tPS2226766677rrrLucRiouLJ02alJ2dHRkZOWfOnBMnTrif5Y033hg7duyqVassFsvYsWO5oQ3AD4OwXlmt1t27d1+8eNFsNo8dO3bs2LFZWVmDc3XQIYPzE0/+FBsMiqIEcgQAwSgYX/vBOGYAgQv8tc9dJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAACFaJQAAAKHQoR4AAAyeysrKoR4CgCBjUFXV/2KDQVGUQI4AIBgF42tfGzOAkSmQ9aof7iqxAAHQP7PZPNRDABCU+uGuEoCRKbjuKgGAfwJqlQAAAIY3/gUcAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAUEBfQcn3Ko0E/n2dBHNjJAiurxphTo4ErFcQCWS94q4SAACAUD/8xybB9Zsl5AX+mxZzY7gK3t/CmZPDFesVRAKfG9xVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEBr+rdK3337761//2mQyjRs3bvr06c8++6wfB5k+ffqHH34oufNPfvKTkpKSPh8qKirKzs6OiIhITEz0YxjoX7qaGxs3bszKyho3blxKSsqmTZs6Ojr8GAyCna7mJOuVruhqboy09WqYt0o9PT2//OUvJ02a9PXXXzc0NJSUlJjN5iEcj8lk2rBhw7Zt24ZwDNDobW7Y7fa9e/devny5pKSkpKRk69atQzgYDAm9zUnWK/3Q29wYceuVGoDAjzDQLl++rCjKt99+6/7QtWvXlixZEhcXl5SU9Pjjj7e0tGjbr1+/npeXl5KSEhkZOWPGjPLyclVVMzMzDx06pD26YMGCVatWdXR02Gy29evXJycnm0ym5cuX19fXq6r6xBNPjB492mQypaamrlq1qs9RFRcXJyQkDNQ1959Anl/mhn9zQ7Nly5b58+f3/zX3H/0/v+70P2Z9zknWKz3Q59zQjIT1apjfVZo0aVJGRsb69evffffd6upq14dyc3NHjx5dUVFx6tSp06dPFxQUaNtXrFhx6dKlzz77zGq1vvPOO5GRkc6SS5cuzZs3784773znnXdGjx69cuVKi8Vy5syZ6urq8ePHr1mzRlGU3bt3Z2Vl7d69u6qq6p133hnEa4Vv9Dw3Tpw4MXPmzP6/Zuibnuckhpae58aIWK+GtlMbBBaLZfPmzbfddltoaOjUqVOLi4tVVS0vL1cUpa6uTtuntLR0zJgx3d3dFRUViqJcvXq110EyMzOff/755OTkvXv3alsqKysNBoPzCDabzWAwWK1WVVVvvfVW7Swi/JamEzqcG6qqbtmyJS0traGhoR+vtN8FxfPbS1CMWYdzkvVKJ3Q4N9QRs14N/1bJqbm5+dVXXw0JCfnqq68++eSTiIgI50M//PCDoigWi6W0tHTcuHHutZmZmQkJCbNnz3Y4HNqWI0eOhISEpLqIjo7+5ptvVJaegGsHn37mxgsvvGA2m6uqqvr1+vpfcD2/muAas37mJOuV3uhnboyc9WqYvwHnymg0FhQUjBkz5quvvkp
OTm5paamvr9ceqqqqCg8P196UbW1trampcS/ftWtXXFzc/fff39raqihKSkqKwWA4e/Zs1f9cv349KytLUZSQkBGU6vCgk7mxefPm/fv3Hzt2LDU1dQCuEsFEJ3MSOqSTuTGi1qth/iKpra195plnzpw509LS0tjY+NJLL3V2ds6aNSsjI2POnDkFBQV2u91isRQWFq5evTokJCQ9PX3RokWPPvpoTU2Nqqrnzp1zTrXw8PCDBw9GRUXdc889zc3N2p5r167Vdqivrz9w4IC2Z2Ji4vnz5/scT3d3t8Ph6OzsVBTF4XC0t7cPSgzog97mRn5+/sGDBw8fPmwymRwOx7D/x7dwp7c5yXqlH3qbGyNuvRram1oDzWazrVu3btq0aWPHjo2Ojp43b95HH32kPXTlypWcnByTyTRx4sS8vDy73a5tb2xsXLduXVJSUmRk5G233Xb+/HnV5V8NdHV1/fa3v73jjjsaGxutVmt+fv6UKVOMRqPZbH7qqae0Ixw9enTatGnR0dG5ubm9xrNnzx7X8F1vnOpQIM8vc8OnuXH9+vVeL8z09PTBy8J3+n9+3el/zLqakyrrlZ7oam6MwPXK4DyKHwwGg3Z6v48APQvk+WVuDG/B+PwG45ghj/UKIoE/v8P8DTgAAIBA0CoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAIhQZ+CIPBEPhBMCwxN6A3zEmIMDcgwl0lAAAAIYOqqkM9BgAAAJ3irhIAAIAQrRIAAIAQrRIAAIAQrRIAAIAQrRIAAIAQrRIAAIAQrRIAAIBQQN/WzXebjgT+ffMWc2MkCK5vZWNOjgSsVxAJZL3irhIAAIBQP/wfcMH1myXkBf6bFnNjuAre38KZk8MV6xVEAp8b3FUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQGrat0smTJ++7774JEyZERETcfPPNhYWFLS0tg3Derq6u/Pz8CRMmREVFrVy5sqmpqc/djEajwUV4eHh7e/sgDG/EGqr5YLFYli1bZjKZoqOjFy9efP78+T53Kyoqys7OjoiISExMdN2+Zs0a13lSUlIyCGPG4GO9givWK70Znq3SP//5z4ULF956662fffZZXV3d/v376+rqzp49K1OrqmpnZ6ffp37hhRcOHz586tSp77///tKlS+vXr+9zN4vF0vw/OTk5Dz74YHh4uN8nhWdDOB/y8vKsVuuFCxeuXr06ceLEpUuX9rmbyWTasGHDtm3b3B8qKChwTpUlS5b4PRLoFusVXLFe6ZEagMCPMBC6u7uTk5MLCgp6be/p6VFV9dq1a0uWLImLi0tKSnr88cdbWlq0RzMzMwsLC++8886MjIyysjKbzbZ+/frk5GSTybR8+fL6+nptt9deey01NXX8+PETJ07cvn27+9nj4+Pfeust7c9lZWWhoaHXr1/3MNr6+vrw8PAjR44EeNUDIZDnVz9zY2jnQ3p6+r59+7Q/l5WVhYSEdHV1iYZaXFyckJDgumX16tXPPvusv5c+gPTz/MrT55hZr/oL6xXrlUg/dDtDe/qBoHXfZ86c6fPRuXPnrlixoqmpqaamZu7cuY899pi2PTMz86abbmpoaNB+/NWvfvXggw/W19e3trY++uij9913n6qq58+fNxqNFy9eVFXVarX+97//7XXwmpoa11Nrd7NPnjzpYbQ7d+6cNm1aAJc7gIbH0jOE80FV1U2bNi1cuNBisdhstocffjgnJ8fDUPtceiZOnJicnDxz5sxXXnmlo6PD9wAGhH6eX3n6HDPrVX9hvWK9EqFV6sMnn3yiKEpdXZ37Q+Xl5a4PlZaWjhkzpru7W1XVzMzMP/3pT9r2yspKg8Hg3M1msxkMBqvVWlFRMXbs2Pfee6+pqanPU1+4cEFRlMrKSueWkJCQjz/+2MNoMzIydu7c6ftVDobhsfQM4XzQdl6wYIGWxg033FBdXe1hqO5Lz+HDhz/99NOLFy8eOHAgKSnJ/XfNoaKf51eePsfMetVfWK+07axX7gJ/fofhZ5Xi4uIURbl69ar7Q1euXImIiNB2UBTFbDY7HI6Ghgbtx0mTJml/qKqqMhgMs2bNmjJlypQpU2655Zbx48dfvXrVbDYXFRX9+c9/TkxM/OlPf3rs2LFex4+MjFQUxWazaT82Nzf39PRERUW9/fbbzk+6ue5fVlZWVVW1Zs2a/rp2uBvC+aCq6t133202mxsbG+12+7Jly+68886WlhbRfHC3aNGiuXPnTp06NTc395VXXtm/f38gUUCHWK/givVKp4a2UxsI2nu9Tz/9dK/tPT09vbrysrKy8PBwZ1d+6NAhbfv3338/atQoq9UqOkVra+sf//jHmJgY7f1jV/Hx8X/729+0Px89etTze//Lly9/6KGHfLu8QRTI86ufuTGE86G+vl5xe4Pj888/Fx3H/bc0V++9996ECRM8Xeog0s/zK0+fY2a96i+sV9p21it3/dDtDO3pB8g//vGPMWPGPP/88xUVFQ6H49y5c3l5eSdPnuzp6ZkzZ87DDz/c3NxcW1s7b968Rx99VCtxnWqqqt5zzz1Lliy5du2aqqp1dXXvv/++qqrfffddaWmpw+FQVfWNN96Ij493X3oKCwszMzMrKystFsv8+fNXrFghGmRdXV1YWJg+PyCpGR5Ljzqk8yE1NXXdunU2m62tre3FF180Go2NjY3uI+zq6mpraysqKkpISGhra9OO2d3dvW/fvqqqKqvVevTo0fT0dOdHE4acrp5fSbodM+tVv2C9ch6B9aoXWiWhEydO3HPPPdHR0ePGjbv55ptfeukl7R8LXLlyJScnx2QyTZw4MS8vz263a/v3mmpWqzU/P3/KlClGo9FsNj/11FOqqp4+ffqOO+6IioqKiYmZPXv28ePH3c/b0dHx5JNPRkdHG43GFStW2Gw20Qh37Nih2w9IaobN0qMO3Xw4e/bsokWLYmJioqKi5s6dK/qbZs+ePa73eiMiIlRV7e7uvvvuu2NjY8PCwsxm83PPPdfa2trvyfhHb8+vDD2PmfUqcKxXznLWq14Cf34NzqP4QXvnMpAjQM8CeX6ZG8NbMD6/wThmyGO9gkjgz+8w/Fg3AABAf6FVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEKJVAgAAEAoN/BBe/7dhjFjMDegNcxIizA2IcFcJAABAKKD/Aw4AAGB4464SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAUEDf1s13m44E/n3zFnNjJAiub2VjTo4ErFcQCWS94q4SAACAUD/8H3CBdPHU6r82EMF4vdTK1wajYMyZWvnaQATj9VIrXxsI7ioBAAAI0SoBAAAI0SoBAAAIDUir1NXVlZ+fP2HChKioqJUrVzY1NcnXbty4MSsra9y4cSkpKZs2bero6PDj7DNmzD
AYDLW1tT4V/vvf/549e/aYMWPi4uI2bdokX2ixWJYtW2YymaKjoxcvXnz+/HnP+xcVFWVnZ0dERCQmJvYaudfcRLUyuYlqnWf3LzevPJzXa+aiWpnMRZnI5CyqlcnZ8z6ec/ZQ6zUrUa1MVoWFhWlpaeHh4bGxsffff//3338vn1Ww8/y68EyUm4w1a9YYXJSUlMjXGo1G19rw8PD29nbJ2itXruTm5sbGxk6YMOH3v/+910JRPjK5ifaRyU1UG0huMkTnlclcVCuTuej1K5OzqFYmZ1GtTM6iWpmsRLUyWYmuK5DXshdqAERHKCwszMjIqKiosFgs8+bNW7FihXzt2rVrjx8/3tDQcPLkycmTJ2/evFm+VrN9+/aFCxcqilJTUyNfW1paajQa//rXv9bW1lZXVx8/fly+9sEHH/zFL37x448/2u321atX33zzzZ5rP/roo3fffXfHjh0JCQmu+4hyk6kV5SZTq3HPLZAZInNeUeYytaLMXWtFmcjkLKqVydnzHPacs6hWJitRrUxWn3/+eUVFRVNTU2Vl5QMPPJCdnS2fVbAQjdnz68JzrSg3mdrVq1cXFBQ0/09nZ6d8rd1udxbm5OQsX75cvvaOO+546KGHbDbbtWvX5syZ89RTT3muFeUj2i5TK8pNplaU20CvV6LMZWpFmcu8fmVyFtXK5CyqlclZVCuTlahWJivRdclk5Z8BaZXi4+Pfeust7c9lZWWhoaHXr1+XrHW1ZcuW+fPny59XVdVvvvkmPT39iy++UHxslbKzs5999lnP4xHVpqen79u3T/tzWVlZSEhIV1eX19ri4uJeT6coN5laV665Sdb2mVt/LT2i84oyl6kVZS4as2sm8jm714q2S9b6lLNrrXxW7rU+ZdXR0ZGXl3fvvfdqP/qalZ55HrPn15TX6+2Vm0zt6tWr/V5znOrr68PDw48cOSJZe/XqVUVRysvLtR8PHTpkNBrb29u91orycd/u03rVKzeZWlFuA71eOfXK3Guth8zl1xyZnEW1qkTO7rW+5tzneb1m1avW16z6fN3JZyWv/9+Aq62traurmzFjhvbjzJkzu7q6vv32Wz8OdeLEiZkzZ8rv393d/bvf/e7111+PjIz06UQOh+Pzzz/v7u6+4YYbYmJiFi5c+NVXX8mX5+bmFhcX19XVNTU1vfnmm7/5zW9GjRrl0wCU4MwtEIOcuTMTP3IW5SmTs+s+vubsrPUjK9fzSmZVVFSUmJgYGRn59ddf//3vf1f6dU4OY+65+VQ7efLk22+/fceOHZ2dnX6c/e23305JSfn5z38uub/zrw0nu93u0/uG/WVocwvEIGTu6xruodannN1r5XPuc8ySWTlr5bMKZP74I5A+q88jXLhwQVGUysrK/9+OhYR8/PHHMrWutmzZkpaW1tDQIHleVVV37ty5dOlSVVW/++47xZe7SjU1NYqipKWlnTt3zm63b9iwISkpyW63S57XZrMtWLBAe/SGG26orq6WOW+vztdDbl5rXfXKTaZWlFsgM8TreT1kLjNmUeZ9jtk1E59yVsXz0GvO7vv4lLNrrU9ZuZ9XMqvW1tZr164dP358xowZa9eu9SMrnfM8Zr/vKrnnJll7+PDhTz/99OLFiwcOHEhKSiooKPB1zKqqZmRk7Ny506cx33777c43OObOnasoymeffea1tt/vKvWZm0ytKLcBXa9c9cpcplaUufyaI3mnxL1WMmf3Wp9yFq2TXrNyr5XMysPrbiDuKvV/q6Qt62fOnNF+1D4HevLkSZlapxdeeMFsNldVVcmf9+LFi5MmTaqtrVV9b5Wam5sVRdmxY4f2Y1tb26hRo44dOyZT29PTM2vWrEceeaSxsdFut2/dujUlJUWmzeqzdegzN/mXsXtuXms95DagS4+HzL3WesjcvbZXJj7lLJqHMjn32sennHvV+pRVr1qfstIcP37cYDC0tLT4lJX+eR5zgG/AqS65+VG7f//++Ph4X8975MiRsLCw+vp6n8Z86dKlnJychISEtLS0rVu3Kopy4cIFr7UD9Aac+n9z87XWNbcBXa+c3DOXqRVlLr/myOTs+e9Nzzl7rvWcs6hWJiv3Wvms3K9LExxvwCUmJsbHx3/55Zfaj6dPnw4NDc3KypI/wubNm/fv33/s2LHU1FT5qhMnTjQ0NNx4440mk0lrRW+88cY333xTptZoNE6dOtX5hZ4+fbPnjz/++J///Cc/Pz8mJiYiIuLpp5+urq4+d+6c/BE0wZhbIAYnc/dM5HMW5SmTs/s+8jm718pn5V7r3/wcNWrUqFGjAp+TI42Wmx+FYWFhXV1dvlbt3bs3JyfHZDL5VJWSkvLBBx/U1tZWVlYmJycnJSVNnTrV11P3r0HOLRADmrl/a7h8rShnr7UecvZQ6zWrPmv9mJ9+zx8fBNJniY5QWFiYmZlZWVlpsVjmz5/v07+Ae/LJJ6dNm1ZZWdnW1tbW1ub+eUNRbUtLy+X/OXr0qKIop0+fln8T7bXXXjObzefPn29ra/vDH/4wefJk+d8OU1NT161bZ7PZ2traXnzxRaPR2NjY6KG2q6urra2tqKgoISGhra3N4XBo20W5ydSKcvNa6yG3QGaIzJhFmcvUijJ3rRVlIpOzqFYm5z73kcxZdHyZrES1XrPq6Oh46aWXysvLrVbrF198cfvtt+fm5spnFSxEYxbNMa+1HnLzWtvd3b1v376qqiqr1Xr06NH09PTHHntMfsyqqtbV1YWFhfX5gW7PtadOnfrhhx8aGhoOHjwYFxf39ttve64V5SPa7rXWQ25eaz3kNtDrlSrIXKZWlLnM61cm5z5rJXPus1YyZw9/X3vNSlTrNSsP1yWTlX8GpFXq6Oh48skno6OjjUbjihUrbDabZO3169eV/ys9PV3+vE6+vgGnqmpPT8+WLVsSEhKioqLuuuuur7/+Wr727NmzixYtiomJiYqKmjt3rtd/jbJnzx7Xa4yIiNC2i3LzWushN5nzinILZHrJnFeUuUytKHNnrYdMvOYsqpXJWWYOi3L2UOs1Kw+1XrPq7Oy8//77ExISwsLCpkyZsnHjRmcmMnMyWIjG7PV1Iar1kJvX2u7u7rvvvjs2NjYsLMxsNj/33HOtra3yY1ZVdceOHdOmTevzIc+1u3btio+PHz16dFZWVlFRkddaUT6i7V5rPeTmtdZDboHMSZnrVQWZy9SKMnfWenj9es1ZVCuTs6hWJmfPa53nrDzUes3Kw3XJzEn/GJxH8UPw/rd51FJL7VDVDpVgzIpaaqkd2loN/7EJAACAEK0SAACAEK0SAACAEK0SAACAUD98rBvDWyAfo8PwFowf68bwxnoFET7WDQAAMCACuqsEAAAwvHFXCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQOj/AVpnAfsg0n+oAAAAAElFTkSuQmCC>"
-/>
+![cyclic:block distribution](misc/mpi_cyclic_block.png)
+{: align="center"}
+
+!!! example "Binding to cores and cyclic:block distribution"
 
+    ```bash
     #!/bin/bash
     #SBATCH --nodes=2
     #SBATCH --tasks-per-node=16
     #SBATCH --cpus-per-task=1
 
     srun --ntasks 32 --cpu_bind=cores --distribution=cyclic:block ./application
+    ```
 
 ### Socket Bound
 
-Note: The general distribution onto the nodes and sockets stays the
-same. The mayor difference between socket and cpu bound lies within the
-ability of the tasks to "jump" from one core to another inside a socket
-while executing the application. These jumps can slow down the execution
-time of your application.
+The general distribution onto the nodes and sockets stays the same. The major difference between
+socket- and CPU-bound lies in the ability of the OS to move tasks from one core to another
+inside a socket while executing the application. These jumps can slow down the execution time of
+your application.
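+
+Whether your tasks really stay where you expect them can be checked by adding the `verbose` option
+to `--cpu_bind`, which lets Slurm report the CPU mask applied to every task. This is only a sketch;
+`./application` stands for your own binary.
+
+```bash
+# Report the binding Slurm applies to each task before starting the application
+srun --ntasks 32 --cpu_bind=verbose,sockets ./application
+```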
 
 #### Default Distribution
 
-The default distribution uses --cpu_bind=sockets with
---distribution=block:cyclic. The default allocation method (as well as
-block:cyclic) will fill up one node after another, while filling socket
-one and two in alternation. Resulting in only even ranks on the first
-socket of each node and odd on each second socket of each node.
+The default distribution uses `--cpu_bind=sockets` with `--distribution=block:cyclic`. The default
+allocation method (as well as `block:cyclic`) will fill up one node after another, while filling
+socket one and two in alternation. This results in only even ranks on the first socket of each node
+and odd ranks on the second socket of each node.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3daXQUVdrA8Wq27AukIQECCQkQCAoCIov44iAHFJARCJtggsqWIyLiCOggghsoisOAo4zoSE6cZGQTj8twDmGZAdzZiSAkhCVASITurJ2EpN4PNdMnk+7qrqR64+b/+5RU31t1763nPjypNB2DLMsSAACAuJp5ewAAAADuRbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10JPZ4PB4KpxALjtyLLs4SuSc4CmTE/O4ekOAAAQnK6nOwrP/4QHwLu8+5SFnAM0NfpzDk93AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3mq7777/fYDAcOnTIeiQqKurzzz/XfoajR48GBwdrb5+WljZkyJCgoKCoqKgGDBSAEDyfc5599tnExMTAwMDOnTsvXry4qqqqAcOFWCh3mrSIiIjnn3/eY5czGo0LFy5csWKFx64IwKd4OOeUlpZu3Ljx0qVLmZmZmZmZL7/8sscuDV9DudOkzZo1KycnZ9u2bbYvXb16ddKkSe3atYuOjp4/f355ebly/NKlS6NGjQoPD7/jjjsOHjxobV9cXJyamtqpU6e2bdtOnTq1qKjI9pyjR4+ePHlyp06d3DQdAD7Owznnww8/vO+++yIiIoYMGfL444/X7Y6mhnKnSQsODl6xYsULL7xQXV1d76WJEye2bNkyJyfnp59+Onz48KJFi5TjkyZNio6Ovnbt2tdff/3BBx9Y20+fPr2goODIkSMXL14MCwubOXOmx2YB4HbhxZxz4MCB/v37u3Q2uK3IOug/A7xo2LBhr776anV1dY8ePdavXy/LcmRk5I4dO2RZPn36tCRJ169fV1pmZWX5+/vX1NScPn3aYDDcuHFDOZ6WlhYUFCTLcm5ursFgsLY3m80Gg8FkMtm9bkZGRmRkpLtnB7fy1t4n59zWvJVzZFlevnx5ly5dioqK3DpBuI/+vd/C0+UVfEyLFi1Wr149e/bs5ORk68HLly8HBQW1bdtW+TYuLs5isRQVFV2+fDkiIqJ169bK8W7duilf5OXlGQyGAQMGWM8QFhaWn58fFhbmqXkAuD14Pue88sor6enpe/fujYiIcNes4PModyD9/ve/f+edd1avXm09Eh0dXVZWVlhYqGSfvLw8Pz8/o9HYsWNHk8lUWVnp5+cnSdK1a9eU9p07dzYYDMeOHaO+AeCUJ3PO0qVLt2/fvn///ujoaLdNCLcB3rsDSZKkNWvWrFu3rqSkRPm2e/fugwYNWrRoUWlpaUFBwbJly1JSUpo1a9ajR4++ffu+++67kiRVVlauW7dOaR8fHz9y5MhZs2ZdvXpVkqTCwsKtW7faXqWmpsZisSi/s7dYLJWVlR6aHgAf45mcs2DBgu3bt+/atctoNFosFv4jelNGuQNJkqSBAweOGTPG+l8hDAbD1q1by8vLu3Tp0rdv3969e69du1Z5acuWLVlZWf369Rs+fPjw4cOtZ8jIyOjQocOQIUNCQkIGDRp04MAB26t8+OGHAQEBycnJBQUFAQEBPFgGmiwP5ByTybR+/fqzZ8/GxcUFBAQEBAQkJiZ6ZnbwQQbrO4Aa09lgkCRJzxkA3I68tffJOUDTpH/v83QHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIroX+UxgMBv0nAQCNyDkAGoqnOwAAQHAGWZa9PQYAAAA34ukOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQnK6PGeTDvpqCxn1UAbHRFHj+YyyIq6aAnAM1enIOT3cAAIDgXPBHJPigQlHp/2mJ2BCVd3+SJq5ERc6BGv2xwdMdAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOPHLnezs7IcffthoNAYGBvbo0WPJkiWNOEmPHj0+//xzjY3vuuuuzMxMuy+lpaUNGTIkKCgoKiqqEcOAa/lUbDz77LOJiYmBgYGdO3devHhxVVVVIwYDX+BTcUXO8Sk+FRtNLecIXu7U1tY++OCDHTp0OHHiRFFRUWZmZlxcnBfHYzQaFy5cuGLFCi+OAQpfi43S0tKNGzdeunQpMzMzMzPz5Zdf9uJg0Gi+FlfkHN/ha7HR5HKOrIP+M7jbpUuXJEnKzs62fenKlStJSUlt27bt2LHjU089VVZWphy/efNmampq586dQ0JC+vbte/r0aVmWExISduzYobw6bNiw5OTkqqoqs9k8b9686Ohoo9E4ZcqUwsJCWZbnz5/fsmVLo9EYExOTnJxsd1QZGRmRkZHumrPr6Lm/xEbjYkOxfPny++67z/Vzdh1v3V/iipzjjr6e4ZuxoWgKOUfwpzsdOnTo3r37vHnz/vGPf1y8eLHuSxMnTmzZsmVOTs5PP/10+PDhRYsWKcenTZt24cKFb7/91mQybd68OSQkxNrlwoUL995779ChQzdv3tyyZcvp06cXFBQcOXLk4sWLYWFhM2fOlCRp/fr1iYmJ69evz8vL27x5swfniobx5dg4cOBA//79XT9nuJ8vxxW8y5djo0nkHO9WWx5QUFCwdOnSfv36tWjRomvXrhkZGbIsnz59WpKk69evK22ysrL8/f1rampycnIkScrPz693koSEhJdeeik6Onrjxo3KkdzcXIPBYD2D2Ww2GAwmk0mW5T59+ihXUcNPWj7CB2NDluXly5d36dKlqKjIhTN1OW/dX+KKnOOOvh7jg7EhN5mcI365Y1VSUvLOO+80a9bs+PHju3fvDgoKsr50/vx5SZIKCgqysrICAwNt+yYkJERGRg4cONBisShH9uzZ06xZs5g6wsPDT506JZN6dPf1PN+JjZUrV8bFxeXl5bl0fq5HuaOF78QVOcfX+E5sNJ2cI/gvs+oKDg5etGiRv7//8ePHo6Ojy8rKCgsLlZfy8vL8/PyUX3CWl5dfvXrVtvu6devatm07bty48vJySZI6d+5sMBiOHTuW9183b95MTEyUJKlZsya0qmLwkdhY
unRpenr6/v37Y2Ji3DBLeJqPxBV8kI/ERpPKOYJvkmvXrj3//PNHjhwpKyu7cePGqlWrqqurBwwY0L1790GDBi1atKi0tLSgoGDZsmUpKSnNmjWLj48fOXLknDlzrl69KsvyyZMnraHm5+e3ffv20NDQhx56qKSkRGk5a9YspUFhYeHWrVuVllFRUWfOnLE7npqaGovFUl1dLUmSxWKprKz0yDLADl+LjQULFmzfvn3Xrl1Go9FisQj/n0JF5WtxRc7xHb4WG00u53j34ZK7mc3m2bNnd+vWLSAgIDw8/N577/3qq6+Uly5fvjxhwgSj0di+ffvU1NTS0lLl+I0bN2bPnt2xY8eQkJB+/fqdOXNGrvNO+Fu3bj322GP33HPPjRs3TCbTggULYmNjg4OD4+LinnnmGeUM+/bt69atW3h4+MSJE+uN5/3336+7+HUfYPogPfeX2GhQbNy8ebPexoyPj/fcWjSct+4vcUXOcUdfz/Cp2GiCOcdgPUsjGAwG5fKNPgN8mZ77S2yIzVv3l7gSGzkHavTfX8F/mQUAAEC5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABNdC/ymUP8sO2CI24A7EFdQQG1DD0x0AACA4gyzL3h4DAACAG/F0BwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACE7Xpyrz+ZVNQeM+mYnYaAo8/6ldxFVTQM6BGj05h6c7AABAcC74m1l8LrOo9P+0RGyIyrs/SRNXoiLnQI3+2ODpDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEJyw5c7BgwfHjBnTpk2boKCgO++8c9myZWVlZR647q1btxYsWNCmTZvQ0NDp06cXFxfbbRYcHGyow8/Pr7Ky0gPDa7K8FQ8FBQWTJ082Go3h4eGjRo06c+aM3WZpaWlDhgwJCgqKioqqe3zmzJl14yQzM9MDY0bjkHNQFznH14hZ7nzxxRcPPPBAnz59vv322+vXr6enp1+/fv3YsWNa+sqyXF1d3ehLr1y5cteuXT/99NO5c+cuXLgwb948u80KCgpK/mvChAnjx4/38/Nr9EXhmBfjITU11WQy/frrr/n5+e3bt580aZLdZkajceHChStWrLB9adGiRdZQSUpKavRI4FbkHNRFzvFFsg76z+AONTU10dHRixYtqne8trZWluUrV64kJSW1bdu2Y8eOTz31VFlZmfJqQkLCsmXLhg4d2r17971795rN5nnz5kVHRxuNxilTphQWFirN1q5dGxMTExYW1r59+1dffdX26u3atfv444+Vr/fu3duiRYubN286GG1hYaGfn9+ePXt0ztod9Nxf34kN78ZDfHz8pk2blK/37t3brFmzW7duqQ01IyMjMjKy7pGUlJQlS5Y0dupu5K376ztxVRc5x1XIOeQcNS6oWLx7eXdQKugjR47YfXXw4MHTpk0rLi6+evXq4MGD586dqxxPSEi44447ioqKlG/Hjh07fvz4wsLC8vLyOXPmjBkzRpblM2fOBAcHnz17VpZlk8n0888/1zv51atX615aeap88OBBB6Nds2ZNt27ddEzXjcRIPV6MB1mWFy9e/MADDxQUFJjN5hkzZkyYMMHBUO2mnvbt20dHR/fv3//NN9+sqqpq+AK4BeVOXeQcVyHnkHPUUO7YsXv3bkmSrl+/bvvS6dOn676UlZXl7+9fU1Mjy3JCQsKGDRuU47m5uQaDwdrMbDYbDAaTyZSTkxMQEPDZZ58VFxfbvfSvv/4qSVJubq71SLNmzb755hsHo+3evfuaNWsaPktPECP1eDEelMbDhg1TVqNnz54XL150MFTb1LNr165Dhw6dPXt269atHTt2tP150Vsod+oi57gKOUc5Ts6xpf/+CvjenbZt20qSlJ+fb/vS5cuXg4KClAaSJMXFxVkslqKiIuXbDh06KF/k5eUZDIYBAwbExsbGxsb27t07LCwsPz8/Li4uLS3tL3/5S1RU1P/93//t37+/3vlDQkIkSTKbzcq3JSUltbW1oaGhn3zyifWdX3Xb7927Ny8vb+bMma6aO2x5MR5kWR4xYkRcXNyNGzdKS0snT548dOjQsrIytXiwNXLkyMGDB3ft2nXixIlvvvlmenq6nqWAm5BzUBc5x0d5t9pyB+X3ps8991y947W1tfUq67179/r5+Vkr6x07dijHz50717x5c5PJpHaJ8vLyN954o3Xr1srvYutq167d3/72N+Xrffv2Of49+pQpU6ZOndqw6XmQnvvrO7HhxXgoLCyUbH7R8N1336mdx/Ynrbo+++yzNm3aOJqqB3nr/vpOXNVFznEVco5ynJxjywUVi3cv7yY7d+709/d/6aWXcnJyLBbLyZMnU1NTDx48WFtbO2jQoBkzZpSUlFy7du3ee++dM2eO0qVuqMmy/NBDDyUlJV25ckWW5evXr2/ZskWW5V9++SUrK8tisciy/OGHH7Zr18429SxbtiwhISE3N7egoOC+++6bNm2a2iCvX7/eqlUr33zDoEKM1CN7NR5iYmJmz55tNpsrKipeeeWV4ODgGzdu2I7w1q1bFRUVaWlpkZGRFRUVyjlramo2bdqUl5dnMpn27dsXHx9v/TW/11Hu1EPOcQlyjvUM5Jx6KHdUHThw4KGHHgoPDw8MDLzzzjtXrVqlvAH+8uXLEyZMMBqN7du3T01NLS0tVdrXCzWTybRgwYLY2Njg4OC4uLhnnnlGluXDhw/fc889oaGhrVu3Hjhw4L/+9S/b61ZVVT399NPh4eHBwcHTpk0zm81qI3zrrbd89g2DCmFSj+y9eDh27NjIkSNbt24dGho6ePBgtX9p3n///brPXIOCgmRZrqmpGTFiRERERKtWreLi4l544YXy8nKXr0zjUO7YIufoR86xdifn1KP//hqsZ2kE5beAes4AX6bn/hIbYvPW/SWuxEbOgRr991fAtyoDAADURbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10L/KZz+hVU0WcQG3IG4ghpiA2p4ugMAAASn629mAQAA+D6e7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMHp+lRlPr+yKWjcJzMRG02B5z+1i7hqCsg5UKMn5/B0BwAACM4FfzOLz2UWlf6flogNUXn3J2niSlTkHKjRHxs83QEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3JEkSerRo4fBYDAYDJGRkSkpKaWlpY04idFoPHfunMvHBu8iNuAOxBXUEBtuQrnzH1u2bJFl+dChQz/++OPq1au9PRz4EGID7kBcQQ2x4Q6UO/8jPj5+7Nixx48fV7598cUXO3fuHBoaOmjQoMOHDysHjUbj22+/PXDgwK5duz799NO2J9m3b19MTMz333/
vuXHD/YgNuANxBTXEhmtR7vwPs9mclZXVq1cv5ds777zz559/vnHjxqRJk6ZOnWr9vM6jR48eOnToxIkTu3fvzsrKqnuGr7/+Ojk5eefOnQMHDvT06OFOxAbcgbiCGmLDxWQd9J/BRyQkJISHh0dGRrZo0WL06NHl5eW2bcLDwy9fvizLckRExPfff68cnDt37po1a5SvIyIiXn/99ZiYmFOnTnls5G6l5/4SG8SGSNd1OeLKLnKOTGyo0H9/ebrzH2vWrDl8+HBaWtrBgwdzcnKUg5988kn//v07deoUGxtbUlJSVFSkHG/Tpo3yhb+/f933kf3pT3+aPHlyYmKihwcPtyI24A7EFdQQG+5AufMf4eHhHTp0mDZt2pNPPrl06VJJks6ePfvcc8+lp6dfunQpLy8vNDRUdvbH57Zs2bJt27YNGzZ4ZMjwEGID7kBcQQ2x4Q4u+Ivognn++ee7dOly7NixW7duBQUFxcfHS5KUnp5+8+ZNp307duyYlZU1bNiwwMDAJ554wv2DhUcRG3AH4gpqiA0XotypLyoqKiUlZfXq1RkZGRMnTuzTp0/btm0HDx7cqVMnLd1jY2OzsrLuv/9+f3//Rx991N2jhScRG3AH4gpqiA0XMjh9IOaos8EgSZKeM8CX6bm/xIbYvHV/iSuxkXOgRv/95b07AABAcJQ7AABAcJQ7AABAcJQ7AABAcJQ7AABAcD5R7pjN5qSkpKCgoJiYmE8//dS2gcVimTt3bpcuXYKCgu6+++49e/Y4ONvp06f9/PxmzJih1mDjxo3x8fF+fn49e/Y8ffq0bYNffvll+PDhoaGh0dHRr7zyit2TrF27tk+fPi1atFA+A0r7XNT6apyj2nU1zv2243i+irlz53bo0MHPz69Lly5qfz34wQcfNPxX165dbRtYLBbD/7J7qpMnTw4bNiwkJKR79+5ff/11g8asZS5qbbTMUa2v07nfphysp9M9rtbX6d53sE8bnce07H2nbRzsfcd9Ha+Vg75a8qRa3GrJk3qo5Vgte1wtNpzufQdr5XTvO+jrdO876Ksl76nFpNO1cnBdLXlSbV5a8mTj+ES5s3DhQovFkp+f//HHH8+ZMyc7O7teg8rKysDAwG3btl28eHHatGnjxo0rLCxUO9v8+fPvuecetVe3bNny2muv/fWvf7127VpaWlp4eLhtm+Tk5N69excVFWVlZb333ns7d+60bdOpU6fXX3993LhxDZ2LWl+Nc1S7rpa5344cz1eRnJz8ww8/FBUVbd68edWqVbt27bLbLC0traKioqKiwu5N8ff3r/iv/Pz8li1bjh8/vl6b6urqRx55ZMSIEb/99tv69eunTJly6dIl7WPWMhe1Nlrm6OD8jud+m1Kbr5Y97mCdHe99B/u00XlMy9532sbB3nfQ1+laOeirJU+qxa2WPKmH3furZY+r9dWy9x2sldO973idHe99x7HheO+r9dWyVmp9NeZJtXlpyZONpOcPbuk/gyzLFoslICDgxx9/VL6dMGHCiy++qHz95JNPPvnkk7ZdWrduvW/fPrttPv300xkzZixZsmT69OnWg3Xb9OrVKzMz0/acddsEBgZa/+ja+PHj33jjDbXxpKSkLFmypHFzqddX+xzV+tqdux567q9LYsPKdr52Y+PKlStRUVHffvutbZtRo0ZlZGTYntnueTZs2DBw4EDbNidOnGjZsmV1dbVy/He/+93q1avVzqN2f7XMxUFsOJijWl+1uevh2vur57q289Wyx9X6at/7Cus+1ZnH1I5r7Os076n11b5Wtn0btFZ149bBWrk25zjYR2p7XK1vg/a+wvb+asxjdvvKGva+bd8G5T216zpdq3p9G7pW9ealsF0r/TnH+5+qnJubW1FR0bt3b+Xb3r17HzlyRPl61KhRtu3PnTtXWlras2dP2zbFxcUrVqzYv3//unXr6naxtikrKzt16lR2dnb79u2bN2/+2GOPvfbaa82bN693nocffvjvf/977969z58//+OPPy5btszBePTMRY2DOapRm7uo6q3Jc889l5aWVlxcvG7dukGDBtlts2TJksWLFycmJq5cuXLgwIF22yg2b948c+ZMtWtZ1dbWnjx50nEbLTT21TJHNXbnLiSNe1xNg/Z+3X2qM4+pHdfS12neU+vb0LWqd12Na2Ubtw7WymM07nE1Tve+2v2tR2Nf7Xvftq/2vKc2Zi1r5WC+DtbK7rzcSE+tpP8Msiz/8MMPfn5+1m/Xrl37wAMPqDUuKysbMGDAihUr7L66YMGCt956S5ZltSccZ86ckSRpxIgRRUVFZ8+ejY+P//Of/2zbLC8vT/nTJJIkvfTSSw4GX68CbdBc1H7ycDxHtb5O594Ieu6vS2LDyvGTMFmWzWbzhQsXPvroo/Dw8FOnTtk2+PLLLw8fPpydnf3iiy+GhIRcuHBB7VTZ2dmtWrX67bffbF+qqqqKjY19+eWXy8vLv/zyy+bNm48fP76hY3Y6F7U2Tueo1lf73LVz7f3Vc91689W4x+32lRuy9+vtU5fkMS1737aN9r1fr2+D1sr2uhrXyjZuHayVa3OO2l5zsMfV+jZo76vdRy17325fjXvftq/2va82Zi1rVa+v9rVyMC93PN3x/nt3goODKysrq6qqlG+Li4uDg4PttqysrHzkkUd69eq1fPly21ePHTu2e/fuhQsXOrhWQECAJEl/+MMfIiIiunbtOnv2bNt3UVVWVg4fPnzu3LkWiyUnJ+eLL754//33XT4XNY7nqEbL3MUWGhrauXPnJ554YsSIEenp6bYNxowZ07dv3549e77++us9e/b85ptv1E61efPmsWPHtmnTxvalli1b7tixY/fu3ZGRkatWrRo3blx0dLQrp+GQ0zmq0T53AWjZ42q0733bfao/j2nZ+7ZttO99277a18q2r/a1so1b/XlSJwd7XI32vd+4HO64r5a9b7evxr3vYMxO18q2r/a1anROaxzv/zIrLi7O39//+PHjd999tyRJJ06c6NWrl22z6urqpKSk8PDwTZs2KX87o55///vfubm57du3lySpvLy8trY2Ozv78OHDddtER0eHh4dbu9s9T25ubm5u7tNPP+3n5xcXF5eUlJSVlZWamurCuahxOkc1WubedLRq1cppg5qaGrsv1dbWpqenv/fee2p977rrrgMHDihf9+vXb8KECY0epx5O5+igo9rcxaBlj6vRuPft7lOdeUzL3rfbRuPet9tX41rZ7du4PKnErc48qZPTPa5Gy95vdA7X3tfu3tfSV23vO+jrdK3U+jYiTzY6p2nn/ac7fn5+U6ZMWblypdls3rt37z//+c/p06crL82aNWvWrFmSJNXU1EyfPr26uvqjjz6qrq62WCy1tbX12jz++ONnz549evTo0aNHH3/88dGjR1t/UrG2MRgMycnJb7/9tslkunDhwqZNm8aOHVuvTWxsbFhY2AcffFBdXX3p0qVt27b16dOnXhtJkm7dumWxWGpqampqapQvNM5Fra+WOar1dTD3253d+Up11sRkMm3YsCEvL++333
5LT0//6quvHn744XptSkpKMjMzr169WlhY+O677/78888jR46s10axe/fuysrK0aNH1x1D3TbffffdtWvX8vPzlyxZUlZWNnXqVNs2amN2Ohe1NlrmqNbXwdxvd3bnq2WPq/XVsvfV9qmePKZl76u10ZL31PpqWSu1vlrWSi1uHayVW2ND4XSPq/V1uvcd3Eene1+tr5a9r9ZXS95zMGana+Wgr9O1cjAvB/dOLz2/CdN/BoXJZJowYUJAQECnTp3S09Otx0eOHPnRRx/Jsnz+/Pl6w7a+29zapq56v8Ou26a8vDwlJSUkJKRDhw5//OMfa2pqbNvs2bNnwIABQUFBkZGR8+bNq6iosHfzt+cAAAHsSURBVG2zZMmSuuN59913Nc5Fra/GOapdV23ueui5v66KDbX5WtekuLh41KhRrVu3DgoK6t+//86dO619rW3MZvPQoUNDQ0ODg4MHDRq0e/du2zaKRx99dP78+fXGULfNCy+8EBYWFhAQMGbMmPPnz9ttozZmp3NRa6Nljmp9HcxdD1fdXz3XVVtPLXtcra/Tve9gnzY6j2nZ+w7aWKnlPQd9na6Vg75O18pB3KqtlZ640hIbsoY9rtbX6d53sFZO975aXy17X62vlrznOK4cr5WDvk7XysG81NZKT2z85wy6Ouu+vAPV1dW9evWqqqq6jdr4Wl+dXJV6XM7X7jux4fvXvR3vUVPrK3sp59yOa9XU+squyDkG61kaQfldnZ4zwJfpub/Ehti8dX+JK7GRc6BG//31/nt3AAAA3IpyBwAACI5yBwAACI5yBwAACI5yBwAACM4Fn6rc0M+ORNNBbMAdiCuoITaghqc7AABAcLo+dwcAAMD38XQHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAI7v8BE+cBiPwLm7cAAAAASUVORK5CYII="
-/>
+![Binding to sockets and block:cyclic distribution](misc/mpi_socket_block_cyclic.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to sockets and block:cyclic distribution"
 
-srun --ntasks 32 -cpu_bind=sockets ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=sockets ./application
+    ```
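+
+To observe the even/odd rank placement described above on a real allocation, you can let every task
+report where it runs. This is only a minimal sketch: it assumes a Linux compute node with `taskset`
+available and uses the `SLURM_PROCID` variable that `srun` sets for each task.
+
+```bash
+# Each task prints its rank, the node it runs on, and the cores it is allowed to use
+srun --ntasks 32 --cpu_bind=sockets bash -c \
+    'echo "rank ${SLURM_PROCID} on $(hostname): $(taskset -cp $$)"'
+```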
 
 #### Distribution: block:block
 
 This method allocates the tasks linearly to the cores.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAdq0lEQVR4nO3deVRU5/3H8TsUHWAQRhlkcQQEFUWjUWNc8zPHpJq4tSruFjSN24lbSBRN3BMbjYk21RNrtWnkkELdTRNTzxExrZqmGhUTFaMQRFFZijOswzLc3x+35VCGQWWYxYf36y/m3ufeeS73y9fP3BnvqGRZlgAAAMTl5uwJAAAA2BdxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAATnbsvGKpWqueYB4Ikjy7KDn5GeA7RktvQcru4AAADB2XR1R+H4V3gAnMu5V1noOUBLY3vP4eoOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMSdluv5559XqVRnz56tXRIYGHjkyJFH38OlS5e8vb0ffXxCQsLgwYM1Gk1gYOBjTBSAEBzfc15//fWoqCgvL6+QkJDly5dXVlY+xnQhFuJOi+bn57ds2TKHPZ1Op1u6dOm6desc9owAXIqDe05JScmuXbtu376dnJycnJy8du1ahz01XA1xp0V79dVXMzIyDh48aLnq3r17kyZNat++vV6vX7hwYVlZmbL89u3bI0eO1Gq1PXv2PHPmTO34oqKiBQsWdOzY0d/ff+rUqQUFBZb7HDVq1OTJkzt27GinwwHg4hzcc3bv3v3cc8/5+fkNHjx49uzZdTdHS0PcadG8vb3XrVu3cuXKqqqqeqsmTpzYqlWrjIyM8+fPX7hwIS4uTlk+adIkvV5///79Y8eO/f73v68dP2PGjNzc3IsXL2ZnZ/v6+s6aNcthRwHgSeHEnnP69Ol+/fo169HgiSLbwPY9wImGDRv2zjvvVFVVdevWbfv27bIsBwQEHD58WJbl9PR0SZLy8vKUkSkpKR4eHmazOT09XaVSFRYWKssTEhI0Go0sy5mZmSqVqna80WhUqVQGg6HB501KSgoICLD30cGunPW3T895ojmr58iyvGbNmk6dOhUUFNj1AGE/tv/tuzs6XsHFuLu7b9q0ac6cOTExMbUL79y5o9Fo/P39lYfh4eEmk6mgoODOnTt+fn5t27ZVlnfp0kX5ISsrS6VS9e/fv3YPvr6+OTk5vr6+jjoOAE8Gx/ecDRs2JCYmpqam+vn52euo4PKIO5B+8YtffPjhh5s2bapdotfrS0tL8/Pzle6TlZWlVqt1Ol2HDh0MBkNFRYVarZYk6f79+8r4kJAQlUqVlpZGvgHwUI7sOStWrDh06NDXX3+t1+vtdkB4AvDZHUiSJG3ZsuWjjz4qLi5WHnbt2nXgwIFxcXElJSW5ubmrVq2KjY11c3Pr1q1bnz59tm3bJklSRUXFRx99pIyPiIgYMWLEq6++eu/ePUmS8vPzDxw4YPksZrPZZDIp79mbTKaKigoHHR4AF+OYnrN48eJDhw4dP35cp9OZTCb+I3pLRtyBJEnSgAEDRo8eXftfIVQq1YEDB8rKyjp16tSnT59evXpt3bpVWbV///6UlJS+ffsOHz58+PDhtXtISkoKDg4ePHhwmzZtBg4cePr0actn2b17t6enZ0xMTG5urqenJxeWgRbLAT3HYDBs3779xo0b4eHhnp6enp6eUVFRjjk6uCBV7SeAmrKxSiVJki17APAkctbfPj0HaJls/9vn6g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCcu+27UKlUtu8EAB4RPQfA4+LqDgAAEJxKlmVnzwEAAMCOuLoDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA4m24zyM2+WoKm3aqA2mgJHH8bC+qqJaDnwBpbeg5XdwAAgOCa4UskuFGhqGx/tURtiMq5r6SpK1HRc2CN7bXB1R0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAghM/7ly9enXs2LE6nc7Ly6tbt27x8fFN2Em3bt2OHDnyiIOffvrp5OTkBlclJCQMHjxYo9EEBgY2YRpoXi5VG6+//npUVJSXl1dISMjy5csrKyubMBm4ApeqK3qOS3Gp2mhpPUfwuFNTU/PSSy8FBwd///33BQUFycnJ4eHhTpyPTqdbunTpunXrnDgHKFytNkpKSnbt2nX79u3k5OTk5OS1a9c6cTJoMlerK3qO63C12mhxPUe2ge17sLfbt29LknT16lXLVXfv3o2Ojvb39+/QocNrr71WWlqqLH/w4MGCBQtCQkLatGnTp0+f9PR0WZYjIyMPHz6srB02bFhMTExlZaXRaJw/f75er9fpdFOmTMnPz5dleeHCha1atdLpdKGhoTExMQ3OKikpKSAgwF7H3HxsOb/URtNqQ7FmzZrnnnuu+Y+5+Tjr/FJX9Bx7bOsYrlkbipbQcwS/uhMcHNy1a9f58+f/5S9/yc7Orrtq4sSJrVq1ysjIOH/+/IULF+Li4pTl06ZNu3Xr1jfffGMwGPbu3dumTZvaTW7dujVkyJChQ4fu3bu3VatWM2bMyM3NvXjxYnZ2tq+v76xZsyRJ2r59e1RU1Pbt27Oysvbu3evAY8XjceXaOH36dL9+/Zr/mGF/rlxXcC5Xro0W0XOcm7YcIDc3d8WKFX379nV3d+/cuXNSUpIsy+np6ZIk5eXlKWNSUlI8PDzMZnNGRoYkSTk5OfV2EhkZuXr1ar1ev2vXLmVJZmamSqWq3YPRaFSpVAaDQZbl3r17K89iDa+0XIQL1oYsy2vWrOnUqVNBQUEzHmmzc9b5pa7oOfbY1mFcsDbkFtNzxI87tYqLiz/88EM3N7fLly+fOHFCo9HUrvrpp58kScrNzU1JSfHy8rLcNjIyMiAgYMCAASaTSVly8uRJNze30Dq0Wu2VK1dkWo/N2zqe69TG+vXrw8PDs7KymvX4mh9x51G4Tl3Rc1yN69RGy+k5gr+ZVZe3t3dcXJyHh8fly5f1en1paWl+fr6yKisrS61WK29wlpWV3bt3z3Lzjz76yN/ff9y4cWVlZZIkhYSEqFSqtLS0rP968OBBVFSUJElubi3otyoGF6mNFStWJCYmfv3116GhoXY4Sjiai9QV
XJCL1EaL6jmC/5Hcv39/2bJlFy9eLC0tLSwsfO+996qqqvr379+1a9eBAwfGxcWVlJTk5uauWrUqNjbWzc0tIiJixIgRc+fOvXfvnizLP/zwQ22pqdXqQ4cO+fj4vPzyy8XFxcrIV199VRmQn59/4MABZWRgYOD169cbnI/ZbDaZTFVVVZIkmUymiooKh/wa0ABXq43FixcfOnTo+PHjOp3OZDIJ/59CReVqdUXPcR2uVhstruc49+KSvRmNxjlz5nTp0sXT01Or1Q4ZMuTLL79UVt25c2fChAk6nS4oKGjBggUlJSXK8sLCwjlz5nTo0KFNmzZ9+/a9fv26XOeT8NXV1b/61a+effbZwsJCg8GwePHisLAwb2/v8PDwJUuWKHs4depUly5dtFrtxIkT681n586ddX/5dS9guiBbzi+18Vi18eDBg3p/mBEREY77XTw+Z51f6oqeY49tHcOlaqMF9hxV7V6aQKVSKU/f5D3AldlyfqkNsTnr/FJXYqPnwBrbz6/gb2YBAAAQdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIzt32XShfyw5YojZgD9QVrKE2YA1XdwAAgOBUsiw7ew4AAAB2xNUdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgbLqrMvevbAmadmcmaqMlcPxdu6irloCeA2ts6Tlc3QEAAIJrhu/M4r7MorL91RK1ISrnvpKmrkRFz4E1ttcGV3cAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMEJG3fOnDkzevTodu3aaTSap556atWqVaWlpQ543urq6sWLF7dr187Hx2fGjBlFRUUNDvP29lbVoVarKyoqHDC9FstZ9ZCbmzt58mSdTqfVakeOHHn9+vUGhyUkJAwePFij0QQGBtZdPmvWrLp1kpyc7IA5o2noOaiLnuNqxIw7n3/++QsvvNC7d+9vvvkmLy8vMTExLy8vLS3tUbaVZbmqqqrJT71+/frjx4+fP3/+5s2bt27dmj9/foPDcnNzi/9rwoQJ48ePV6vVTX5SNM6J9bBgwQKDwfDjjz/m5OQEBQVNmjSpwWE6nW7p0qXr1q2zXBUXF1dbKtHR0U2eCeyKnoO66DmuSLaB7XuwB7PZrNfr4+Li6i2vqamRZfnu3bvR0dH+/v4dOnR47bXXSktLlbWRkZGrVq0aOnRo165dU1NTjUbj/Pnz9Xq9TqebMmVKfn6+Mmzr1q2hoaG+vr5BQUHvvPOO5bO3b9/+k08+UX5OTU11d3d/8OBBI7PNz89Xq9UnT5608ajtwZbz6zq14dx6iIiI2LNnj/Jzamqqm5tbdXW1takmJSUFBATUXRIbGxsfH9/UQ7cjZ51f16mruug5zYWeQ8+xphkSi3Of3h6UBH3x4sUG1w4aNGjatGlFRUX37t0bNGjQvHnzlOWRkZE9e/YsKChQHo4ZM2b8+PH5+fllZWVz584dPXq0LMvXr1/39va+ceOGLMsGg+G7776rt/N79+7VfWrlqvKZM2came2WLVu6dOliw+HakRitx4n1IMvy8uXLX3jhhdzcXKPROHPmzAkTJjQy1QZbT1BQkF6v79ev3+bNmysrKx//F2AXxJ266DnNhZ5Dz7GGuNOAEydOSJKUl5dnuSo9Pb3uqpSUFA8PD7PZLMtyZGTkjh07lOWZmZkqlap2mNFoVKlUBoMhIyPD09Nz3759RUVFDT71jz/+KElSZmZm7RI3N7evvvqqkdl27dp1y5Ytj3+UjiBG63FiPSiDhw0bpvw2unfvnp2d3chULVvP8ePHz549e+PGjQMHDnTo0MHy9aKzEHfqouc0F3qOspyeY8n28yvgZ3f8/f0lScrJybFcdefOHY1GowyQJCk8PNxkMhUUFCgPg4ODlR+ysrJUKlX//v3DwsLCwsJ69erl6+ubk5MTHh6ekJDw8ccfBwYG/t///d/XX39db/9t2rSRJMloNCoPi4uLa2pqfHx8Pv3009pPftUdn5qampWVNWvWrOY6dlhyYj3Isvziiy+Gh4cXFhaWlJRMnjx56NChpaWl1urB0ogRIwYNGtS5c+eJEydu3rw5MTHRll8F7ISeg7roOS7KuWnLHpT3Td944416y2tqauol69TUVLVaXZusDx8+rCy/efPmz372M4PBYO0pysrKfvOb37Rt21Z5L7au9u3b/+lPf1J+PnXqVOPvo0+ZMmXq1KmPd3gOZMv5dZ3acGI95OfnSxZvNPzzn/+0th/LV1p17du3r127do0dqgM56/y6Tl3VRc9pLvQcZTk9x1IzJBbnPr2dHD161MPDY/Xq1RkZGSaT6YcffliwYMGZM2dqamoGDhw4c+bM4uLi+/fvDxkyZO7cucomdUtNluWXX345Ojr67t27sizn5eXt379fluVr166lpKSYTCZZlnfv3t2+fXvL1rNq1arIyMjMzMzc3Nznnntu2rRp1iaZl5fXunVr1/zAoEKM1iM7tR5CQ0PnzJljNBrLy8s3bNjg7e1dWFhoOcPq6ury8vKEhISAgIDy8nJln2azec+ePVlZWQaD4dSpUxEREbVv8zsdcaceek6zoOfU7oGeUw9xx6rTp0+//PLLWq3Wy8vrqaeeeu+995QPwN+5c2fChAk6nS4oKGjBggUlJSXK+HqlZjAYFi9eHBYW5u3tHR4evmTJElmWL1y48Oyzz/r4+LRt23bAgAF///vfLZ+3srJy0aJFWq3W29t72rRpRqPR2gzff/99l/3AoEKY1iM7rx7S0tJGjBjRtm1bHx+fQYMGWfuXZufOnXWvuWo0GlmWzWbziy++6Ofn17p16/Dw8JUrV5aVlTX7b6ZpiDuW6Dm2o+fUbk7Pqcf286uq3UsTKO8C2rIHuDJbzi+1ITZnnV/qSmz0HFhj+/kV8KPKAAAAdRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBudu+i4d+wypaLGoD9kBdwRpqA9ZwdQcAAAjOpu/MAgAAcH1c3QEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACM6muypz/8qWoGl3ZqI2WgLH37WLumoJ6Dmwxpaew9UdAAAguGb4zizuyywq218tURuicu4raepKVPQcWGN7bXB1BwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdSZKkbt26qVQqlUoVEBAQGxtbUlLShJ3odLqbN282+9zgXNQG7IG6gjXUhp0Qd/5j//79siyfPXv23LlzmzZtcvZ04EKoDdgDdQVrqA17IO78j4iIiDFjxly+fFl5+NZbb4WEhPj4+AwcOPDChQvKQp1O98EHHwwYMKBz586LFi2y3MmpU6dCQ0O//fZbx80b9kdtwB6oK1hDbTQv4s7/MBqNKSkpPXr0UB4+9dRT3333XWF
h4aRJk6ZOnVp7v85Lly6dPXv2+++/P3HiREpKSt09HDt2LCYm5ujRowMGDHD07GFP1AbsgbqCNdRGM5NtYPseXERkZKRWqw0ICHB3dx81alRZWZnlGK1We+fOHVmW/fz8vv32W2XhvHnztmzZovzs5+e3cePG0NDQK1euOGzmdmXL+aU2qA2RnrfZUVcNoufI1IYVtp9fru78x5YtWy5cuJCQkHDmzJmMjAxl4aefftqvX7+OHTuGhYUVFxcXFBQoy9u1a6f84OHhUfdzZL/97W8nT54cFRXl4MnDrqgN2AN1BWuoDXsg7vyHVqsNDg6eNm3ar3/96xUrVkiSdOPGjTfeeCMxMfH27dtZWVk+Pj7yw758bv/+/QcPHtyxY4dDpgwHoTZgD9QVrKE27KEZvhFdMMuWLevUqVNaWlp1dbVGo4mIiJAkKTEx8cGDBw/dtkOHDikpKcOGDfPy8nrllVfsP1k4FLUBe6CuYA210YyIO/UFBgbGxsZu2rQpKSlp4sSJvXv39vf3HzRoUMeOHR9l87CwsJSUlOeff97Dw2P69On2ni0cidqAPVBXsIbaaEaqh14Qa2xjlUqSJFv2AFdmy/mlNsTmrPNLXYmNngNrbD+/fHYHAAAIjrgDAAAER9wBAACCI+4AAADBEXcaYDQao6OjNRpNaGjoZ599ZjnAZDKp/hff4gbgobZu3dq7d293d3flZip17dq1KyIiQq1Wd+/ePT09vd5ak8k0b968Tp06aTSaZ5555uTJk7Wr5s2bFxwcrFarO3XqRCN6QjVyfhXp6elqtXrmzJkNbm6tBhqptxaIuNOApUuXmkymnJycTz75ZO7cuVevXq03wMPDo/y/cnJyWrVqNX78eKdMFQ5w7dq14cOH+/j46PX6DRs2NDjGWlt56aWXajNx586dHTJfuK6OHTtu3Lhx3Lhx9Zbv37//3Xff/cMf/nD//v2EhAStVltvQEVFhZeX18GDB7Ozs6dNmzZu3Lj8/HxlVUxMzL/+9a+CgoK9e/e+9957x48fd8SRoFk1cn4VCxcufPbZZ61tbq0GrNVbC2XLN1DYvgcXZDKZPD09z507pzycMGHCW2+91cj4HTt2DBgwwCFTczRbzq9ItfHMM88sWbKkoqIiPT29ffv2R44csRyzb9++v/71r+PHj4+Pj6+7fOTIkQkJCUoyrqiocNSU7c5Z51eMuoqNja1XJz169EhOTn70PbRt2/bUqVP1Ft69ezcwMPCbb75phik6CT1HUe/8fvbZZzNnzoyPj58xY0bjGzZYA5b19iSy/fxydae+zMzM8vLyXr16KQ979ep15cqVRsbv3bs3JibGIVODc1y9enX69OmtW7eOjIwcMmSI5dU+SZImTZo0ZswYHx8fy1WtWrXy8PDw8PBo3bq1/SeLJ09paemVK1euXr0aFBSk1+tXrlxpNpsbGX/z5s2SkpLu3bvXLnnjjTf8/f3DwsLWrl07cOBA+08ZdlTv/BYVFa1bt+79999vfCtq4KGIO/WVlJSo1eraf5l8fHzqfulaPdeuXUtLS5s6daqjZgcnGDt27J///GeTyXTt2rVz586NHDnysTaPj48PCQl56aWXvv32WzvNEE+0nJwcSZLOnj37ww8/nDp1av/+/R9//LG1wWVlZdOnT3/77bfbt29fu3Dt2rXffffdzp07V65c2WAcx5PC8vyuXr16zpw5QUFBjW9IDTwUcac+b2/vioqKyspK5WFRUZG3t7ckSTt27FA+gTFmzJjawXv37h0zZkztF9JCSJs3b/7iiy88PT2joqJmz57dt2/fR9920aJFR44cOX78eL9+/X7+859nZ2fbb554Qnl6ekqS9Oabb/r5+XXu3HnOnDnHjh2TGuo5FRUVv/zlL3v06LFmzZq6e/Dx8QkJCXnllVdefPHFxMRExx8CmoXl+U1LSztx4sTSpUvrjbSsDWrgoYg79YWHh3t4eFy+fFl5+P333/fo0UOSpIULFyrv/33xxRfKqpqamsTERN7JEltFRcXw4cPnzZtnMpkyMjI+//zznTt3Slbir6XRo0f36dOne/fuGzdu7N69+1dffeWoieOJodfrtVqtco986b83y5csek5VVVV0dLRWq92zZ0/tGEu8Z/qEavD8/uMf/8jMzAwKCtLpdL/73e8OHDigvNyy/PeoLmqgQcSd+tRq9ZQpU9avX280GlNTU//2t7/NmDGjwZEnTpyoqKgYNWqUg2cIR8rMzMzMzFy0aJFarQ4PD4+Ojk5JSZEe1m4a1Lp168Y/kwHhVVdXm0wms9lsNpuVHyRJUqlUMTExH3zwgcFguHXr1p49eywztNlsnjFjRlVV1R//+MeqqiqTyVRTUyNJksFg2LFjR1ZW1r///e/ExMQvv/xy7NixTjgw2Mba+Z09e/aNGzcuXbp06dKl2bNnjxo1SrnyV1cjNdBgvbVctnzO2fY9uCaDwTBhwgRPT8+OHTsmJiZaGzZ9+vTaf/OEZMv5FaY2ysrKfH19t23bVllZmZ2d/fTTT2/YsMFyWFVVVXl5+cyZM998883y8vLq6mpZlouKipKSku7evZuXl7d161ZPT88bN244/Ajswlnn90mvq/j4+Lrtd9u2bcrysrKy2NjYNm3aBAcHv/3222azud6GP/30U73WnZSUJMtyUVHRyJEj27Ztq9Fo+vXrd/ToUUcfUrNqsT3H2vmty9r/zGqkBqzV25PI9vPLN6LDKr6dWJGamhofH3/16lVvb+/x48dv27bNw8Oj3pgVK1Zs3ry59uG2bduWLl1aVFQ0evToy5cv19TU9OzZ8913333hhRccO3d74RvRYQ/0HFhj+/kl7sAqWg+sIe7AHug5sMb288tndwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABOdu+y4auZ05WjhqA/ZAXcEaagPWcHUHAAAIzqbbDAIAALg+ru4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcP8PtzynrHMtHtYAAAAASUVORK5CYII="
-/>
+![Binding to sockets and block:block distribution](misc/mpi_socket_block_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to sockets and block:block distribution"
 
-srun --ntasks 32 --cpu_bind=sockets --distribution=block:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=sockets --distribution=block:block ./application
+    ```
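+
+Reasoning about these distributions is easier if you know the socket and core layout of the node
+type you are using. A quick way to inspect it (a sketch; it assumes you have a shell on a compute
+node) is:
+
+```bash
+# Show sockets, cores per socket, and the NUMA layout of the current node
+lscpu | grep -E 'Socket|Core|NUMA'
+```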
 
 #### Distribution: block:cyclic
 
-The block:cyclic distribution will allocate the tasks of your job in
+The `block:cyclic` distribution will allocate the tasks of your job in
 alternation between the first node and the second node while filling the
 sockets linearly.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3daXQUVdrA8Wq27AukIQECCQkQCAoCIov44iAHFJARCJtggsqWIyLiCOggghsoisOAo4zoSE6cZGQTj8twDmGZAdzZiSAkhCVASITurJ2EpN4PNdMnk+7qrqR64+b/+5RU31t1763nPjypNB2DLMsSAACAuJp5ewAAAADuRbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10JPZ4PB4KpxALjtyLLs4SuSc4CmTE/O4ekOAAAQnK6nOwrP/4QHwLu8+5SFnAM0NfpzDk93AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3mq7777/fYDAcOnTIeiQqKurzzz/XfoajR48GBwdrb5+WljZkyJCgoKCoqKgGDBSAEDyfc5599tnExMTAwMDOnTsvXry4qqqqAcOFWCh3mrSIiIjnn3/eY5czGo0LFy5csWKFx64IwKd4OOeUlpZu3Ljx0qVLmZmZmZmZL7/8sscuDV9DudOkzZo1KycnZ9u2bbYvXb16ddKkSe3atYuOjp4/f355ebly/NKlS6NGjQoPD7/jjjsOHjxobV9cXJyamtqpU6e2bdtOnTq1qKjI9pyjR4+ePHlyp06d3DQdAD7Owznnww8/vO+++yIiIoYMGfL444/X7Y6mhnKnSQsODl6xYsULL7xQXV1d76WJEye2bNkyJyfnp59+Onz48KJFi5TjkyZNio6Ovnbt2tdff/3BBx9Y20+fPr2goODIkSMXL14MCwubOXOmx2YB4HbhxZxz4MCB/v37u3Q2uK3IOug/A7xo2LBhr776anV1dY8ePdavXy/LcmRk5I4dO2RZPn36tCRJ169fV1pmZWX5+/vX1NScPn3aYDDcuHFDOZ6WlhYUFCTLcm5ursFgsLY3m80Gg8FkMtm9bkZGRmRkpLtnB7fy1t4n59zWvJVzZFlevnx5ly5dioqK3DpBuI/+vd/C0+UVfEyLFi1Wr149e/bs5ORk68HLly8HBQW1bdtW+TYuLs5isRQVFV2+fDkiIqJ169bK8W7duilf5OXlGQyGAQMGWM8QFhaWn58fFhbmqXkAuD14Pue88sor6enpe/fujYiIcNes4PModyD9/ve/f+edd1avXm09Eh0dXVZWVlhYqGSfvLw8Pz8/o9HYsWNHk8lUWVnp5+cnSdK1a9eU9p07dzYYDMeOHaO+AeCUJ3PO0qVLt2/fvn///ujoaLdNCLcB3rsDSZKkNWvWrFu3rqSkRPm2e/fugwYNWrRoUWlpaUFBwbJly1JSUpo1a9ajR4++ffu+++67kiRVVlauW7dOaR8fHz9y5MhZs2ZdvXpVkqTCwsKtW7faXqWmpsZisSi/s7dYLJWVlR6aHgAf45mcs2DBgu3bt+/atctoNFosFv4jelNGuQNJkqSBAweOGTPG+l8hDAbD1q1by8vLu3Tp0rdv3969e69du1Z5acuWLVlZWf369Rs+fPjw4cOtZ8jIyOjQocOQIUNCQkIGDRp04MAB26t8+OGHAQEBycnJBQUFAQEBPFgGmiwP5ByTybR+/fqzZ8/GxcUFBAQEBAQkJiZ6ZnbwQQbrO4Aa09lgkCRJzxkA3I68tffJOUDTpH/v83QHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIroX+UxgMBv0nAQCNyDkAGoqnOwAAQHAGWZa9PQYAAAA34ukOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQnK6PGeTDvpqCxn1UAbHRFHj+YyyIq6aAnAM1enIOT3cAAIDgXPBHJPigQlHp/2mJ2BCVd3+SJq5ERc6BGv2xwdMdAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOPHLnezs7IcffthoNAYGBvbo0WPJkiWNOEmPHj0+//xzjY3vuuuuzMxMuy+lpaUNGTIkKCgoKiqqEcOAa/lUbDz77LOJiYmBgYGdO3devHhxVVVVIwYDX+BTcUXO8Sk+FRtNLecIXu7U1tY++OCDHTp0OHHiRFFRUWZmZlxcnBfHYzQaFy5cuGLFCi+OAQpfi43S0tKNGzdeunQpMzMzMzPz5Zdf9uJg0Gi+FlfkHN/ha7HR5HKOrIP+M7jbpUuXJEnKzs62fenKlStJSUlt27bt2LHjU089VVZWphy/efNmampq586dQ0JC+vbte/r0aVmWExISduzYobw6bNiw5OTkqqoqs9k8b9686Ohoo9E4ZcqUwsJCWZbnz5/fsmVLo9EYExOTnJxsd1QZGRmRkZHumrPr6Lm/xEbjYkOxfPny++67z/Vzdh1v3V/iipzjjr6e4ZuxoWgKOUfwpzsdOnTo3r37vHnz/vGPf1y8eLHuSxMnTmzZsmVOTs5PP/10+PDhRYsWKcenTZt24cKFb7/91mQybd68OSQkxNrlwoUL995779ChQzdv3tyyZcvp06cXFBQcOXLk4sWLYWFhM2fOlCRp/fr1iYmJ69evz8vL27x5swfniobx5dg4cOBA//79XT9nuJ8vxxW8y5djo0nkHO9WWx5QUFCwdOnSfv36tWjRomvXrhkZGbIsnz59WpKk69evK22ysrL8/f1rampycnIkScrPz693koSEhJdeeik6Onrjxo3KkdzcXIPBYD2D2Ww2GAwmk0mW5T59+ihXUcNPWj7CB2NDluXly5d36dKlqKjIhTN1OW/dX+KKnOOOvh7jg7EhN5mcI365Y1VSUvLOO+80a9bs+PHju3fvDgoKsr50/vx5SZIKCgqysrICAwNt+yYkJERGRg4cONBisShH9uzZ06xZs5g6wsPDT506JZN6dPf1PN+JjZUrV8bFxeXl5bl0fq5HuaOF78QVOcfX+E5sNJ2cI/gvs+oKDg5etGiRv7//8ePHo6Ojy8rKCgsLlZfy8vL8/PyUX3CWl5dfvXrVtvu6devatm07bty48vJySZI6d+5sMBiOHTuW9183b95MTEyUJKlZsya0qmLwkdhY
unRpenr6/v37Y2Ji3DBLeJqPxBV8kI/ERpPKOYJvkmvXrj3//PNHjhwpKyu7cePGqlWrqqurBwwY0L1790GDBi1atKi0tLSgoGDZsmUpKSnNmjWLj48fOXLknDlzrl69KsvyyZMnraHm5+e3ffv20NDQhx56qKSkRGk5a9YspUFhYeHWrVuVllFRUWfOnLE7npqaGovFUl1dLUmSxWKprKz0yDLADl+LjQULFmzfvn3Xrl1Go9FisQj/n0JF5WtxRc7xHb4WG00u53j34ZK7mc3m2bNnd+vWLSAgIDw8/N577/3qq6+Uly5fvjxhwgSj0di+ffvU1NTS0lLl+I0bN2bPnt2xY8eQkJB+/fqdOXNGrvNO+Fu3bj322GP33HPPjRs3TCbTggULYmNjg4OD4+LinnnmGeUM+/bt69atW3h4+MSJE+uN5/3336+7+HUfYPogPfeX2GhQbNy8ebPexoyPj/fcWjSct+4vcUXOcUdfz/Cp2GiCOcdgPUsjGAwG5fKNPgN8mZ77S2yIzVv3l7gSGzkHavTfX8F/mQUAAEC5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABNdC/ymUP8sO2CI24A7EFdQQG1DD0x0AACA4gyzL3h4DAACAG/F0BwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACE7Xpyrz+ZVNQeM+mYnYaAo8/6ldxFVTQM6BGj05h6c7AABAcC74m1l8LrOo9P+0RGyIyrs/SRNXoiLnQI3+2ODpDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEJyw5c7BgwfHjBnTpk2boKCgO++8c9myZWVlZR647q1btxYsWNCmTZvQ0NDp06cXFxfbbRYcHGyow8/Pr7Ky0gPDa7K8FQ8FBQWTJ082Go3h4eGjRo06c+aM3WZpaWlDhgwJCgqKioqqe3zmzJl14yQzM9MDY0bjkHNQFznH14hZ7nzxxRcPPPBAnz59vv322+vXr6enp1+/fv3YsWNa+sqyXF1d3ehLr1y5cteuXT/99NO5c+cuXLgwb948u80KCgpK/mvChAnjx4/38/Nr9EXhmBfjITU11WQy/frrr/n5+e3bt580aZLdZkajceHChStWrLB9adGiRdZQSUpKavRI4FbkHNRFzvFFsg76z+AONTU10dHRixYtqne8trZWluUrV64kJSW1bdu2Y8eOTz31VFlZmfJqQkLCsmXLhg4d2r17971795rN5nnz5kVHRxuNxilTphQWFirN1q5dGxMTExYW1r59+1dffdX26u3atfv444+Vr/fu3duiRYubN286GG1hYaGfn9+ePXt0ztod9Nxf34kN78ZDfHz8pk2blK/37t3brFmzW7duqQ01IyMjMjKy7pGUlJQlS5Y0dupu5K376ztxVRc5x1XIOeQcNS6oWLx7eXdQKugjR47YfXXw4MHTpk0rLi6+evXq4MGD586dqxxPSEi44447ioqKlG/Hjh07fvz4wsLC8vLyOXPmjBkzRpblM2fOBAcHnz17VpZlk8n0888/1zv51atX615aeap88OBBB6Nds2ZNt27ddEzXjcRIPV6MB1mWFy9e/MADDxQUFJjN5hkzZkyYMMHBUO2mnvbt20dHR/fv3//NN9+sqqpq+AK4BeVOXeQcVyHnkHPUUO7YsXv3bkmSrl+/bvvS6dOn676UlZXl7+9fU1Mjy3JCQsKGDRuU47m5uQaDwdrMbDYbDAaTyZSTkxMQEPDZZ58VFxfbvfSvv/4qSVJubq71SLNmzb755hsHo+3evfuaNWsaPktPECP1eDEelMbDhg1TVqNnz54XL150MFTb1LNr165Dhw6dPXt269atHTt2tP150Vsod+oi57gKOUc5Ts6xpf/+CvjenbZt20qSlJ+fb/vS5cuXg4KClAaSJMXFxVkslqKiIuXbDh06KF/k5eUZDIYBAwbExsbGxsb27t07LCwsPz8/Li4uLS3tL3/5S1RU1P/93//t37+/3vlDQkIkSTKbzcq3JSUltbW1oaGhn3zyifWdX3Xb7927Ny8vb+bMma6aO2x5MR5kWR4xYkRcXNyNGzdKS0snT548dOjQsrIytXiwNXLkyMGDB3ft2nXixIlvvvlmenq6nqWAm5BzUBc5x0d5t9pyB+X3ps8991y947W1tfUq67179/r5+Vkr6x07dijHz50717x5c5PJpHaJ8vLyN954o3Xr1srvYutq167d3/72N+Xrffv2Of49+pQpU6ZOndqw6XmQnvvrO7HhxXgoLCyUbH7R8N1336mdx/Ynrbo+++yzNm3aOJqqB3nr/vpOXNVFznEVco5ynJxjywUVi3cv7yY7d+709/d/6aWXcnJyLBbLyZMnU1NTDx48WFtbO2jQoBkzZpSUlFy7du3ee++dM2eO0qVuqMmy/NBDDyUlJV25ckWW5evXr2/ZskWW5V9++SUrK8tisciy/OGHH7Zr18429SxbtiwhISE3N7egoOC+++6bNm2a2iCvX7/eqlUr33zDoEKM1CN7NR5iYmJmz55tNpsrKipeeeWV4ODgGzdu2I7w1q1bFRUVaWlpkZGRFRUVyjlramo2bdqUl5dnMpn27dsXHx9v/TW/11Hu1EPOcQlyjvUM5Jx6KHdUHThw4KGHHgoPDw8MDLzzzjtXrVqlvAH+8uXLEyZMMBqN7du3T01NLS0tVdrXCzWTybRgwYLY2Njg4OC4uLhnnnlGluXDhw/fc889oaGhrVu3Hjhw4L/+9S/b61ZVVT399NPh4eHBwcHTpk0zm81qI3zrrbd89g2DCmFSj+y9eDh27NjIkSNbt24dGho6ePBgtX9p3n///brPXIOCgmRZrqmpGTFiRERERKtWreLi4l544YXy8nKXr0zjUO7YIufoR86xdifn1KP//hqsZ2kE5beAes4AX6bn/hIbYvPW/SWuxEbOgRr991fAtyoDAADURbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10L/KZz+hVU0WcQG3IG4ghpiA2p4ugMAAASn629mAQAA+D6e7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMHp+lRlPr+yKWjcJzMRG02B5z+1i7hqCsg5UKMn5/B0BwAACM4FfzOLz2UWlf6flogNUXn3J2niSlTkHKjRHxs83QEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3JEkSerRo4fBYDAYDJGRkSkpKaWlpY04idFoPHfunMvHBu8iNuAOxBXUEBtuQrnzH1u2bJFl+dChQz/++OPq1au9PRz4EGID7kBcQQ2x4Q6UO/8jPj5+7Nixx48fV7598cUXO3fuHBoaOmjQoMOHDysHjUbj22+/PXDgwK5duz799NO2J9m3b19MTMz333/
vuXHD/YgNuANxBTXEhmtR7vwPs9mclZXVq1cv5ds777zz559/vnHjxqRJk6ZOnWr9vM6jR48eOnToxIkTu3fvzsrKqnuGr7/+Ojk5eefOnQMHDvT06OFOxAbcgbiCGmLDxWQd9J/BRyQkJISHh0dGRrZo0WL06NHl5eW2bcLDwy9fvizLckRExPfff68cnDt37po1a5SvIyIiXn/99ZiYmFOnTnls5G6l5/4SG8SGSNd1OeLKLnKOTGyo0H9/ebrzH2vWrDl8+HBaWtrBgwdzcnKUg5988kn//v07deoUGxtbUlJSVFSkHG/Tpo3yhb+/f933kf3pT3+aPHlyYmKihwcPtyI24A7EFdQQG+5AufMf4eHhHTp0mDZt2pNPPrl06VJJks6ePfvcc8+lp6dfunQpLy8vNDRUdvbH57Zs2bJt27YNGzZ4ZMjwEGID7kBcQQ2x4Q4u+Ivognn++ee7dOly7NixW7duBQUFxcfHS5KUnp5+8+ZNp307duyYlZU1bNiwwMDAJ554wv2DhUcRG3AH4gpqiA0XotypLyoqKiUlZfXq1RkZGRMnTuzTp0/btm0HDx7cqVMnLd1jY2OzsrLuv/9+f3//Rx991N2jhScRG3AH4gpqiA0XMjh9IOaos8EgSZKeM8CX6bm/xIbYvHV/iSuxkXOgRv/95b07AABAcJQ7AABAcJQ7AABAcJQ7AABAcJQ7AABAcD5R7pjN5qSkpKCgoJiYmE8//dS2gcVimTt3bpcuXYKCgu6+++49e/Y4ONvp06f9/PxmzJih1mDjxo3x8fF+fn49e/Y8ffq0bYNffvll+PDhoaGh0dHRr7zyit2TrF27tk+fPi1atFA+A0r7XNT6apyj2nU1zv2243i+irlz53bo0MHPz69Lly5qfz34wQcfNPxX165dbRtYLBbD/7J7qpMnTw4bNiwkJKR79+5ff/11g8asZS5qbbTMUa2v07nfphysp9M9rtbX6d53sE8bnce07H2nbRzsfcd9Ha+Vg75a8qRa3GrJk3qo5Vgte1wtNpzufQdr5XTvO+jrdO876Ksl76nFpNO1cnBdLXlSbV5a8mTj+ES5s3DhQovFkp+f//HHH8+ZMyc7O7teg8rKysDAwG3btl28eHHatGnjxo0rLCxUO9v8+fPvuecetVe3bNny2muv/fWvf7127VpaWlp4eLhtm+Tk5N69excVFWVlZb333ns7d+60bdOpU6fXX3993LhxDZ2LWl+Nc1S7rpa5344cz1eRnJz8ww8/FBUVbd68edWqVbt27bLbLC0traKioqKiwu5N8ff3r/iv/Pz8li1bjh8/vl6b6urqRx55ZMSIEb/99tv69eunTJly6dIl7WPWMhe1Nlrm6OD8jud+m1Kbr5Y97mCdHe99B/u00XlMy9532sbB3nfQ1+laOeirJU+qxa2WPKmH3furZY+r9dWy9x2sldO973idHe99x7HheO+r9dWyVmp9NeZJtXlpyZONpOcPbuk/gyzLFoslICDgxx9/VL6dMGHCiy++qHz95JNPPvnkk7ZdWrduvW/fPrttPv300xkzZixZsmT69OnWg3Xb9OrVKzMz0/acddsEBgZa/+ja+PHj33jjDbXxpKSkLFmypHFzqddX+xzV+tqdux567q9LYsPKdr52Y+PKlStRUVHffvutbZtRo0ZlZGTYntnueTZs2DBw4EDbNidOnGjZsmV1dbVy/He/+93q1avVzqN2f7XMxUFsOJijWl+1uevh2vur57q289Wyx9X6at/7Cus+1ZnH1I5r7Os076n11b5Wtn0btFZ149bBWrk25zjYR2p7XK1vg/a+wvb+asxjdvvKGva+bd8G5T216zpdq3p9G7pW9ealsF0r/TnH+5+qnJubW1FR0bt3b+Xb3r17HzlyRPl61KhRtu3PnTtXWlras2dP2zbFxcUrVqzYv3//unXr6naxtikrKzt16lR2dnb79u2bN2/+2GOPvfbaa82bN693nocffvjvf/977969z58//+OPPy5btszBePTMRY2DOapRm7uo6q3Jc889l5aWVlxcvG7dukGDBtlts2TJksWLFycmJq5cuXLgwIF22yg2b948c+ZMtWtZ1dbWnjx50nEbLTT21TJHNXbnLiSNe1xNg/Z+3X2qM4+pHdfS12neU+vb0LWqd12Na2Ubtw7WymM07nE1Tve+2v2tR2Nf7Xvftq/2vKc2Zi1r5WC+DtbK7rzcSE+tpP8Msiz/8MMPfn5+1m/Xrl37wAMPqDUuKysbMGDAihUr7L66YMGCt956S5ZltSccZ86ckSRpxIgRRUVFZ8+ejY+P//Of/2zbLC8vT/nTJJIkvfTSSw4GX68CbdBc1H7ycDxHtb5O594Ieu6vS2LDyvGTMFmWzWbzhQsXPvroo/Dw8FOnTtk2+PLLLw8fPpydnf3iiy+GhIRcuHBB7VTZ2dmtWrX67bffbF+qqqqKjY19+eWXy8vLv/zyy+bNm48fP76hY3Y6F7U2Tueo1lf73LVz7f3Vc91689W4x+32lRuy9+vtU5fkMS1737aN9r1fr2+D1sr2uhrXyjZuHayVa3OO2l5zsMfV+jZo76vdRy17325fjXvftq/2va82Zi1rVa+v9rVyMC93PN3x/nt3goODKysrq6qqlG+Li4uDg4PttqysrHzkkUd69eq1fPly21ePHTu2e/fuhQsXOrhWQECAJEl/+MMfIiIiunbtOnv2bNt3UVVWVg4fPnzu3LkWiyUnJ+eLL754//33XT4XNY7nqEbL3MUWGhrauXPnJ554YsSIEenp6bYNxowZ07dv3549e77++us9e/b85ptv1E61efPmsWPHtmnTxvalli1b7tixY/fu3ZGRkatWrRo3blx0dLQrp+GQ0zmq0T53AWjZ42q0733bfao/j2nZ+7ZttO99277a18q2r/a1so1b/XlSJwd7XI32vd+4HO64r5a9b7evxr3vYMxO18q2r/a1anROaxzv/zIrLi7O39//+PHjd999tyRJJ06c6NWrl22z6urqpKSk8PDwTZs2KX87o55///vfubm57du3lySpvLy8trY2Ozv78OHDddtER0eHh4dbu9s9T25ubm5u7tNPP+3n5xcXF5eUlJSVlZWamurCuahxOkc1WubedLRq1cppg5qaGrsv1dbWpqenv/fee2p977rrrgMHDihf9+vXb8KECY0epx5O5+igo9rcxaBlj6vRuPft7lOdeUzL3rfbRuPet9tX41rZ7du4PKnErc48qZPTPa5Gy95vdA7X3tfu3tfSV23vO+jrdK3U+jYiTzY6p2nn/ac7fn5+U6ZMWblypdls3rt37z//+c/p06crL82aNWvWrFmSJNXU1EyfPr26uvqjjz6qrq62WCy1tbX12jz++ONnz549evTo0aNHH3/88dGjR1t/UrG2MRgMycnJb7/9tslkunDhwqZNm8aOHVuvTWxsbFhY2AcffFBdXX3p0qVt27b16dOnXhtJkm7dumWxWGpqampqapQvNM5Fra+WOar1dTD3253d+Up11sRkMm3YsCEvL++333
5LT0//6quvHn744XptSkpKMjMzr169WlhY+O677/78888jR46s10axe/fuysrK0aNH1x1D3TbffffdtWvX8vPzlyxZUlZWNnXqVNs2amN2Ohe1NlrmqNbXwdxvd3bnq2WPq/XVsvfV9qmePKZl76u10ZL31PpqWSu1vlrWSi1uHayVW2ND4XSPq/V1uvcd3Eene1+tr5a9r9ZXS95zMGana+Wgr9O1cjAvB/dOLz2/CdN/BoXJZJowYUJAQECnTp3S09Otx0eOHPnRRx/Jsnz+/Pl6w7a+29zapq56v8Ou26a8vDwlJSUkJKRDhw5//OMfa2pqbNvs2bNnwIABQUFBkZGR8+bNq6iosHfzt+cAAAHsSURBVG2zZMmSuuN59913Nc5Fra/GOapdV23ueui5v66KDbX5WtekuLh41KhRrVu3DgoK6t+//86dO619rW3MZvPQoUNDQ0ODg4MHDRq0e/du2zaKRx99dP78+fXGULfNCy+8EBYWFhAQMGbMmPPnz9ttozZmp3NRa6Nljmp9HcxdD1fdXz3XVVtPLXtcra/Tve9gnzY6j2nZ+w7aWKnlPQd9na6Vg75O18pB3KqtlZ640hIbsoY9rtbX6d53sFZO975aXy17X62vlrznOK4cr5WDvk7XysG81NZKT2z85wy6Ouu+vAPV1dW9evWqqqq6jdr4Wl+dXJV6XM7X7jux4fvXvR3vUVPrK3sp59yOa9XU+squyDkG61kaQfldnZ4zwJfpub/Ehti8dX+JK7GRc6BG//31/nt3AAAA3IpyBwAACI5yBwAACI5yBwAACI5yBwAACM4Fn6rc0M+ORNNBbMAdiCuoITaghqc7AABAcLo+dwcAAMD38XQHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAI7v8BE+cBiPwLm7cAAAAASUVORK5CYII="
-/>
+![Binding to sockets and block:cyclic distribution](misc/mpi_socket_block_cyclic.png)
+{: align="center"}
+
+!!! example "Binding to sockets and block:cyclic distribution"
 
+    ```bash
     #!/bin/bash
     #SBATCH --nodes=2
     #SBATCH --tasks-per-node=16
     #SBATCH --cpus-per-task=1
 
     srun --ntasks 32 --cpu_bind=sockets --distribution=block:cyclic ./application
+    ```
 
 ## Hybrid Strategies
 
 ### Default Binding and Distribution Pattern
 
-The default binding pattern of hybrid jobs will split the cores
-allocated to a rank between the sockets of a node. The example shows
-that Rank 0 has 4 cores at its disposal. Two of them on first socket
-inside the first node and two on the second socket inside the first
-node.
+The default binding pattern of hybrid jobs will split the cores allocated to a rank between the
+sockets of a node. The example shows that Rank 0 has 4 cores at its disposal: two of them on the
+first socket of the first node and two on the second socket of the first node.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3de1iUdf7/8XsQA+SoDgdhZHA4CaUlpijmYdXooJvrsbxqzXa1dCsPbFlt5qF2O2xbXV52bdtlV25c7iVrhrVXWVaEupJ2gjxUYAIDgjgcZJCDIIf7+8f9a36zjCAwM/eMn3k+/oJ77rnf9z3z5u1r7hnn1siyLAEAAIjLy9U7AAAA4FzEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCctz131mg0jtoPANccWZZVrsjMATyZPTOHszsAAEBwdp3dUaj/Cg+Aa7n2LAszB/A09s8czu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9zxXDNmzNBoNF9++aVlSURExPvvv9/3LXz//fcBAQF9Xz8zMzMtLc3f3z8iIqIfOwpACOrPnPXr1ycnJw8ZMiQ6OnrDhg2XL1/ux+5CLMQdjzZ8+PDHH39ctXJarXbdunVbtmxRrSIAt6LyzGlqanrzzTfPnj2blZWVlZW1efNm1UrD3RB3PNqKFSuKi4vfe+8925uqqqoWL14cFham0+keeeSRlpYWZfnZs2dvu+22kJCQG264IS8vz7L+xYsXV69ePXLkyNDQ0Hvuuae2ttZ2m3feeeeSJUtGjhzppMMB4OZUnjk7duyYOnXq8OHD09LSHnjgAeu7w9MQdzxaQEDAli1bnnrqqfb29m43LVy4cPDgwcXFxd9++21+fn5GRoayfPHixTqd7vz58/v37//HP/5hWf/ee+81mUwFBQXl5eXBwcHLly9X7SgAXCtcOHOOHDkyfvx4hx4NrimyHezfAlxo+vTpzz33XHt7++jRo7dv3y7Lcnh4+L59+2RZLiwslCSpurpaWTMnJ8fX17ezs7OwsFCj0Vy4cEFZnpmZ6e/vL8tySUmJRqOxrN/Q0KDRaMxm8xXr7t69Ozw83NlHB6dy1d8+M+ea5qqZI8vypk2bRo0aVVtb69QDhPPY/7fvrXa8gpvx9vZ+8cUXV65cuWzZMsvCiooKf3//0NBQ5VeDwdDa2lpbW1tRUTF8+PChQ4cqy+Pj45UfjEajRqOZMGGCZQvBwcGVlZXBwcFqHQeAa4P6M+fZZ5/dtWtXbm7u8OHDnXVUcHvEHUjz5s175ZVXXnzxRcsSnU7X3NxcU1OjTB+j0ejj46PVaqOiosxmc1tbm4+PjyRJ58+fV9aPjo7WaDTHjx8n3wC4KjVnzpNPPpmdnX3o0CGdTue0A8I1gM/uQJIk6eWXX962bVtjY6Pya0JCwqRJkzIyMpqamkwm08aNG++//34vL6/Ro0ePGzfutddekySpra1t27ZtyvqxsbHp6ekrVqyoqqqSJKmmpmbv3r22VTo7O1tbW5X37FtbW9va2lQ6PABuRp2Zs2bNmuzs7AMHDmi12tbWVv4juicj7kCSJCk1NXXOnDmW/wqh0Wj27t3b0tIyatSocePGjR079tVXX1Vuevfdd3NyclJSUmbOnDlz5kzLFnbv3h0ZGZmWlhYYGDhp0qQjR47YVtmxY4efn9+yZctMJpOfnx8nlgGPpcLMMZvN27dv//nnnw0Gg5+fn5+fX3JysjpHBzeksXwCaCB31mgkSbJnCwCuRa7622fmAJ7J/r99zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBedu/CY1GY/9GAKCPmDkA+ouzOwAAQHAaWZZdvQ8AAABOxNkdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADB2fU1g3zZlycY2FcV0BueQP2vsaCvPAEzBz2xZ+ZwdgcAAAjOAReR4IsKRWX/qyV6Q1SufSVNX4mKmYOe2N8bnN0BAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjx486PP/7461//WqvVDhkyZPTo0U888cQANjJ69Oj333+/jyvfdNNNWVlZV7wpMzMzLS3N398/IiJiALsBx3Kr3li/fn1ycvKQIUOio6M3bNhw+fLlAewM3IFb9RUzx624VW942swRPO50dXXdfvvtkZGRJ0+erK2tzcrKMhgMLtwfrVa7bt26LVu2uHAfoHC33mhqanrzzTfPnj2blZWVlZW1efNmF+4MBszd+oqZ4z7crTc8bubIdrB/C8529uxZSZJ+/PFH25vOnTu3aNGi0NDQqKiohx9+uLm5WVleX1+/evXq6OjowMDAcePGFRYWyrKcmJi4b98+5dbp06cvW7bs8uXLDQ0Nq1at0ul0Wq327rvvrqmpkWX5kUceGTx4sFar1ev1y5Ytu+Je7d69Ozw83FnH7Dj2PL/0xsB6Q7Fp06apU6c6/pgdx1XPL33FzHHGfdXhnr2h8ISZI/jZncjIyISEhFWrVv373/8uLy+3vmnhwoWDBw8uLi7+9ttv8/PzMzIylOVLly4tKys7evSo2Wx+5513AgMDLXcpKyubMmXKLbfc8s477wwePPjee+81mUwFBQXl5eXBwcHLly+XJGn79u3Jycnbt283Go3vvPOOiseK/nHn3jhy5Mj48eMdf8xwPnfuK7iWO/eGR8wc16YtFZhMpieffDIlJcXb2zsuLm737t2yLBcWFkqSVF1drayTk5Pj6+vb2dlZXFwsSVJlZWW3jSQmJj7zzDM6ne7NN99UlpSUlGg0GssWGhoaNBqN2WyWZfnGG29UqvSEV1puwg17Q5blTZs2jRo1qra21oFH6nCuen7pK2aOM+6rGjfsDdljZo74cceisbHxlVde8fLyOnHixOeff+7v72+5qbS0VJIkk8mUk5MzZMgQ2/smJiaGh4enpqa2trYqS7744gsvLy+9lZCQkB9++EFm9Nh9X/W5T29s3brVYDAYjUaHHp/jEXf6wn36ipnjbtynNzxn5gj+Zpa1gICAjIwMX1/fEydO6HS65ubmmpoa5Saj0ejj46O8wdnS0lJVVWV7923btoWGht51110tLS2SJEVHR2s0muPHjxt/UV9fn5ycLEmSl5cHPapicJPeePLJJ3ft2nXo0CG9
Xu+Eo4Ta3KSv4IbcpDc8auYI/kdy/vz5xx9/vKCgoLm5+cKFCy+88EJ7e/uECRMSEhImTZqUkZHR1NRkMpk2btx4//33e3l5xcbGpqenP/jgg1VVVbIsnzp1ytJqPj4+2dnZQUFBd9xxR2Njo7LmihUrlBVqamr27t2rrBkREVFUVHTF/ens7GxtbW1vb5ckqbW1ta2tTZWHAVfgbr2xZs2a7OzsAwcOaLXa1tZW4f9TqKjcra+YOe7D3XrD42aOa08uOVtDQ8PKlSvj4+P9/PxCQkKmTJny0UcfKTdVVFQsWLBAq9WOGDFi9erVTU1NyvILFy6sXLkyKioqMDAwJSWlqKhItvokfEdHx29/+9uJEydeuHDBbDavWbMmJiYmICDAYDCsXbtW2cLBgwfj4+NDQkIWLlzYbX/eeOMN6wff+gSmG7Ln+aU3+tUb9fX13f4wY2Nj1Xss+s9Vzy99xcxxxn3V4Va94YEzR2PZygBoNBql/IC3AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK/ibWQAAAMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAILztn8TymXZAVv0BpyBvkJP6A30hLM7AABAcBpZll29DwAAAE7E2R0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgODs+lZlvr/SEwzsm5noDU+g/rd20VeegJmDntgzczi7AwAABOeAa2bxvcyisv/VEr0hKte+kqavRMXMQU/s7w3O7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAghM27uTl5c2ZM2fYsGH+/v5jxozZuHFjc3OzCnU7OjrWrFkzbNiwoKCge++99+LFi1dcLSAgQGPFx8enra1Nhd3zWK7qB5PJtGTJEq1WGxIScttttxUVFV1xtczMzLS0NH9//4iICOvly5cvt+6TrKwsFfYZA8PMgTVmjrsRM+785z//mTVr1o033nj06NHq6updu3ZVV1cfP368L/eVZbm9vX3Apbdu3XrgwIFvv/32zJkzZWVlq1atuuJqJpOp8RcLFiyYP3++j4/PgIuidy7sh9WrV5vN5tOnT1dWVo4YMWLx4sVXXE2r1a5bt27Lli22N2VkZFhaZdGiRQPeEzgVMwfWmDnuSLaD/Vtwhs7OTp1Ol5GR0W15V1eXLMvnzp1btGhRaGhoVFTUww8/3NzcrNyamJi4cePGW265JSEhITc3t6GhYdWqVTqdTqvV3n333TU1Ncpqr776ql6vDw4OHjFixHPPPWdbPSws7O2331Z+zs3N9fb2rq+v72Vva2pqfHx8vvjiCzuP2hnseX7dpzdc2w+xsbFvvfWW8nNubq6Xl1dHR0dPu7p79+7w8HDrJffff/8TTzwx0EN3Ilc9v+7TV9aYOY7CzGHm9MQBicW15Z1BSdAFBQVXvHXy5MlLly69ePFiVVXV5MmTH3roIWV5YmLiDTfcUFtbq/w6d+7c+fPn19TUtLS0PPjgg3PmzJFluaioKCAg4Oeff5Zl2Ww2f/fdd902XlVVZV1aOaucl5fXy96+/PLL8fHxdhyuE4kxelzYD7Isb9iwYdasWSaTqaGh4b777luwYEEvu3rF0TNixAidTjd+/PiXXnrp8uXL/X8AnIK4Y42Z4yjMHGZOT4g7V/D5559LklRdXW17U2FhofVNOTk5vr6+nZ2dsiwnJia+/vrryvKSkhKNRmNZraGhQaPRmM3m4uJiPz+/PXv2XLx48YqlT58+LUlSSUmJZYmXl9fHH3/cy94mJCS8/PLL/T9KNYgxelzYD8rK06dPVx6NpKSk8vLyXnbVdvQcOHDgyy+//Pnnn/fu3RsVFWX7etFViDvWmDmOwsxRljNzbNn//Ar42Z3Q0FBJkiorK21vqqio8Pf3V1aQJMlgMLS2ttbW1iq/RkZGKj8YjUaNRjNhwoSYmJiYmJixY8cGBwdXVlYaDIbMzMy///3vERER06ZNO3ToULftBwYGSpLU0NCg/NrY2NjV1RUUFPTPf/7T8skv6/Vzc3ONRuPy5csddeyw5cJ+kGV59uzZBoPhwoULTU1NS5YsueWWW5qbm3vqB1vp6emTJ0+Oi4tbuHDhSy+9tGvXLnseCjgJMwfWmDluyrVpyxmU903/+Mc/dlve1dXVLVnn5ub6+PhYkvW+ffuU5WfOnBk0aJDZbO6pREtLy/PPPz906FDlvVhrYWFhO3fuVH4+ePBg7++j33333ffcc0//Dk9F9jy/7tMbLuyHmpoayeaNhmPHjvW0HdtXWtb27NkzbNiw3g5VRa56ft2nr6wxcxyFmaMsZ+bYckBicW15J/nggw98fX2feeaZ4uLi1tbWU6dOrV69Oi8vr6ura9KkSffdd19jY+P58+enTJny4IMPKnexbjVZlu+4445FixadO3dOluXq6up3331XluWffvopJyentbVVluUdO3aEhYXZjp6NGzcmJiaWlJSYTKapU6cuXbq0p52srq6+7rrr3PMDgwoxRo/s0n7Q6/UrV65saGi4dOnSs88+GxAQcOHCBds97OjouHTpUmZmZnh4+KVLl5RtdnZ2vvXWW0aj0Ww2Hzx4MDY21vI2v8sRd7ph5jgEM8eyBWZON8SdHh05cuSOO+4ICQkZMmTImDFjXnjhBeUD8BUVFQsWLNBqtSNGjFi9enVTU5OyfrdWM5vNa9asiYmJCQgIMBgMa9eulWU5Pz9/4sSJQUFBQ4cOTU1NPXz4sG3dy5cvP/rooyEhIQEBAUuXLm1oaOhpD//617+67QcGFcKMHtl1/XD8+PH09PShQ4cGBQVNnjy5p39p3njjDetzrv7+/rIsd3Z2zp49e/jw4dddd53BYHjqqadaWloc/sgMDHHHFjPHfswcy92ZOd3Y//xqLFsZAOVdQHu2AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK+BHlQEAAKwRdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwXnbv4mrXmEVHovegDPQV+gJvYGecHYHAAAIzq5rZgEAALg/zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARn17cq8/2VnmBg38xEb3gC9b+1i77yBMwc9MSemcPZHQAAIDgHXDPLVa/wqKtOXXt42mPlaXVdxdMeZ0+raw9Pe6w8ra49OLsDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCcw+KO2Wz29vaOiYnR6/V/+MMf+v6f8o1G4+zZs3u69cMPPzQYDDExMZmZmWrWnT9/fkhIyKJFi3pawRl1S0tLZ86cGRUVlZSU9Mknn6hWt6WlJSUlRafT6fX6bdu29XGDfUdv2F9X1N6wh5OeX0mSWlpa9Hr9unXr1Kzr7++v0+l0Ot3ixYvVrHv27NmZM2eGhYUlJSW1tra
qU7egoED3C29v77y8vD5us4/oDYfUFa03ZDtYb6G+vj4qKkqW5dbW1gkTJnz88cd93EhpaemsWbOueFN7e7vBYDAajTU1NdHR0Q0NDerUlWU5Nzc3Ozt74cKF1gudXbe4uPjo0aOyLJ86dSo8PLyzs1Oduh0dHefPn5dlua6uLjIyUvm5W93+ojccW1ek3rCHCs+vLMsbN25cvHjx2rVr1ayr1+ttF6pQd/bs2Tt27JBluby8vL29XbW6ipqamhEjRnR0dNjW7S96w+F1hekNhePfzPLx8Zk4ceKZM2ckSWpra5s1a1ZKSsq4ceMOHTokSZLRaExNTX3ooYduvfXWRx991PqOeXl5kydPrqmpsSz5+uuvExIS9Hq9VqudMWNGTk6OOnUlSZoxY0ZgYKDKx2swGCZNmiRJ0vXXXy9JUnNzszp1Bw0aFB4eLklSR0dHQECAn59fXw58AOgNesMZHPv8lpSU/Pjjj3feeafKdV1yvKWlpUajccWKFZIkjRw50tu7t+/Zd8bxvvfee3fdddegQYMG9lBcFb1Bb/x/9mQl6y1YUt7FixfHjh2bm5sry3JnZ2d9fb0sy1VVVWlpabIsl5aWBgcH19TUyLI8bdq0kpISJeXl5eWlpqaaTCbr7b/77ru///3vlZ//9Kc/bd++XZ26is8++6wvr+AdXleW5U8//XTKlClq1m1oaIiOjh40aNAbb7xxxbr9RW84o64sRG/YQ4XjXbhwYWFh4c6dO6/6Ct6xdQMCAgwGw/jx4z/55BPV6n766aczZsyYP3/+TTfdtHnzZjWPVzFz5sycnJwr1u0vesOxdUXqjf+3Bbvu/L+HPWjQIL1ef9111y1btkxZ2NXV9fTTT6elpU2fPj04OFiW5dLS0mnTpim3rly5Mjc3t7S0VK/XjxkzxnKe3KKP/6Q5vK7iqv+kOaluWVlZUlLSTz/9pHJd5V6jRo0qLy+3rdtf9Aa94QzOPt5PPvlk/fr1siz3/k+aMx5no9Eoy3J+fn5kZGRdXZ06dT/++GNfX9/CwsJLly5NnTrV8maEOn1lMpkiIyMt71bI7j1z6A3Vjld2dG8oHPlmVkREhNFoLCsr++qrr3744QdJkvbv319cXHzo0KGDBw/6+voqqw0ePFj5wcvLq6OjQ5KksLAwPz+/EydOdNtgZGTkuXPnlJ8rKysjIyPVqeuq45UkyWw233XXXdu3bx89erSadRUxMTGpqamnTp3q/4NxFfQGveEMDj/eY8eO7dmzJyYm5rHHHnv77befffZZdepKkqTX6yVJGjduXHJy8unTp9WpGxUVlZiYmJiY6Ovre+utt548eVK145Uk6b333ps3b56T3smiN+iNbhz/2Z2IiIgtW7Zs3bpVkqT6+nqDweDt7f3111+bTKae7hIUFPTBBx889thj33zzjfXyiRMnFhUVlZeX19XV5ebm9v6BeQfW7RcH1r18+fKCBQvWr18/a9YsNetWVVUp0aGiouLYsWPJyclXrT4w9Aa94QwOPN7NmzdXVFQYjca//e1vv/vd7zZt2qRO3bq6ugsXLkiSVFRUdOrUqdjYWHXq3nDDDV1dXRUVFZ2dnf/973+TkpLUqavYs2fPkiVLeqloP3qD3rBwyvfuLF68+MSJE4WFhfPmzfv666+XLl36r3/9Kzo6upe7REREZGdnP/DAA0VFRZaF3t7er7322owZM1JSUrZu3RoUFKROXUmSbrvttqVLl+7fv1+n0xUUFKhT9/PPPz98+PDTTz+t/B88o9GoTt26urrZs2dHRUXNmjXrz3/+s/JKwknoDXrDGRz4/Lqk7tmzZ1NTU6Oion7zm9+8/vrroaGh6tTVaDTbtm1LT09PSkq6/vrr586dq05dSZJMJtPp06enTZvWe0X70Rv0hkJjeUtsIHfWaCRJsmcL1BW17rW4z9SlLnWv3brX4j5TV826fKsyAAAQHHEHAAAIjrgDAAAEp17c6eUKR1e9CNGAlfZ8pSEVLgbU09VVrnoBFHv0dJUTZ1+kxh5/+ctf4uPj4+Li1q9fb3tTQkJCQkLCvn377Kxi22b96skBd2m3O/bSk7Yr29OlV9zhnnrSdmWndqk6mDkWzJxumDk9rSzyzLHnS3v6vgXbKxw1NDR0dXUpt17xIkQOqWt7pSFL3Z4uBuSQugrrq6tYH+8VL4DiqLrdrnJiXVfR7UIkjqo74PuePXtWr9e3tLS0t7enpKR88803ln3+7rvvbrrppkuXLtXV1SmT1J663dqsvz3Ze5f2vW4vPWm78lW7tO91FT31pO3KvXep/dNjYJg5vWPm9GVNZo5nzhyVzu7YXuFo7NixlZWVyq19vwhRf9leachS19kXA+p2dRXr43WeUpurnNjWdfZFavorICDA19e3ra1NuQTd8OHDLftcWFiYmprq6+s7bNiwkSNHHj582J5C3dqsvz054C7tdsdeetJ2ZXu61HaHe+lJ5/0Nugozh5nTE2aOZ84cleLOuXPnoqKilJ91Ol1lZWVWVtZVvz/AgT777LO4uLjAwEDruhcvXtTr9ZGRkevXr7/qF7f014YNG55//nnLr9Z16+rqYmNjb7755gMHDji26JkzZ3Q63YIFC8aNG7dly5ZudRUqfLVXv4SEhGRkZERHR0dGRs6bN2/UqFGWfR4zZsyRI0caGxvPnz+fn5/v2Nntnj1py4Fd2ktP2nJel6rDPZ9fZo47YOZ45szp7RqnTqWETXWUl5evXbs2Ozu7W92goKCysjKj0Thz5sw5c+aMHDnSURUPHDgQHR2dmJh49OhRZYl13VOnTun1+oKCgrlz5548eXLYsGGOqtvZ2Xns2LHvv/9er9enp6dPmjTp9ttvt16hurq6sLBw+vTpjqpov/Ly8ldffbWkpMTX1/dXv/rV3LlzLY/VmDFjVq1aNX369IiIiLS0tN4vyWs/d+hJW47q0t570pbzutRV3OH5Zea4A2aOZ84clc7u9PEKR85w1SsNOeNiQL1fXaUvF0AZmKte5cSpF6kZmIKCgptvvlmr1QYEBMycOfOrr76yvvWRRx7Jz8/fv39/fX19XFycA+u6c0/asr9L+3jFHwvndak63Pn5Zea4FjOnL8SbOSrFHdsrHG3evNlsNju7ru2Vhix1nXoxINurq1jq9usCKP1le5WTbo+zu51VliQpPj7+m2++aWpqamtrO3z4cEJCgvU+l5WVSZL04Ycfms3m1NRUB9Z1w5605cAu7aUnbTm1S9Xhhs8vM8dNMHM8dObY8znnfm3hgw8+GDVqVHR09M6dO2VZHjlyZGNjo3JTenq6Vqv18/OLiorKz893YN2PPvpo0KBBUb8oLS211D158mRSUlJkZGRCQsKuXbv6srUBPGI7d+5UPpFuqVtQUBAXFxcZGTl69Oi9e/c6vO4XX3yRlJQUHx+/bt06+X8f5/Pnz0dGRnZ2dvZxU/Z0SL/u+/zzz8fFxcXGxmZkZMj/u88TJ04MCwu7+eabT506ZWdd2zbrV0/23qV9r9tLT9qufNUu7dfxKmx70nblq3ap/dNjYJg5V8XM6Qtmjg
fOHPXijrWioqJHH32UuqLWtee+nvZYeVpdO11zx0tdderac19Pe6w8ra4FlwilrlPqXov7TF3qUvfarXst7jN11azLRSQAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAME54GsGITZ7vvILYnPVV41BbMwc9ISvGQQAAOiRXWd3AAAA3B9ndwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACC4/wNeW27o5DoAAAACSURBVCEI/r8gawAAAABJRU5ErkJggg=="
-/>
+![sockets binding and block:block distribution](misc/hybrid.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+!!! example "Binding to sockets and block:block distribution"
 
-export OMP_NUM_THREADS=4
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=4
+    #SBATCH --cpus-per-task=4
 
-srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
-```
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
+    ```
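+
+Where the OpenMP threads of a rank are placed inside the cores assigned to that rank is decided by
+the OpenMP runtime, not by Slurm. A minimal sketch, assuming a runtime that honors the standard
+`OMP_PLACES` and `OMP_PROC_BIND` variables, pins each rank's threads to its own cores:
+
+```bash
+# Pin each rank's OpenMP threads to cores of its own allocation, packed close together
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+export OMP_PLACES=cores
+export OMP_PROC_BIND=close
+srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
+```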
 
 ### Core Bound
 
@@ -195,36 +232,37 @@ srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
 
 This method allocates the tasks linearly to the cores.
 
-\<img alt=""
-src="<data:;base64,...>"
-/>
+![Binding to cores and block:block distribution](misc/hybrid_cores_block_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+!!! example "Binding to cores and block:block distribution"
 
-export OMP_NUM_THREADS=4
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=4
+    #SBATCH --cpus-per-task=4
 
-srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=block:block ./application
-```
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=block:block ./application
+    ```
 
 #### Distribution: cyclic:block
 
-The cyclic:block distribution will allocate the tasks of your job in
-alternation between the first node and the second node while filling the
-sockets linearly.
+The `cyclic:block` distribution will allocate the tasks of your job in alternation between the first
+node and the second node while filling the sockets linearly.
 
-\<img alt=""
-src="data:;base64,..."
-/>
+![Binding to cores and cyclic:block distribution](misc/hybrid_cores_cyclic_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+!!! example "Binding to cores and cyclic:block distribution"
 
-export OMP_NUM_THREADS=4<br /><br />srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=cyclic:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=4
+    #SBATCH --cpus-per-task=4
+
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=cyclic:block ./application
+    ```
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
index ea3343fe1a5d21a296207fc374aa181e3ccc0855..38d6686d7a655c1c5d7161d6607be9d6f55d8b5c 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
@@ -12,6 +12,15 @@ from the very beginning, you should be familiar with the concept of checkpointin
 Another motivation is to use checkpoint/restart to split long running jobs into several shorter
 ones. This might improve the overall job throughput, since shorter jobs can "fill holes" in the job
 queue.
+Here is an extreme example from the literature of how missing checkpoints can waste large
+computing resources:
+
+!!! cite "Adams, D. The Hitchhiker's Guide to the Galaxy"
+
+    Earth was a supercomputer constructed to find the question to the answer to Life, the
+    Universe, and Everything by a race of hyper-intelligent pan-dimensional beings. Unfortunately,
+    10 million years later, and five minutes before the program had run to completion, the Earth
+    was destroyed by Vogons.
 
 If you wish to do checkpointing, your first step should always be to check if your application
 already has such capabilities built-in, as that is the most stable and safe way of doing it.
@@ -21,7 +30,7 @@ Abaqus, Amber, Gaussian, GROMACS, LAMMPS, NAMD, NWChem, Quantum Espresso, STAR-C
 
 In case your program does not natively support checkpointing, there are attempts at creating generic
 checkpoint/restart solutions that should work application-agnostic. One such project which we
-recommend is [Distributed MultiThreaded CheckPointing](http://dmtcp.sourceforge.net) (DMTCP).
+recommend is [Distributed Multi-Threaded Check-Pointing](http://dmtcp.sourceforge.net) (DMTCP).
 
 DMTCP is available on ZIH systems after having loaded the `dmtcp` module
 
@@ -47,8 +56,8 @@ checkpoint/restart bits transparently to your batch script. You just have to spe
 total runtime of your calculation and the interval in which you wish to do checkpoints. The latter
 (plus the time it takes to write the checkpoint) will then be the runtime of the individual jobs.
 This should be targeted at below 24 hours in order to be able to run on all
-[haswell64 partitions](../jobs_and_resources/system_taurus.md#run-time-limits). For increased
-fault-tolerance, it can be chosen even shorter.
+[partitions haswell64](../jobs_and_resources/partitions_and_limits.md#runtime-limits). For
+increased fault-tolerance, it can be chosen even shorter.
 
 To use it, first add a `dmtcp_launch` before your application call in your batch script. In the case
 of MPI applications, you have to add the parameters `--ib --rm` and put it between `srun` and your
@@ -85,7 +94,7 @@ about 2 days in total.
 
 !!! Hints
 
-    - If you see your first job running into the timelimit, that probably
+    - If you see your first job running into the time limit, that probably
     means the timeout for writing out checkpoint files does not suffice
     and should be increased. Our tests have shown that it takes
     approximately 5 minutes to write out the memory content of a fully
@@ -95,7 +104,7 @@ about 2 days in total.
     content is rather incompressible, it might be a good idea to disable
     the checkpoint file compression by setting: `export DMTCP_GZIP=0`
     - Note that all jobs the script deems necessary for your chosen
-    timelimit/interval values are submitted right when first calling the
+    time limit/interval values are submitted right when first calling the
     script. If your applications take considerably less time than what
     you specified, some of the individual jobs will be unnecessary. As
     soon as one job does not find a checkpoint to resume from, it will
@@ -115,7 +124,7 @@ What happens in your work directory?
 
 If you wish to restart manually from one of your checkpoints (e.g., if something went wrong in your
 later jobs or the jobs vanished from the queue for some reason), you have to call `dmtcp_sbatch`
-with the `-r, --resume` parameter, specifying a cpkt\_\* directory to resume from.  Then it will use
+with the `-r, --resume` parameter, specifying a `cpkt_*` directory to resume from. Then it will use
 the same parameters as in the initial run of this job chain. If you wish to adjust the time limit,
 for instance, because you realized that your original limit was too short, just use the `-t, --time`
 parameter again on resume.
@@ -126,7 +135,7 @@ If for some reason our automatic chain job script is not suitable for your use c
 just use DMTCP on its own. In the following we will give you step-by-step instructions on how to
 checkpoint your job manually:
 
-* Load the dmtcp module: `module load dmtcp`
+* Load the DMTCP module: `module load dmtcp`
 * DMTCP usually runs an additional process that
 manages the creation of checkpoints and such, the so-called `coordinator`. It must be started in
 your batch script before the actual start of your application. To help you with this process, we
@@ -138,9 +147,9 @@ first checkpoint has been created, which can be useful if you wish to implement
 chaining on your own.
 * In front of your program call, you have to add the wrapper
 script `dmtcp_launch`.  This will create a checkpoint automatically after 40 seconds and then
-terminate your application and with it the job. If the job runs into its timelimit (here: 60
+terminate your application and with it the job. If the job runs into its time limit (here: 60
 seconds), the time to write out the checkpoint was probably not long enough. If all went well, you
-should find cpkt\* files in your work directory together with a script called
+should find `cpkt*` files in your work directory together with a script called
 `./dmtcp_restart_script.sh` that can be used to resume from the checkpoint.
 
 ???+ example
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b395644aa972113ac887c764c9a651f56826093
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -0,0 +1,127 @@
+# ZIH Systems
+
+The ZIH systems comprise the *High Performance Computing and Storage Complex* and its
+extension *High Performance Computing – Data Analytics*. In total, they offer scientists
+about 60,000 CPU cores and a peak performance of more than 1.5 quadrillion floating point
+operations per second. The architecture is specifically tailored to data-intensive computing,
+Big Data analytics, and artificial intelligence methods. Together with extensive capabilities
+for energy measurement and performance monitoring, it provides ideal conditions to achieve the
+ambitious research goals of the users and of ZIH.
+
+## Login Nodes
+
+- Login-Nodes (`tauruslogin[3-6].hrsk.tu-dresden.de`)
+  - each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores each)
+    @ 2.50GHz, MultiThreading Disabled, 64 GB RAM, 128 GB SSD local disk
+  - IPs: 141.30.73.\[102-105\]
+- Transfer-Nodes (`taurusexport3/4.hrsk.tu-dresden.de`, DNS Alias
+  `taurusexport.hrsk.tu-dresden.de`)
+  - 2 servers without interactive login, only available via file transfer protocols
+    (`rsync`, `ftp`); see the transfer sketch below
+  - IPs: 141.30.73.82/83
+- Direct access to these nodes is granted via IP whitelisting (contact
+  hpcsupport@zih.tu-dresden.de) - otherwise use TU Dresden VPN.
+
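+For illustration, a minimal file transfer via the export nodes might look like the following
+sketch (user name and target path are placeholders):
+
+```bash
+# copy a local input file to the ZIH systems via the export nodes
+rsync -av ./input.dat <zih-login>@taurusexport.hrsk.tu-dresden.de:/path/to/your/workspace/
+```
+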
+## AMD Rome CPUs + NVIDIA A100
+
+- 32 nodes, each with
+  - 8 x NVIDIA A100-SXM4
+  - 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, MultiThreading disabled
+  - 1 TB RAM
+  - 3.5 TB local storage on NVMe device at `/tmp`
+- Hostnames: `taurusi[8001-8034]`
+- Slurm partition `alpha`; an example resource request is sketched below
+- Dedicated mostly for ScaDS-AI
+
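+For illustration, a resource request for a single GPU on these nodes might look like the
+following sketch (the generic resource syntax, core count, and time limit are assumptions,
+adjust them to your needs):
+
+```bash
+#!/bin/bash
+#SBATCH --partition=alpha
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=6
+#SBATCH --gres=gpu:1             # request one of the eight A100 GPUs of a node
+#SBATCH --time=01:00:00
+
+srun ./my_gpu_application        # placeholder for your actual application
+```
+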
+## Island 7 - AMD Rome CPUs
+
+- 192 nodes, each with
+  - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, MultiThreading enabled
+  - 512 GB RAM
+  - 200 GB `/tmp` on local SSD
+- Hostnames: `taurusi[7001-7192]`
+- Slurm partition `romeo`
+- More information under [Rome Nodes](rome_nodes.md)
+
+## Large SMP System HPE Superdome Flex
+
+- 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
+- 47 TB RAM
+- Currently configured as a single node
+  - Hostname: `taurussmp8`
+- Slurm partition `julia`
+- More information under [HPE SD Flex](sd_flex.md)
+
+## IBM Power9 Nodes for Machine Learning
+
+For machine learning, we have 32 IBM AC922 nodes installed with this configuration:
+
+- 2 x IBM Power9 CPU (2.80 GHz, 3.10 GHz boost, 22 cores)
+- 256 GB RAM DDR4 2666MHz
+- 6x NVIDIA VOLTA V100 with 32GB HBM2
+- NVLINK bandwidth 150 GB/s between GPUs and host
+- Slurm partition `ml`
+- Hostnames: `taurusml[1-32]`
+
+## Island 4 to 6 - Intel Haswell CPUs
+
+- 1456 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
+  @ 2.50GHz, MultiThreading disabled, 128 GB SSD local disk
+- Hostname: `taurusi4[001-232]`, `taurusi5[001-612]`,
+  `taurusi6[001-612]`
+- Varying amounts of main memory (selected automatically by the batch system according to your
+  job requirements; see the sketch after this list)
+  - 1328 nodes with 2.67 GB RAM per core (64 GB total):
+    `taurusi[4001-4104,5001-5612,6001-6612]`
+  - 84 nodes with 5.34 GB RAM per core (128 GB total):
+    `taurusi[4105-4188]`
+  - 44 nodes with 10.67 GB RAM per core (256 GB total):
+    `taurusi[4189-4232]`
+- Slurm Partition `haswell`
+
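+For illustration, a job that needs more memory per core than the default nodes provide can state
+its requirement explicitly; Slurm then selects nodes with sufficient memory (the value below is
+an assumption, adjust it to your needs):
+
+```bash
+#!/bin/bash
+#SBATCH --partition=haswell
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=1
+#SBATCH --mem-per-cpu=10000      # memory per core in MB, targets the larger-memory nodes
+#SBATCH --time=01:00:00
+
+srun ./my_application            # placeholder for your actual application
+```
+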
+??? hint "Node topology"
+
+    ![Node topology](misc/i4000.png)
+    {: align=center}
+
+### Extension of Island 4 with Broadwell CPUs
+
+* 32 nodes, each with 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
+  (**14 cores**), MultiThreading disabled, 64 GB RAM, 256 GB SSD local disk
+* from the users' perspective: Broadwell is like Haswell
+* Hostname: `taurusi[4233-4264]`
+* Slurm partition `broadwell`
+
+## Island 2 Phase 2 - Intel Haswell CPUs + NVIDIA K80 GPUs
+
+* 64 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
+  @ 2.50GHz, MultiThreading Disabled, 64 GB RAM (2.67 GB per core),
+  128 GB SSD local disk, 4x NVIDIA Tesla K80 (12 GB GDDR RAM) GPUs
+* Hostname: `taurusi2[045-108]`
+* Slurm Partition `gpu`
+* Node topology, same as [island 4 - 6](#island-4-to-6-intel-haswell-cpus)
+
+## SMP Nodes - up to 2 TB RAM
+
+- 5 Nodes each with 4x Intel(R) Xeon(R) CPU E7-4850 v3 (14 cores) @
+  2.20GHz, MultiThreading Disabled, 2 TB RAM
+  - Hostname: `taurussmp[3-7]`
+  - Slurm partition `smp2`
+
+??? hint "Node topology"
+
+    ![Node topology](misc/smp2.png)
+    {: align=center}
+
+## Island 2 Phase 1 - Intel Sandybridge CPUs + NVIDIA K20x GPUs
+
+- 44 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2450 (8 cores) @
+  2.10GHz, MultiThreading Disabled, 48 GB RAM (3 GB per core), 128 GB
+  SSD local disk, 2x NVIDIA Tesla K20x (6 GB GDDR RAM) GPUs
+- Hostname: `taurusi2[001-044]`
+- Slurm partition `gpu1`
+
+??? hint "Node topology"
+
+    ![Node topology](misc/i2000.png)
+    {: align=center}
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
deleted file mode 100644
index ff28e9b69d95496f299b80b45179f3787ad996cb..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
+++ /dev/null
@@ -1,110 +0,0 @@
-# Central Components
-
--   Login-Nodes (`tauruslogin[3-6].hrsk.tu-dresden.de`)
-    -   each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 each with 12 cores
-        @ 2.50GHz, MultiThreading Disabled, 64 GB RAM, 128 GB SSD local
-        disk
-    -   IPs: 141.30.73.\[102-105\]
--   Transfer-Nodes (`taurusexport3/4.hrsk.tu-dresden.de`, DNS Alias
-    `taurusexport.hrsk.tu-dresden.de`)
-    -   2 Servers without interactive login, only available via file
-        transfer protocols (rsync, ftp)
-    -   IPs: 141.30.73.82/83
--   Direct access to these nodes is granted via IP whitelisting (contact
-    <hpcsupport@zih.tu-dresden.de>) - otherwise use TU Dresden VPN.
-
-## AMD Rome CPUs + NVIDIA A100
-
-- 32 nodes, each with
-  -   8 x NVIDIA A100-SXM4
-  -   2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, MultiThreading
-      disabled
-  -   1 TB RAM
-  -   3.5 TB /tmp local NVMe device
-- Hostnames: taurusi\[8001-8034\]
-- SLURM partition `alpha`
-- dedicated mostly for ScaDS-AI
-
-## Island 7 - AMD Rome CPUs
-
--   192 nodes, each with
-    -   2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, MultiThreading
-        enabled,
-    -   512 GB RAM
-    -   200 GB /tmp on local SSD local disk
--   Hostnames: taurusi\[7001-7192\]
--   SLURM partition `romeo`
--   more information under [RomeNodes](rome_nodes.md)
-
-## Large SMP System HPE Superdome Flex
-
--   32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
--   47 TB RAM
--   currently configured as one single node
-    -   Hostname: taurussmp8
--   SLURM partition `julia`
--   more information under [HPE SD Flex](sd_flex.md)
-
-## IBM Power9 Nodes for Machine Learning
-
-For machine learning, we have 32 IBM AC922 nodes installed with this
-configuration:
-
--   2 x IBM Power9 CPU (2.80 GHz, 3.10 GHz boost, 22 cores)
--   256 GB RAM DDR4 2666MHz
--   6x NVIDIA VOLTA V100 with 32GB HBM2
--   NVLINK bandwidth 150 GB/s between GPUs and host
--   SLURM partition `ml`
--   Hostnames: taurusml\[1-32\]
-
-## Island 4 to 6 - Intel Haswell CPUs
-
--   1456 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
-    @ 2.50GHz, MultiThreading disabled, 128 GB SSD local disk
--   Hostname: taurusi4\[001-232\], taurusi5\[001-612\],
-    taurusi6\[001-612\]
--   varying amounts of main memory (selected automatically by the batch
-    system for you according to your job requirements)
-    -   1328 nodes with 2.67 GB RAM per core (64 GB total):
-        taurusi\[4001-4104,5001-5612,6001-6612\]
-    -   84 nodes with 5.34 GB RAM per core (128 GB total):
-        taurusi\[4105-4188\]
-    -   44 nodes with 10.67 GB RAM per core (256 GB total):
-        taurusi\[4189-4232\]
--   SLURM Partition `haswell`
--   [Node topology] **todo** %ATTACHURL%/i4000.png
-
-### Extension of Island 4 with Broadwell CPUs
-
--   32 nodes, eachs witch 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
-    (**14 cores**) , MultiThreading disabled, 64 GB RAM, 256 GB SSD
-    local disk
--   from the users' perspective: Broadwell is like Haswell
--   Hostname: taurusi\[4233-4264\]
--   SLURM partition `broadwell`
-
-## Island 2 Phase 2 - Intel Haswell CPUs + NVIDIA K80 GPUs
-
--   64 nodes, each with 2x Intel(R) Xeon(R) CPU E5-E5-2680 v3 (12 cores)
-    @ 2.50GHz, MultiThreading Disabled, 64 GB RAM (2.67 GB per core),
-    128 GB SSD local disk, 4x NVIDIA Tesla K80 (12 GB GDDR RAM) GPUs
--   Hostname: taurusi2\[045-108\]
--   SLURM Partition `gpu`
--   [Node topology] **todo %ATTACHURL%/i4000.png** (without GPUs)
-
-## SMP Nodes - up to 2 TB RAM
-
--   5 Nodes each with 4x Intel(R) Xeon(R) CPU E7-4850 v3 (14 cores) @
-    2.20GHz, MultiThreading Disabled, 2 TB RAM
-    -   Hostname: `taurussmp[3-7]`
-    -   SLURM Partition `smp2`
-    -   [Node topology] **todo** %ATTACHURL%/smp2.png
-
-## Island 2 Phase 1 - Intel Sandybridge CPUs + NVIDIA K20x GPUs
-
--   44 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2450 (8 cores) @
-    2.10GHz, MultiThreading Disabled, 48 GB RAM (3 GB per core), 128 GB
-    SSD local disk, 2x NVIDIA Tesla K20x (6 GB GDDR RAM) GPUs
--   Hostname: `taurusi2[001-044]`
--   SLURM Partition `gpu1`
--   [Node topology] **todo** %ATTACHURL%/i2000.png (without GPUs)
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md
deleted file mode 100644
index d7bdec9afe83de27488e712b07e5fd5bdbcfcd17..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# HPC for Data Analytics
-
-With the HPC-DA system, the TU Dresden provides infrastructure for High-Performance Computing and
-Data Analytics (HPC-DA) for German researchers for computing projects with focus in one of the
-following areas:
-
-- machine learning scenarios for large systems
-- evaluation of various hardware settings for large machine learning
-  problems, including accelerator and compute node configuration and
-  memory technologies
-- processing of large amounts of data on highly parallel machine
-  learning infrastructure.
-
-Currently we offer 25 Mio core hours compute time per year for external computing projects.
-Computing projects have a duration of up to one year with the possibility of extensions, thus
-enabling projects to continue seamlessly. Applications for regular projects on HPC-DA can be
-submitted at any time via the
-[online web-based submission](https://tu-dresden.de/zih/hochleistungsrechnen/zugang/hpc-da)
-and review system. The reviews of the applications are carried out by experts in their respective
-scientific fields. Applications are evaluated only according to their scientific excellence.
-
-ZIH provides a portfolio of preinstalled applications and offers support for software
-installation/configuration of project-specific applications. In particular, we provide consulting
-services for all our users, and advise researchers on using the resources in an efficient way.
-
-\<img align="right" alt="HPC-DA Overview"
-src="%ATTACHURL%/bandwidth.png" title="bandwidth.png" width="250" />
-
-## Access
-
-- Application for access using this
-  [Online Web Form](https://tu-dresden.de/zih/hochleistungsrechnen/zugang/hpc-da)
-
-## Hardware Overview
-
-- [Nodes for machine learning (Power9)](../jobs_and_resources/power9.md)
-- [NVMe Storage](../jobs_and_resources/nvme_storage.md) (2 PB)
-- [Warm archive](../data_lifecycle/file_systems.md#warm-archive) (10 PB)
-- HPC nodes (x86) for DA (island 6)
-- Compute nodes with high memory bandwidth:
-  [AMD Rome Nodes](../jobs_and_resources/rome_nodes.md) (island 7)
-
-Additional hardware:
-
-- [Multi-GPU-Cluster](../jobs_and_resources/alpha_centauri.md) for projects of SCADS.AI
-
-## File Systems and Object Storage
-
-- Lustre
-- BeeGFS
-- Quobyte
-- S3
-
-## HOWTOS
-
-- [Get started with HPC-DA](../software/get_started_with_hpcda.md)
-- [IBM Power AI](../software/power_ai.md)
-- [Work with Singularity Containers on Power9]**todo** Cloud
-- [TensorFlow on HPC-DA (native)](../software/tensorflow.md)
-- [Tensorflow on Jupyter notebook](../software/tensorflow_on_jupyter_notebook.md)
-- Create and run your own TensorFlow container for HPC-DA (Power9) (todo: no link at all in old compendium)
-- [TensorFlow on x86](../software/deep_learning.md)
-- [PyTorch on HPC-DA (Power9)](../software/pytorch.md)
-- [Python on HPC-DA (Power9)](../software/python.md)
-- [JupyterHub](../access/jupyterhub.md)
-- [R on HPC-DA (Power9)](../software/data_analytics_with_r.md)
-- [Big Data frameworks: Apache Spark, Apache Flink, Apache Hadoop](../software/big_data_frameworks.md)
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md
deleted file mode 100644
index 911449758f01a2fce79f5179b5d81f51c79abe84..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Batch System
-
-Applications on an HPC system can not be run on the login node. They have to be submitted to compute
-nodes with dedicated resources for user jobs. Normally a job can be submitted with these data:
-
-* number of CPU cores,
-* requested CPU cores have to belong on one node (OpenMP programs) or can distributed (MPI),
-* memory per process,
-* maximum wall clock time (after reaching this limit the process is killed automatically),
-* files for redirection of output and error messages,
-* executable and command line parameters.
-
-*Comment:* Please keep in mind that for a large runtime a computation may not reach its end. Try to
-create shorter runs (4...8 hours) and use checkpointing. Here is an extreme example from literature
-for the waste of large computing resources due to missing checkpoints:
-
->Earth was a supercomputer constructed to find the question to the answer to the Life, the Universe,
->and Everything by a race of hyper-intelligent pan-dimensional beings. Unfortunately 10 million years
->later, and five minutes before the program had run to completion, the Earth was destroyed by
->Vogons.
-
-(Adams, D. The Hitchhikers Guide Through the Galaxy)
-
-## Slurm
-
-The HRSK-II systems are operated with the batch system [Slurm](https://slurm.schedmd.com). Just
-specify the resources you need in terms of cores, memory, and time and your job will be placed on
-the system.
-
-### Job Submission
-
-Job submission can be done with the command: `srun [options] <command>`
-
-However, using `srun` directly on the shell will be blocking and launch an interactive job. Apart
-from short test runs, it is recommended to launch your jobs into the background by using batch jobs.
-For that, you can conveniently put the parameters directly in a job file which you can submit using
-`sbatch [options] <job file>`
-
-Some options of srun/sbatch are:
-
-| Slurm Option | Description |
-|------------|-------|
-| `-n <N>` or `--ntasks <N>`         | set a number of tasks to N(default=1). This determines how many processes will be spawned by srun (for MPI jobs). |
-| `-N <N>` or `--nodes <N>`          | set number of nodes that will be part of a job, on each node there will be --ntasks-per-node processes started, if the option --ntasks-per-node is not given, 1 process per node will be started |
-| `--ntasks-per-node <N>`            | how many tasks per allocated node to start, as stated in the line before |
-| `-c <N>` or `--cpus-per-task <N>`  | this option is needed for multithreaded (e.g. OpenMP) jobs, it tells SLURM to allocate N cores per task allocated; typically N should be equal to the number of threads you program spawns, e.g. it should be set to the same number as OMP_NUM_THREADS |
-| `-p <name>` or `--partition <name>`| select the type of nodes where you want to execute your job, on Taurus we currently have haswell, smp, sandy, west, ml and gpu available |
-| `--mem-per-cpu <name>`             | specify the memory need per allocated CPU in MB |
-| `--time <HH:MM:SS>`                | specify the maximum runtime of your job, if you just put a single number in, it will be interpreted as minutes |
-| `--mail-user <your email>`         | tell the batch system your email address to get updates about the status of the jobs |
-| `--mail-type ALL`                  | specify for what type of events you want to get a mail; valid options beside ALL are: BEGIN, END, FAIL, REQUEUE |
-| `-J <name> or --job-name <name>`   | give your job a name which is shown in the queue, the name will also be included in job emails (but cut after 24 chars within emails) |
-| `--exclusive`                      | tell SLURM that only your job is allowed on the nodes allocated to this job; please be aware that you will be charged for all CPUs/cores on the node |
-| `-A <project>`                     | Charge resources used by this job to the specified project, useful if a user belongs to multiple projects. |
-| `-o <filename>` or `--output <filename>` | specify a file name that will be used to store all normal output (stdout), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stdout goes to "slurm-%j.out" |
-
-<!--NOTE: the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.-->
-<!---e <filename> or --error <filename>-->
-
-<!--specify a file name that will be used to store all error output (stderr), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stderr goes to "slurm-%j.out" as well-->
-
-<!--NOTE: the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.-->
-<!---a or --array 	submit an array job, see the extra section below-->
-<!---w <node1>,<node2>,... 	restrict job to run on specific nodes only-->
-<!---x <node1>,<node2>,... 	exclude specific nodes from job-->
diff --git a/Compendium_attachments/Slurm/hdfview_memory.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hdfview_memory.png
similarity index 100%
rename from Compendium_attachments/Slurm/hdfview_memory.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hdfview_memory.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid.png
new file mode 100644
index 0000000000000000000000000000000000000000..116e03dd0785492be3f896cda69959a025f5ac49
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_block_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_block_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..4c196df91b2fe410609a8e76505eca95f283ce29
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_block_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_cyclic_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_cyclic_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..dfccaf451553c710fcddd648ae9721866668f9e8
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_cyclic_block.png differ
diff --git a/Compendium_attachments/HardwareTaurus/i2000.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i2000.png
similarity index 100%
rename from Compendium_attachments/HardwareTaurus/i2000.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i2000.png
diff --git a/Compendium_attachments/HardwareTaurus/i4000.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i4000.png
similarity index 100%
rename from Compendium_attachments/HardwareTaurus/i4000.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i4000.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi.png
new file mode 100644
index 0000000000000000000000000000000000000000..82087209059e535401724c493fff74d743da58e4
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_block_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_block_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..0c6e9bbfa0e7f0614ede7e89f292e2d5f1a74316
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_block_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..dab17e83ed4930b253818e15bc42ef1b1b2c9918
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_cyclic.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_cyclic.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b9361dd1f0a2b76b063ad64652844c425aacbdf
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_cyclic.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_default.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_default.png
new file mode 100644
index 0000000000000000000000000000000000000000..82087209059e535401724c493fff74d743da58e4
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_default.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..be12c78d1a85297cd60161a1808462941def94fb
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_cyclic.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_cyclic.png
new file mode 100644
index 0000000000000000000000000000000000000000..08f2a90100ed88175f7ef6fa3d867a70ad0880d7
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_cyclic.png differ
diff --git a/Compendium_attachments/NvmeStorage/nvme.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/nvme.png
similarity index 100%
rename from Compendium_attachments/NvmeStorage/nvme.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/nvme.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/openmp.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/openmp.png
new file mode 100644
index 0000000000000000000000000000000000000000..0cf284368f10bdd8c4a3b4c97530151e0142aad6
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/openmp.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/part.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/part.png
new file mode 100644
index 0000000000000000000000000000000000000000..e2b5418f622d3fa32ba2c6ce44889e84e4d1cddd
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/part.png differ
diff --git a/Compendium_attachments/HardwareTaurus/smp2.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/smp2.png
similarity index 100%
rename from Compendium_attachments/HardwareTaurus/smp2.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/smp2.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
index 40a0d6af3e6f62fe69a76fc01e806b63fa8dc9df..78b8175ccbba3fb0eee8be7b946ebe2bee31219b 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
@@ -1,6 +1,5 @@
 # NVMe Storage
 
-**TODO image nvme.png**
 90 NVMe storage nodes, each with
 
 -   8x Intel NVMe Datacenter SSD P4610, 3.2 TB
@@ -11,3 +10,6 @@
 -   64 GB RAM
 
 NVMe cards can saturate the HCAs
+
+![Configuration](misc/nvme.png)
+{: align=center}
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
index 14266272720761b66b817d9805c28f1079397e73..5240db14cb506d8719b9e46fe3feb89aede4a95f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
@@ -1,57 +1,53 @@
-# Jobs and Resources
+# HPC Resources and Jobs
 
-When log in to ZIH systems, you are placed on a *login node* **TODO** link to login nodes section
-where you can [manage data life cycle](../data_lifecycle/overview.md),
-[setup experiments](../data_lifecycle/experiments.md), execute short tests and compile moderate
-projects. The login nodes cannot be used for real experiments and computations. Long and extensive
-computational work and experiments have to be encapsulated into so called **jobs** and scheduled to
-the compute nodes.
+ZIH operates a high performance computing (HPC) system with more than 60,000 cores, 720 GPUs, and a
+flexible storage hierarchy with about 16 PB total capacity. The HPC system provides an optimal
+research environment especially in the area of data analytics and machine learning as well as for
+processing extremely large data sets. Moreover, it is a perfect platform for highly scalable,
+data-intensive, and compute-intensive applications.
 
-<!--Login nodes which are using for login can not be used for your computations.-->
-<!--To run software, do calculations and experiments, or compile your code compute nodes have to be used.-->
+With shared [login nodes](#login-nodes) and [filesystems](../data_lifecycle/file_systems.md), our
+HPC system enables users to easily switch between [the components](hardware_overview.md), each
+specialized for different application scenarios.
 
-ZIH uses the batch system Slurm for resource management and job scheduling.
-<!--[HPC Introduction]**todo link** is a good resource to get started with it.-->
-
-??? note "Batch Job"
-
-    In order to allow the batch scheduler an efficient job placement it needs these
-    specifications:
-
-    * **requirements:** cores, memory per core, (nodes), additional resources (GPU),
-    * maximum run-time,
-    * HPC project (normally use primary group which gives id),
-    * who gets an email on which occasion,
-
-    The runtime environment (see [here](../software/overview.md)) as well as the executable and
-    certain command-line arguments have to be specified to run the computational work.
-
-??? note "Batch System"
-
-    The batch system is the central organ of every HPC system users interact with its compute
-    resources. The batch system finds an adequate compute system (partition/island) for your compute
-    jobs. It organizes the queueing and messaging, if all resources are in use. If resources are
-    available for your job, the batch system allocates and connects to these resources, transfers
-    run-time environment, and starts the job.
+When logging in to ZIH systems, you are placed on a login node where you can
+[manage data life cycle](../data_lifecycle/overview.md),
+[set up experiments](../data_lifecycle/experiments.md),
+execute short tests, and compile moderate projects. The login nodes cannot be used for real
+experiments and computations. Long and extensive computational work and experiments have to be
+encapsulated into so-called **jobs** and scheduled to the compute nodes.
 
 Follow the page [Slurm](slurm.md) for comprehensive documentation using the batch system at
 ZIH systems. There is also a page with extensive set of [Slurm examples](slurm_examples.md).
 
 ## Selection of Suitable Hardware
 
-### What do I need a CPU or GPU?
+### What do I need, a CPU or GPU?
+
+If an application is designed to run on GPUs, this is normally stated unmistakably, since the
+effort of adapting existing software to make use of a GPU can be overwhelming.
+And even if the software is listed in [NVIDIA's list of GPU-Accelerated Applications](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/gpu-applications-catalog.pdf),
+only certain parts of the computations may run on the GPU.
+
+To answer the question: The easiest way is to compare a typical computation
+on a normal node and on a GPU node. (Make sure to eliminate the influence of different
+CPU types and different number of cores.) If the execution time with GPU is better
+by a significant factor then this might be the obvious choice.
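+
+A hedged sketch of such a comparison, assuming a hypothetical executable `./my_application` and
+using the partitions `haswell` and `gpu2` only as examples:
+
+```console
+marie@login$ # run on a conventional CPU node
+marie@login$ srun --partition=haswell --ntasks=1 --cpus-per-task=24 --time=01:00:00 ./my_application
+marie@login$ # same computation on a GPU node, requesting one GPU
+marie@login$ srun --partition=gpu2 --gres=gpu:1 --ntasks=1 --cpus-per-task=6 --time=01:00:00 ./my_application
+marie@login$ # afterwards, compare the elapsed times of both runs
+marie@login$ sacct -X --format=JobID,JobName,Partition,Elapsed
+```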
+
+??? note "Difference in Architecture"
 
-The main difference between CPU and GPU architecture is that a CPU is designed to handle a wide
-range of tasks quickly, but are limited in the concurrency of tasks that can be running. While GPUs
-can process data much faster than a CPU due to massive parallelism (but the amount of data which
-a single GPU's core can handle is small), GPUs are not as versatile as CPUs.
+    The main difference between CPU and GPU architecture is that a CPU is designed to handle a wide
+    range of tasks quickly, but is limited in the number of tasks that can run concurrently.
+    While GPUs can process data much faster than a CPU due to massive parallelism
+    (although the amount of data a single GPU core can handle is small), GPUs are not as
+    versatile as CPUs.
 
 ### Available Hardware
 
 ZIH provides a broad variety of compute resources ranging from normal server CPUs of different
-manufactures, to large shared memory nodes, GPU-assisted nodes up to highly specialized resources for
+manufacturers, large shared memory nodes, GPU-assisted nodes up to highly specialized resources for
 [Machine Learning](../software/machine_learning.md) and AI.
-The page [Hardware Taurus](hardware_taurus.md) holds a comprehensive overview.
+The page [ZIH Systems](hardware_overview.md) holds a comprehensive overview.
 
 The desired hardware can be specified by the partition `-p, --partition` flag in Slurm.
 The majority of the basic tasks can be executed on the conventional nodes like a Haswell. Slurm will
@@ -60,19 +56,19 @@ automatically select a suitable partition depending on your memory and GPU requi
 ### Parallel Jobs
 
 **MPI jobs:** For MPI jobs, one core is typically allocated per task. Several nodes could be allocated
-if it is necessary. Slurm will automatically find suitable hardware. Normal compute nodes are
-perfect for this task.
+if it is necessary. The batch system [Slurm](slurm.md) will automatically find suitable hardware.
+Normal compute nodes are perfect for this task.
 
 **OpenMP jobs:** SMP-parallel applications can only run **within a node**, so it is necessary to
-include the options `-N 1` and `-n 1`. Using `--cpus-per-task N` Slurm will start one task and you
-will have N CPUs. The maximum number of processors for an SMP-parallel program is 896 on Taurus
-([SMP]**todo link** island).
+include the [batch system](slurm.md) options `-N 1` and `-n 1`. With `--cpus-per-task=N`, Slurm will
+start one task and you will have `N` CPUs available for your job. The maximum number of processors
+for an SMP-parallel program is 896 on partition `julia`, see [partitions](partitions_and_limits.md).
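+
+A minimal sketch of such an SMP job file (binary name and values are placeholders, not
+recommendations):
+
+```bash
+#!/bin/bash
+#SBATCH --nodes=1                             # OpenMP programs must stay within one node
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=8
+#SBATCH --time=01:00:00
+
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # use all requested CPUs for threading
+srun ./my_openmp_application
+```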
 
-**GPUs** partitions are best suited for **repetitive** and **highly-parallel** computing tasks. If
-you have a task with potential [data parallelism]**todo link** most likely that you need the GPUs.
-Beyond video rendering, GPUs excel in tasks such as machine learning, financial simulations and risk
-modeling. Use the gpu2 and ml partition only if you need GPUs! Otherwise using the x86 partitions
-(e.g Haswell) most likely would be more beneficial.
+Partitions with GPUs are best suited for **repetitive** and **highly-parallel** computing tasks. If
+you have a task with potential [data parallelism](../software/gpu_programming.md), you most likely
+need the GPUs. Beyond video rendering, GPUs excel in tasks such as machine learning, financial
+simulations, and risk modeling. Use the partitions `gpu2` and `ml` only if you need GPUs! Otherwise,
+using the x86-based partitions most likely would be more beneficial.
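+
+As a rough sketch only (application name is a placeholder), a single GPU on partition `ml` could be
+requested in a job file like this:
+
+```bash
+#!/bin/bash
+#SBATCH --partition=ml                        # GPU partition; use only if you really need a GPU
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=6
+#SBATCH --gres=gpu:1                          # request one GPU of the node
+#SBATCH --time=02:00:00
+
+srun ./my_gpu_application
+```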
 
 **Interactive jobs:** Slurm can forward your X11 credentials to the first node (or even all) for a job
 with the `--x11` option. To use an interactive job with X11 forwarding, you have to specify the `-X`
 flag for the ssh login.
@@ -91,5 +87,31 @@ projects. The quality of this work influence on the computations. However, pre-
 in many cases can be done completely or partially on a local system and then transferred to ZIH
 systems. Please use ZIH systems primarily for the computation-intensive tasks.
 
-<!--Useful links: [Batch Systems]**todo link**, [Hardware Taurus]**todo link**, [HPC-DA]**todo link**,-->
-<!--[Slurm]**todo link**-->
+## Exclusive Reservation of Hardware
+
+If you need parts of our machines exclusively for some special reason, e.g., for benchmarking or a
+project or paper deadline, we offer the opportunity to request and reserve these parts for your
+project.
+
+Please send your request **7 working days** before the reservation should start (as that's our
+maximum time limit for jobs and it is therefore not guaranteed that resources are available on
+shorter notice) with the following information to the
+[HPC support](mailto:hpcsupport@zih.tu-dresden.de?subject=Request%20for%20a%20exclusive%20reservation%20of%20hardware&body=Dear%20HPC%20support%2C%0A%0AI%20have%20the%20following%20request%20for%20a%20exclusive%20reservation%20of%20hardware%3A%0A%0AProject%3A%0AReservation%20owner%3A%0ASystem%3A%0AHardware%20requirements%3A%0ATime%20window%3A%20%3C%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%20-%20%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%3E%0AReason%3A):
+
+- `Project:` *Which project will be credited for the reservation?*
+- `Reservation owner:` *Who should be able to run jobs on the
+  reservation? I.e., name of an individual user or a group of users
+  within the specified project.*
+- `System:` *Which machine should be used?*
+- `Hardware requirements:` *How many nodes and cores do you need? Do
+  you have special requirements, e.g., minimum on main memory,
+  equipped with a graphic card, special placement within the network
+  topology?*
+- `Time window:` *Begin and end of the reservation in the form
+  `year-month-dayThour:minute:second`, e.g., `2020-05-21T09:00:00`*
+- `Reason:` *Reason for the reservation.*
+
+!!! hint
+
+    Please note that your project's CPU hour budget will be charged for the reserved hardware even
+    if you don't use it.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
new file mode 100644
index 0000000000000000000000000000000000000000..edf5bae8582cff37ba5dca68d70c70a35438f341
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
@@ -0,0 +1,78 @@
+# Partitions, Memory and Run Time Limits
+
+There is no such thing as a free lunch at ZIH systems. Since compute nodes are operated in
+multi-user mode by default, jobs of several users can run at the same time on the very same node,
+sharing resources like memory (but not CPU). On the other hand, a higher throughput can be achieved
+by smaller jobs. Thus, restrictions w.r.t. [memory](#memory-limits) and
+[runtime limits](#runtime-limits) have to be respected when submitting jobs.
+
+## Runtime Limits
+
+!!! note "Runtime limits are enforced."
+
+    This means a job will be canceled as soon as it exceeds its requested limit. Currently, the
+    maximum run time is 7 days.
+
+Shorter jobs come with multiple advantages:
+
+- lower risk of loss of computing time,
+- shorter waiting time for scheduling,
+- higher job fluctuation; thus, jobs with high priorities may start faster.
+
+To bring down the percentage of long-running jobs, we restrict the number of cores used by jobs
+longer than 2 days to approximately 50%, and by jobs longer than 24 hours to 75% of the total number
+of cores. (These numbers are subject to change.) As a best practice, we advise a run time of about
+8 hours.
+
+!!! hint "Please always try to make a good estimation of your needed time limit."
+
+    For this, you can use a command line like this to compare the requested timelimit with the
+    elapsed time for your completed jobs that started after a given date:
+
+    ```console
+    marie@login$ sacct -X -S 2021-01-01 -E now --format=start,JobID,jobname,elapsed,timelimit -s COMPLETED
+    ```
+
+Instead of running one long job, you should split it up into a chain job. Even applications that are
+not capable of checkpoint/restart can be adapted. Please refer to the section
+[Checkpoint/Restart](../jobs_and_resources/checkpoint_restart.md) for further documentation.
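+
+A hedged sketch of such a chain (the job file name is a placeholder) using Slurm job dependencies:
+
+```console
+marie@login$ first=$(sbatch --parsable my_job_file.sh)
+marie@login$ second=$(sbatch --parsable --dependency=afterok:${first} my_job_file.sh)
+marie@login$ sbatch --dependency=afterok:${second} my_job_file.sh
+```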
+
+![Partitions](misc/part.png)
+{: align="center"}
+
+## Memory Limits
+
+!!! note "Memory limits are enforced."
+
+    This means that jobs which exceed their per-node memory limit will be killed automatically by
+    the batch system.
+
+Memory requirements for your job can be specified via the `sbatch/srun` parameters:
+
+`--mem-per-cpu=<MB>` or `--mem=<MB>` (which is "memory per node"). The **default limit** is quite
+low at **300 MB** per CPU.
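+
+For illustration only (values and binary name are placeholders, not recommendations), the two
+variants in a job file:
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=4
+#SBATCH --mem-per-cpu=1972    # either: memory per allocated CPU in MB
+##SBATCH --mem=8000           # or: memory per node in MB (do not combine with --mem-per-cpu)
+#SBATCH --time=00:30:00
+
+srun ./my_application
+```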
+
+ZIH systems comprise different sets of nodes with different amounts of installed memory, which
+affect where your job may be run. To achieve the shortest possible waiting time for your jobs, you
+should be aware of the limits shown in the following table.
+
+??? hint "Partitions and memory limits"
+
+    | Partition          | Nodes                                    | # Nodes | Cores per Node  | MB per Core | MB per Node | GPUs per Node     |
+    |:-------------------|:-----------------------------------------|:--------|:----------------|:------------|:------------|:------------------|
+    | `haswell64`        | `taurusi[4001-4104,5001-5612,6001-6612]` | `1328`  | `24`            | `2541`       | `61000`    | `-`               |
+    | `haswell128`       | `taurusi[4105-4188]`                     | `84`    | `24`            | `5250`       | `126000`   | `-`               |
+    | `haswell256`       | `taurusi[4189-4232]`                     | `44`    | `24`            | `10583`      | `254000`   | `-`               |
+    | `broadwell`        | `taurusi[4233-4264]`                     | `32`    | `28`            | `2214`       | `62000`    | `-`               |
+    | `smp2`             | `taurussmp[3-7]`                         | `5`     | `56`            | `36500`      | `2044000`  | `-`               |
+    | `gpu2`             | `taurusi[2045-2106]`                     | `62`    | `24`            | `2583`       | `62000`    | `4 (2 dual GPUs)` |
+    | `gpu2-interactive` | `taurusi[2045-2108]`                     | `64`    | `24`            | `2583`       | `62000`    | `4 (2 dual GPUs)` |
+    | `hpdlf`            | `taurusa[3-16]`                          | `14`    | `12`            | `7916`       | `95000`    | `3`               |
+    | `ml`               | `taurusml[1-32]`                         | `32`    | `44 (HT: 176)`  | `1443*`      | `254000`   | `6`               |
+    | `romeo`            | `taurusi[7001-7192]`                     | `192`   | `128 (HT: 256)` | `1972*`      | `505000`   | `-`               |
+    | `julia`            | `taurussmp8`                             | `1`     | `896`           | `27343*`     | `49000000` | `-`               |
+
+!!! note
+
+    The ML nodes have 4-way SMT, so for every physical core allocated (e.g., with
+    `SLURM_HINT=nomultithread`), you will always get 4*1443 MB because the memory of the other
+    threads is allocated implicitly, too.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
index a6cdfba8bd47659bc3a14473cad74c10b73089d0..57ab511938f3eb515b9e38ca831e91cede692418 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
@@ -2,50 +2,48 @@
 
 ## Hardware
 
-- Slurm partiton: romeo
-- Module architecture: rome
-- 192 nodes taurusi[7001-7192], each:
-    - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, MultiThreading
+- Slurm partition: `romeo`
+- Module architecture: `rome`
+- 192 nodes `taurusi[7001-7192]`, each:
+    - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, Simultaneous Multithreading (SMT)
     - 512 GB RAM
-    - 200 GB SSD disk mounted on /tmp
+    - 200 GB SSD disk mounted on `/tmp`
 
 ## Usage
 
-There is a total of 128 physical cores in each
-node. SMT is also active, so in total, 256 logical cores are available
-per node.
+There is a total of 128 physical cores in each node. SMT is also active, so in total, 256 logical
+cores are available per node.
 
 !!! note
-    Multithreading is disabled per default in a job. To make use of it
-    include the Slurm parameter `--hint=multithread` in your job script
-    or command line, or set
-    the environment variable `SLURM_HINT=multithread` before job submission.
 
-Each node brings 512 GB of main memory, so you can request roughly
-1972MB per logical core (using --mem-per-cpu). Note that you will always
-get the memory for the logical core sibling too, even if you do not
-intend to use SMT.
+    Multithreading is disabled per default in a job. To make use of it include the Slurm parameter
+    `--hint=multithread` in your job script or command line, or set the environment variable
+    `SLURM_HINT=multithread` before job submission.
+
+Each node brings 512 GB of main memory, so you can request roughly 1972 MB per logical core (using
+`--mem-per-cpu`). Note that you will always get the memory for the logical core sibling too, even if
+you do not intend to use SMT.
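+
+As a non-authoritative sketch, an interactive allocation that makes use of SMT and the per-core
+memory mentioned above could look like:
+
+```console
+marie@login$ srun --partition=romeo --ntasks=1 --cpus-per-task=8 --hint=multithread --mem-per-cpu=1972 --time=01:00:00 --pty bash
+```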
 
 !!! note
-    If you are running a job here with only ONE process (maybe
-    multiple cores), please explicitly set the option `-n 1` !
 
-Be aware that software built with Intel compilers and `-x*` optimization
-flags will not run on those AMD processors! That's why most older
-modules built with intel toolchains are not available on **romeo**.
+    If you are running a job here with only ONE process (maybe multiple cores), please explicitly
+    set the option `-n 1`!
+
+Be aware that software built with Intel compilers and `-x*` optimization flags will not run on those
+AMD processors! That's why most older modules built with Intel toolchains are not available on
+partition `romeo`.
 
-We provide the script: `ml_arch_avail` that you can use to check if a
-certain module is available on rome architecture.
+We provide the script `ml_arch_avail` that can be used to check if a certain module is available on
+`rome` architecture.
 
 ## Example, running CP2K on Rome
 
 First, check what CP2K modules are available in general:
 `module spider CP2K` or `module avail CP2K`.
 
-You will see that there are several different CP2K versions avail, built
-with different toolchains. Now let's assume you have to decided you want
-to run CP2K version 6 at least, so to check if those modules are built
-for rome, use:
+You will see that there are several different CP2K versions available, built with different
+toolchains. Now let's assume you have decided to run at least CP2K version 6, so to check if those
+modules are built for the `rome` architecture, use:
 
 ```console
 marie@login$ ml_arch_avail CP2K/6
@@ -55,13 +53,11 @@ CP2K/6.1-intel-2018a: sandy, haswell
 CP2K/6.1-intel-2018a-spglib: haswell
 ```
 
-There you will see that only the modules built with **foss** toolchain
-are available on architecture "rome", not the ones built with **intel**.
-So you can load e.g. `ml CP2K/6.1-foss-2019a`.
+There you will see that only the modules built with toolchain `foss` are available on architecture
+`rome`, not the ones built with `intel`. So you can load, e.g. `ml CP2K/6.1-foss-2019a`.
 
-Then, when writing your batch script, you have to specify the **romeo**
-partition. Also, if e.g. you wanted to use an entire ROME node (no SMT)
-and fill it with MPI ranks, it could look like this:
+Then, when writing your batch script, you have to specify the partition `romeo`. Also, if, e.g., you
+wanted to use an entire Rome node (no SMT) and fill it with MPI ranks, it could look like this:
 
 ```bash
 #!/bin/bash
@@ -73,27 +69,26 @@ and fill it with MPI ranks, it could look like this:
 srun cp2k.popt input.inp
 ```
 
-## Using the Intel toolchain on Rome
+## Using the Intel Toolchain on Rome
 
-Currently, we have only newer toolchains starting at `intel/2019b`
-installed for the Rome nodes. Even though they have AMD CPUs, you can
-still use the Intel compilers on there and they don't even create
-bad-performing code. When using the MKL up to version 2019, though,
-you should set the following environment variable to make sure that AVX2
-is used:
+Currently, we have only newer toolchains starting at `intel/2019b` installed for the Rome nodes.
+Even though they have AMD CPUs, you can still use the Intel compilers on there and they don't even
+create bad-performing code. When using the Intel Math Kernel Library (MKL) up to version 2019,
+though, you should set the following environment variable to make sure that AVX2 is used:
 
 ```bash
 export MKL_DEBUG_CPU_TYPE=5
 ```
 
-Without it, the MKL does a CPUID check and disables AVX2/FMA on
-non-Intel CPUs, leading to much worse performance.
+Without it, the MKL does a CPUID check and disables AVX2/FMA on non-Intel CPUs, leading to much
+worse performance.
+
 !!! note
-    In version 2020, Intel has removed this environment variable and added separate Zen
-    codepaths to the library. However, they are still incomplete and do not
-    cover every BLAS function. Also, the Intel AVX2 codepaths still seem to
-    provide somewhat better performance, so a new workaround would be to
-    overwrite the `mkl_serv_intel_cpu_true` symbol with a custom function:
+
+    In version 2020, Intel has removed this environment variable and added separate Zen codepaths to
+    the library. However, they are still incomplete and do not cover every BLAS function. Also, the
+    Intel AVX2 codepaths still seem to provide somewhat better performance, so a new workaround
+    would be to overwrite the `mkl_serv_intel_cpu_true` symbol with a custom function:
 
 ```c
 int mkl_serv_intel_cpu_true() {
@@ -108,13 +103,11 @@ marie@login$ gcc -shared -fPIC -o libfakeintel.so fakeintel.c
 marie@login$ export LD_PRELOAD=libfakeintel.so
 ```
 
-As for compiler optimization flags, `-xHOST` does not seem to produce
-best-performing code in every case on Rome. You might want to try
-`-mavx2 -fma` instead.
+As for compiler optimization flags, `-xHOST` does not seem to produce best-performing code in every
+case on Rome. You might want to try `-mavx2 -fma` instead.
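+
+For illustration only (source file, output name, and optimization level are placeholders), a
+compile line with these flags could look like:
+
+```console
+marie@login$ icc -O2 -mavx2 -fma -o my_application my_application.c
+```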
 
 ### Intel MPI
 
-We have seen only half the theoretical peak bandwidth via Infiniband
-between two nodes, whereas OpenMPI got close to the peak bandwidth, so
-you might want to avoid using Intel MPI on romeo if your application
-heavily relies on MPI communication until this issue is resolved.
+We have seen only half the theoretical peak bandwidth via InfiniBand between two nodes, whereas
+OpenMPI got close to the peak bandwidth, so you might want to avoid using Intel MPI on partition
+`romeo` if your application heavily relies on MPI communication until this issue is resolved.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
index 04624da4e55fe3a32e3d41842622b38b3e176315..c09260cf8d814a6a6835f981a25d1e8700c71df2 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
@@ -1,24 +1,23 @@
-# Large shared-memory node - HPE Superdome Flex
+# Large Shared-Memory Node - HPE Superdome Flex
 
--   Hostname: taurussmp8
--   Access to all shared file systems
--   Slurm partition `julia`
--   32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
--   48 TB RAM (usable: 47 TB - one TB is used for cache coherence
-    protocols)
--   370 TB of fast NVME storage available at `/nvme/<projectname>`
+- Hostname: `taurussmp8`
+- Access to all shared filesystems
+- Slurm partition `julia`
+- 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
+- 48 TB RAM (usable: 47 TB - one TB is used for cache coherence protocols)
+- 370 TB of fast NVME storage available at `/nvme/<projectname>`
 
-## Local temporary NVMe storage
+## Local Temporary NVMe Storage
 
 There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
-of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of 100
-GB per project on this NVMe storage.This is
+of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
+100 GB per project on this NVMe storage.
 
 With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
 speed up their computations, a project's quota can be increased or dedicated volumes of up to the
 full capacity can be set up.
 
-## Hints for usage
+## Hints for Usage
 
 - granularity should be a socket (28 cores)
 - can be used for OpenMP applications with large memory demands
@@ -35,5 +34,5 @@ full capacity can be set up.
   this unique system (large shared memory + NVMe storage) can speed up
   their computations, we will gladly increase this limit, for selected
   projects.
-- Test users might have to clean-up their /nvme storage within 4 weeks
+- Test users might have to clean-up their `/nvme` storage within 4 weeks
   to make room for large projects.
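+
+A minimal, non-binding sketch of a job file that follows the socket granularity hint above (memory
+value and binary name are placeholders):
+
+```bash
+#!/bin/bash
+#SBATCH --partition=julia
+#SBATCH --nodes=1
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=28                    # one socket = 28 cores
+#SBATCH --mem=180000                          # example value in MB for a large-memory run
+#SBATCH --time=08:00:00
+
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+srun ./my_large_memory_application
+```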
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
index 0c4d3d92a25de40aa7ec887feeb08086081a5af3..d7c3530fad85643c4f814a02c6e3250df427af38 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
@@ -1,589 +1,405 @@
-# Slurm
+# Batch System Slurm
 
-The HRSK-II systems are operated with the batch system Slurm. Just specify the resources you need
-in terms of cores, memory, and time and your job will be placed on the system.
+When logging in to ZIH systems, you are placed on a login node. There you can manage your
+[data life cycle](../data_lifecycle/overview.md),
+[set up experiments](../data_lifecycle/experiments.md), and
+edit and prepare jobs. The login nodes are not suited for computational work! From the login nodes,
+you can interact with the batch system, e.g., submit and monitor your jobs.
 
-## Job Submission
+??? note "Batch System"
 
-Job submission can be done with the command: `srun [options] <command>`
-
-However, using srun directly on the shell will be blocking and launch an interactive job. Apart from
-short test runs, it is recommended to launch your jobs into the background by using batch jobs. For
-that, you can conveniently put the parameters directly in a job file which you can submit using
-`sbatch [options] <job file>`
-
-Some options of `srun/sbatch` are:
-
-| slurm option                           | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-|:---------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| -n \<N> or --ntasks \<N>               | set a number of tasks to N(default=1). This determines how many processes will be spawned by srun (for MPI jobs).                                                                                                                                                                                                                                                                                                                                          |
-| -N \<N> or --nodes \<N>                | set number of nodes that will be part of a job, on each node there will be --ntasks-per-node processes started, if the option --ntasks-per-node is not given, 1 process per node will be started                                                                                                                                                                                                                                                           |
-| --ntasks-per-node \<N>                 | how many tasks per allocated node to start, as stated in the line before                                                                                                                                                                                                                                                                                                                                                                                   |
-| -c \<N> or --cpus-per-task \<N>        | this option is needed for multithreaded (e.g. OpenMP) jobs, it tells Slurm to allocate N cores per task allocated; typically N should be equal to the number of threads you program spawns, e.g. it should be set to the same number as OMP_NUM_THREADS                                                                                                                                                                                                    |
-| -p \<name> or --partition \<name>      | select the type of nodes where you want to execute your job, on Taurus we currently have haswell, `smp`, `sandy`, `west`, ml and `gpu` available                                                                                                                                                                                                                                                                                                           |
-| --mem-per-cpu \<name>                  | specify the memory need per allocated CPU in MB                                                                                                                                                                                                                                                                                                                                                                                                            |
-| --time \<HH:MM:SS>                     | specify the maximum runtime of your job, if you just put a single number in, it will be interpreted as minutes                                                                                                                                                                                                                                                                                                                                             |
-| --mail-user \<your email>              | tell the batch system your email address to get updates about the status of the jobs                                                                                                                                                                                                                                                                                                                                                                       |
-| --mail-type ALL                        | specify for what type of events you want to get a mail; valid options beside ALL are: BEGIN, END, FAIL, REQUEUE                                                                                                                                                                                                                                                                                                                                            |
-| -J \<name> or --job-name \<name>       | give your job a name which is shown in the queue, the name will also be included in job emails (but cut after 24 chars within emails)                                                                                                                                                                                                                                                                                                                      |
-| --no-requeue                           | At node failure, jobs are requeued automatically per default. Use this flag to disable requeueing.                                                                                                                                                                                                                                                                                                                                                         |
-| --exclusive                            | tell Slurm that only your job is allowed on the nodes allocated to this job; please be aware that you will be charged for all CPUs/cores on the node                                                                                                                                                                                                                                                                                                       |
-| -A \<project>                          | Charge resources used by this job to the specified project, useful if a user belongs to multiple projects.                                                                                                                                                                                                                                                                                                                                                 |
-| -o \<filename> or --output \<filename> | \<p>specify a file name that will be used to store all normal output (stdout), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stdout goes to "slurm-%j.out"\</p> \<p>%RED%NOTE:<span class="twiki-macro ENDCOLOR"></span> the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.\</p>        |
-| -e \<filename> or --error \<filename>  | \<p>specify a file name that will be used to store all error output (stderr), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stderr goes to "slurm-%j.out" as well\</p> \<p>%RED%NOTE:<span class="twiki-macro ENDCOLOR"></span> the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.\</p> |
-| -a or --array                          | submit an array job, see the extra section below                                                                                                                                                                                                                                                                                                                                                                                                           |
-| -w \<node1>,\<node2>,...               | restrict job to run on specific nodes only                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| -x \<node1>,\<node2>,...               | exclude specific nodes from job                                                                                                                                                                                                                                                                                                                                                                                                                            |
-
-The following example job file shows how you can make use of sbatch
-
-```Bash
-#!/bin/bash
-#SBATCH --time=01:00:00
-#SBATCH --output=simulation-m-%j.out
-#SBATCH --error=simulation-m-%j.err
-#SBATCH --ntasks=512
-#SBATCH -A myproject
-
-echo Starting Program
-```
+    The batch system is the central organ of every HPC system; users interact with its compute
+    resources through it. The batch system finds an adequate compute system (partition) for your
+    compute jobs. It organizes the queueing and messaging if all resources are in use. If resources
+    are available for your job, the batch system allocates and connects to these resources,
+    transfers the runtime environment, and starts the job.
 
-During runtime, the environment variable SLURM_JOB_ID will be set to the id of your job.
+??? note "Batch Job"
 
-You can also use our [Slurm Batch File Generator]**todo** Slurmgenerator, which could help you create
-basic Slurm job scripts.
+    At HPC systems, computational work and resource requirements are encapsulated into so-called
+    jobs. In order to allow the batch system an efficient job placement it needs these
+    specifications:
 
-Detailed information on [memory limits on Taurus]**todo**
+    * requirements: number of nodes and cores, memory per core, additional resources (GPU)
+    * maximum run-time
+    * HPC project for accounting
+    * who gets an email on which occasion
 
-### Interactive Jobs
+    Moreover, the [runtime environment](../software/overview.md) as well as the executable and
+    certain command-line arguments have to be specified to run the computational work.
 
-Interactive activities like editing, compiling etc. are normally limited to the login nodes. For
-longer interactive sessions you can allocate cores on the compute node with the command "salloc". It
-takes the same options like `sbatch` to specify the required resources.
+ZIH uses the batch system Slurm for resource management and job scheduling.
+Just specify the resources you need in terms
+of cores, memory, and time, and Slurm will place your job on the system.
 
-The difference to LSF is, that `salloc` returns a new shell on the node, where you submitted the
-job. You need to use the command `srun` in front of the following commands to have these commands
-executed on the allocated resources. If you allocate more than one task, please be aware that srun
-will run the command on each allocated task!
+This page provides a brief overview of
 
-An example of an interactive session looks like:
+* [Slurm options](#options) to specify resource requirements,
+* how to submit [interactive](#interactive-jobs) and [batch jobs](#batch-jobs),
+* how to [write job files](#job-files),
+* how to [manage and control your jobs](#manage-and-control-jobs).
 
-```Shell Session
-tauruslogin3 /home/mark; srun --pty -n 1 -c 4 --time=1:00:00 --mem-per-cpu=1700 bash<br />srun: job 13598400 queued and waiting for resources<br />srun: job 13598400 has been allocated resources
-taurusi1262 /home/mark;   # start interactive work with e.g. 4 cores.
-```
+If you are already familiar with Slurm, you might be more interested in our collection of
+[job examples](slurm_examples.md).
+There are also plenty of external resources regarding Slurm. We recommend these links for detailed
+information:
 
-**Note:** A dedicated partition `interactive` is reserved for short jobs (< 8h) with not more than
-one job per user. Please check the availability of nodes there with `sinfo -p interactive` .
+- [slurm.schedmd.com](https://slurm.schedmd.com/) provides the official documentation comprising
+   manual pages, tutorials, examples, etc.
+- [Comparison with other batch systems](https://www.schedmd.com/slurmdocs/rosetta.html)
 
-### Interactive X11/GUI Jobs
+## Job Submission
 
-Slurm will forward your X11 credentials to the first (or even all) node
-for a job with the (undocumented) --x11 option. For example, an
-interactive session for 1 hour with Matlab using eight cores can be
-started with:
+There are three basic Slurm commands for job submission and execution:
 
-```Shell Session
-module load matlab
-srun --ntasks=1 --cpus-per-task=8 --time=1:00:00 --pty --x11=first matlab
-```
+1. `srun`: Submit a job for execution or initiate job steps in real time.
+1. `sbatch`: Submit a batch script to Slurm for later execution.
+1. `salloc`: Obtain a Slurm job allocation (a set of nodes), execute a command, and then release the
+   allocation when the command is finished.
 
-**Note:** If you are getting the error:
+Using `srun` directly on the shell will be blocking and launch an
+[interactive job](#interactive-jobs). Apart from short test runs, it is recommended to submit your
+jobs to Slurm for later execution by using [batch jobs](#batch-jobs). For that, you can conveniently
+put the parameters directly in a [job file](#job-files) which you can submit using `sbatch [options]
+<job file>`.
 
-```Bash
-srun: error: x11: unable to connect node taurusiXXXX
-```
+During runtime, the environment variable `SLURM_JOB_ID` will be set to the id of your job. The job
+id is unique. The id allows you to [manage and control](#manage-and-control-jobs) your jobs.
 
-that probably means you still have an old host key for the target node in your `\~/.ssh/known_hosts`
-file (e.g. from pre-SCS5). This can be solved either by removing the entry from your known_hosts or
-by simply deleting the known_hosts file altogether if you don't have important other entries in it.
-
-### Requesting an Nvidia K20X / K80 / A100
-
-Slurm will allocate one or many GPUs for your job if requested. Please note that GPUs are only
-available in certain partitions, like `gpu2`, `gpu3` or `gpu2-interactive`. The option
-for sbatch/srun in this case is `--gres=gpu:[NUM_PER_NODE]` (where `NUM_PER_NODE` can be `1`, 2 or
-4, meaning that one, two or four of the GPUs per node will be used for the job). A sample job file
-could look like this
-
-```Bash
-#!/bin/bash
-#SBATCH -A Project1            # account CPU time to Project1
-#SBATCH --nodes=2              # request 2 nodes<br />#SBATCH --mincpus=1            # allocate one task per node...<br />#SBATCH --ntasks=2             # ...which means 2 tasks in total (see note below)
-#SBATCH --cpus-per-task=6      # use 6 threads per task
-#SBATCH --gres=gpu:1           # use 1 GPU per node (i.e. use one GPU per task)
-#SBATCH --time=01:00:00        # run for 1 hour
-srun ./your/cuda/application   # start you application (probably requires MPI to use both nodes)
-```
+## Options
 
-Please be aware that the partitions `gpu`, `gpu1` and `gpu2` can only be used for non-interactive
-jobs which are submitted by `sbatch`.  Interactive jobs (`salloc`, `srun`) will have to use the
-partition `gpu-interactive`. Slurm will automatically select the right partition if the partition
-parameter (-p) is omitted.
+The following table holds the most important options for `srun/sbatch/salloc` to specify resource
+requirements and control communication.
 
-**Note:** Due to an unresolved issue concerning the Slurm job scheduling behavior, it is currently
-not practical to use `--ntasks-per-node` together with GPU jobs.  If you want to use multiple nodes,
-please use the parameters `--ntasks` and `--mincpus` instead. The values of mincpus \* nodes has to
-equal ntasks in this case.
+??? tip "Options Table"
 
-### Limitations of GPU job allocations
+    | Slurm Option               | Description |
+    |:---------------------------|:------------|
+    | `-n, --ntasks=<N>`         | number of (MPI) tasks (default: 1) |
+    | `-N, --nodes=<N>`          | number of nodes; there will be `--ntasks-per-node` processes started on each node |
+    | `--ntasks-per-node=<N>`    | number of tasks per allocated node to start (default: 1) |
+    | `-c, --cpus-per-task=<N>`  | number of CPUs per task; needed for multithreaded (e.g. OpenMP) jobs; typically `N` should be equal to `OMP_NUM_THREADS` |
+    | `-p, --partition=<name>`   | type of nodes where you want to execute your job (refer to [partitions](partitions_and_limits.md)) |
+    | `--mem-per-cpu=<size>`     | memory need per allocated CPU in MB |
+    | `-t, --time=<HH:MM:SS>`    | maximum runtime of the job |
+    | `--mail-user=<your email>` | get updates about the status of the jobs |
+    | `--mail-type=ALL`          | for what type of events you want to get a mail; valid options: `ALL`, `BEGIN`, `END`, `FAIL`, `REQUEUE` |
+    | `-J, --job-name=<name>`    | name of the job shown in the queue and in mails (cut after 24 chars) |
+    | `--no-requeue`             | disable requeueing of the job in case of node failure (default: enabled) |
+    | `--exclusive`              | exclusive usage of compute nodes; you will be charged for all CPUs/cores on the node |
+    | `-A, --account=<project>`  | charge resources used by this job to the specified project |
+    | `-o, --output=<filename>`  | file to save all normal output (stdout) (default: `slurm-%j.out`) |
+    | `-e, --error=<filename>`   | file to save all error output (stderr)  (default: `slurm-%j.out`) |
+    | `-a, --array=<arg>`        | submit an array job ([examples](slurm_examples.md#array-jobs)) |
+    | `-w <node1>,<node2>,...`   | restrict job to run on specific nodes only |
+    | `-x <node1>,<node2>,...`   | exclude specific nodes from job |
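+
+For illustration only (application name and values are placeholders), several of these options
+combined on the command line could look like:
+
+```console
+marie@login$ srun --ntasks=4 --cpus-per-task=2 --mem-per-cpu=1700 --time=00:30:00 ./my_application
+```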
 
-The number of cores per node that are currently allowed to be allocated for GPU jobs is limited
-depending on how many GPUs are being requested.  On the K80 nodes, you may only request up to 6
-cores per requested GPU (8 per on the K20 nodes). This is because we do not wish that GPUs remain
-unusable due to all cores on a node being used by a single job which does not, at the same time,
-request all GPUs.
+!!! note "Output and Error Files"
 
-E.g., if you specify `--gres=gpu:2`, your total number of cores per node (meaning: ntasks \*
-cpus-per-task) may not exceed 12 (on the K80 nodes)
+    When redirecting stdout and stderr into files using `--output=<filename>` and
+    `--error=<filename>`, make sure the target path is writeable on the
+    compute nodes, i.e., it may not point to a read-only mounted
+    [filesystem](../data_lifecycle/overview.md) like `/projects`.
 
-Note that this also has implications for the use of the --exclusive parameter. Since this sets the
-number of allocated cores to 24 (or 16 on the K20X nodes), you also **must** request all four GPUs
-by specifying --gres=gpu:4, otherwise your job will not start. In the case of --exclusive, it won't
-be denied on submission, because this is evaluated in a later scheduling step. Jobs that directly
-request too many cores per GPU will be denied with the error message:
+!!! note "No free lunch"
 
-```Shell Session
-Batch job submission failed: Requested node configuration is not available
-```
+    Runtime and memory limits are enforced. Please refer to the section on [partitions and
+    limits](partitions_and_limits.md) for a detailed overview.
 
-### Parallel Jobs
+### Host List
 
-For submitting parallel jobs, a few rules have to be understood and followed. In general, they
-depend on the type of parallelization and architecture.
-
-#### OpenMP Jobs
-
-An SMP-parallel job can only run within a node, so it is necessary to include the options `-N 1` and
-`-n 1`. The maximum number of processors for an SMP-parallel program is 488 on Venus and 56 on
-taurus (smp island). Using --cpus-per-task N Slurm will start one task and you will have N CPUs
-available for your job. An example job file would look like:
-
-```Bash
-#!/bin/bash
-#SBATCH -J Science1
-#SBATCH --nodes=1
-#SBATCH --tasks-per-node=1
-#SBATCH --cpus-per-task=8
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=08:00:00
+If you want to place your job onto specific nodes, there are two options for doing this. Either use
+`-p, --partition=<name>` to specify a host group aka [partition](partitions_and_limits.md) that fits
+your needs. Or, use `-w, --nodelist=<host1,host2,..>` with a list of hosts that will work for you.
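+
+As a hedged example (node name and application are placeholders), a job restricted to one specific
+node of partition `romeo` could be requested like this:
+
+```console
+marie@login$ srun --partition=romeo --nodelist=taurusi7001 --ntasks=1 --time=00:10:00 ./my_application
+```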
 
-export OMP_NUM_THREADS=8
-./path/to/binary
-```
+## Interactive Jobs
 
-#### MPI Jobs
+Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
+the login nodes. For longer interactive sessions you can allocate cores on the compute node with the
+command `salloc`. It takes the same options as `sbatch` to specify the required resources.
 
-For MPI jobs one typically allocates one core per task that has to be started. **Please note:**
-There are different MPI libraries on Taurus and Venus, so you have to compile the binaries
-specifically for their target.
+`salloc` returns a new shell on the node where you submitted the job. You need to use the command
+`srun` in front of the following commands to have these commands executed on the allocated
+resources. If you allocate more than one task, please be aware that `srun` will run the command on
+each allocated task!
 
-```Bash
-#!/bin/bash
-#SBATCH -J Science1
-#SBATCH --ntasks=864
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=08:00:00
+The syntax for submitting a job is
 
-srun ./path/to/binary
+```console
+marie@login$ srun [options] <command>
 ```
 
-#### Multiple Programs Running Simultaneously in a Job
-
-In this short example, our goal is to run four instances of a program concurrently in a **single**
-batch script. Of course we could also start a batch script four times with sbatch but this is not
-what we want to do here. Please have a look at [Running Multiple GPU Applications Simultaneously in
-a Batch Job] todo Compendium.RunningNxGpuAppsInOneJob in case you intend to run GPU programs
-simultaneously in a **single** job.
-
-```Bash
-#!/bin/bash
-#SBATCH -J PseudoParallelJobs
-#SBATCH --ntasks=4
-#SBATCH --cpus-per-task=1
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=01:00:00 
+An example of an interactive session looks like:
 
-# The following sleep command was reported to fix warnings/errors with srun by users (feel free to uncomment).
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+```console
+marie@login$ srun --pty -n 1 -c 4 --time=1:00:00 --mem-per-cpu=1700 bash
+srun: job 13598400 queued and waiting for resources
+srun: job 13598400 has been allocated resources
+marie@compute$ # Now, you can start interactive work with e.g. 4 cores
+```
 
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+!!! note "Partition `interactive`"
 
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+    A dedicated partition `interactive` is reserved for short jobs (< 8h) with not more than one job
+    per user. Please check the availability of nodes there with `sinfo -p interactive`.
 
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+### Interactive X11/GUI Jobs
 
-echo "Waiting for parallel job steps to complete..."
-wait
-echo "All parallel job steps completed!"
-```
+Slurm will forward your X11 credentials to the first (or even all) node for a job with the
+(undocumented) `--x11` option. For example, an interactive session for one hour with Matlab using
+eight cores can be started with:
 
-### Exclusive Jobs for Benchmarking
-
-Jobs on taurus run, by default, in shared-mode, meaning that multiple jobs can run on the same
-compute nodes. Sometimes, this behaviour is not desired (e.g. for benchmarking purposes), in which
-case it can be turned off by specifying the Slurm parameter: `--exclusive` .
-
-Setting `--exclusive` **only** makes sure that there will be **no other jobs running on your nodes**.
-It does not, however, mean that you automatically get access to all the resources which the node
-might provide without explicitly requesting them, e.g. you still have to request a GPU via the
-generic resources parameter (gres) to run on the GPU partitions, or you still have to request all
-cores of a node if you need them. CPU cores can either to be used for a task (`--ntasks`) or for
-multi-threading within the same task (--cpus-per-task). Since those two options are semantically
-different (e.g., the former will influence how many MPI processes will be spawned by 'srun' whereas
-the latter does not), Slurm cannot determine automatically which of the two you might want to use.
-Since we use cgroups for separation of jobs, your job is not allowed to use more resources than
-requested.*
-
-If you just want to use all available cores in a node, you have to
-specify how Slurm should organize them, like with \<span>"-p haswell -c
-24\</span>" or "\<span>-p haswell --ntasks-per-node=24". \</span>
-
-Here is a short example to ensure that a benchmark is not spoiled by
-other jobs, even if it doesn't use up all resources in the nodes:
-
-```Bash
-#!/bin/bash
-#SBATCH -J Benchmark
-#SBATCH -p haswell
-#SBATCH --nodes=2
-#SBATCH --ntasks-per-node=2
-#SBATCH --cpus-per-task=8
-#SBATCH --exclusive    # ensure that nobody spoils my measurement on 2 x 2 x 8 cores
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=00:10:00
-
-srun ./my_benchmark
+```console
+marie@login$ module load matlab
+marie@login$ srun --ntasks=1 --cpus-per-task=8 --time=1:00:00 --pty --x11=first matlab
 ```
 
-### Array Jobs
-
-Array jobs can be used to create a sequence of jobs that share the same executable and resource
-requirements, but have different input files, to be submitted, controlled, and monitored as a single
-unit. The arguments `-a` or `--array` take an additional parameter that specify the array indices.
-Within the job you can read the environment variables `SLURM_ARRAY_JOB_ID`, which will be set to the
-first job ID of the array, and `SLURM_ARRAY_TASK_ID`, which will be set individually for each step.
-
-Within an array job, you can use %a and %A in addition to %j and %N
-(described above) to make the output file name specific to the job. %A
-will be replaced by the value of SLURM_ARRAY_JOB_ID and %a will be
-replaced by the value of SLURM_ARRAY_TASK_ID.
-
-Here is an example of how an array job can look like:
-
-```Bash
-#!/bin/bash
-#SBATCH -J Science1
-#SBATCH --array 0-9
-#SBATCH -o arraytest-%A_%a.out
-#SBATCH -e arraytest-%A_%a.err
-#SBATCH --ntasks=864
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=08:00:00
-
-echo "Hi, I am step $SLURM_ARRAY_TASK_ID in this array job $SLURM_ARRAY_JOB_ID"
-```
+!!! hint "X11 error"
 
-**Note:** If you submit a large number of jobs doing heavy I/O in the Lustre file systems you should
-limit the number of your simultaneously running job with a second parameter like:
+    If you are getting the error:
 
-```Bash
-#SBATCH --array=1-100000%100
-```
+    ```Bash
+    srun: error: x11: unable to connect node taurusiXXXX
+    ```
 
-For further details please read the Slurm documentation at
-(https://slurm.schedmd.com/sbatch.html)
-
-### Chain Jobs
-
-You can use chain jobs to create dependencies between jobs. This is often the case if a job relies
-on the result of one or more preceding jobs. Chain jobs can also be used if the runtime limit of the
-batch queues is not sufficient for your job. Slurm has an option `-d` or "--dependency" that allows
-to specify that a job is only allowed to start if another job finished.
-
-Here is an example of how a chain job can look like, the example submits 4 jobs (described in a job
-file) that will be executed one after each other with different CPU numbers:
-
-```Bash
-#!/bin/bash
-TASK_NUMBERS="1 2 4 8"
-DEPENDENCY=""
-JOB_FILE="myjob.slurm"
-
-for TASKS in $TASK_NUMBERS ; do
-    JOB_CMD="sbatch --ntasks=$TASKS"
-    if [ -n "$DEPENDENCY" ] ; then
-        JOB_CMD="$JOB_CMD --dependency afterany:$DEPENDENCY"
-    fi
-    JOB_CMD="$JOB_CMD $JOB_FILE"
-    echo -n "Running command: $JOB_CMD  "
-    OUT=`$JOB_CMD`
-    echo "Result: $OUT"
-    DEPENDENCY=`echo $OUT | awk '{print $4}'`
-done
-```
+    that probably means you still have an old host key for the target node in your
+    `~/.ssh/known_hosts` file (e.g. from pre-SCS5). This can be solved either by removing the entry
+    from your `known_hosts` file or by simply deleting the `known_hosts` file altogether if you
+    don't have important other entries in it.
 
-### Binding and Distribution of Tasks
+## Batch Jobs
 
-The Slurm provides several binding strategies to place and bind the tasks and/or threads of your job
-to cores, sockets and nodes. Note: Keep in mind that the distribution method has a direct impact on
-the execution time of your application. The manipulation of the distribution can either speed up or
-slow down your application. More detailed information about the binding can be found
-[here](binding_and_distribution_of_tasks.md).
+Working interactively using `srun` and `salloc` is a good starting point for testing and compiling.
+But, as soon as you leave the testing stage, we highly recommend using batch jobs.
+Batch jobs are encapsulated within [job files](#job-files) and submitted to the batch system using
+`sbatch` for later execution. A job file is basically a script holding the resource requirements,
+environment settings and the commands for executing the application. Using batch jobs and job files
+has multiple advantages:
 
-The default allocation of the tasks/threads for OpenMP, MPI and Hybrid (MPI and OpenMP) are as
-follows.
+* You can reproduce your experiments and work, because all steps are saved in a file.
+* You can easily share your settings and experimental setup with colleagues.
+* Submit your job file to the scheduling system for later execution. In the meantime, you can grab
+  a coffee and proceed with other work (e.g., start writing a paper).
 
-#### OpenMP
+!!! hint "The syntax for submitting a job file to Slurm is"
 
-The illustration below shows the default binding of a pure OpenMP-job on 1 node with 16 cpus on
-which 16 threads are allocated.
+    ```console
+    marie@login$ sbatch [options] <job_file>
+    ```
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=1
-#SBATCH --tasks-per-node=1
-#SBATCH --cpus-per-task=16
+### Job Files
 
-export OMP_NUM_THREADS=16
+Job files have to be written with the following structure.
 
-srun --ntasks 1 --cpus-per-task $OMP_NUM_THREADS ./application
-```
+```bash
+#!/bin/bash                           # Batch script starts with shebang line
 
-\<img alt="" src="data:;base64,..." />
+#SBATCH --ntasks=24                   # All #SBATCH lines have to follow uninterrupted
+#SBATCH --time=01:00:00               # after the shebang line
+#SBATCH --account=<KTR>               # Comments start with # and do not count as interruptions
+#SBATCH --job-name=fancyExp
+#SBATCH --output=simulation-%j.out
+#SBATCH --error=simulation-%j.err
 
-#### MPI
+module purge                          # Set up environment, e.g., clean modules environment
+module load <modules>                 # and load necessary modules
 
-The illustration below shows the default binding of a pure MPI-job. In
-which 32 global ranks are distributed onto 2 nodes with 16 cores each.
-Each rank has 1 core assigned to it.
+srun ./application [options]          # Execute parallel application with srun
+```
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+The following two examples show the basic resource specifications for a pure OpenMP application and
+a pure MPI application, respectively. Within the section [Job Examples](slurm_examples.md) we
+provide a comprehensive collection of job examples.
 
-srun --ntasks 32 ./application
-```
+??? example "Job file OpenMP"
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAw4AAADeCAIAAAAb9sCoAAAABmJLR0QA/wD/AP+gvaeTAAAfBklEQVR4nO3dfXBU1f348bshJEA2ISGbB0gIZAMJxqciIhCktGKxaqs14UEGC9gBJVUjxIo4EwFlpiqMOgydWipazTBNVATbGevQMQQYUMdSEEUNYGIID8kmMewmm2TzeH9/3On+9pvN2T27N9nsJu/XX+Tu/dx77uee8+GTu8tiUFVVAQAAQH/ChnoAAAAAwYtWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQChcT7DBYBiocQAIOaqqDvUQfEC9AkYyPfWKp0oAAABCup4qaULrN0sA+oXuExrqFTDS6K9XPFUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUauX72s58ZDIZPP/3UuSU5OfnDDz+UP8KXX35pNBrl9y8uLs7JyYmKikpOTvZhoABGvMDXq40bN2ZnZ48bNy4tLW3Tpk2dnZ0+DBfDC63SiBYfH//0008H7HQmk2nDhg3btm0L2BkBDBsBrld2u33Pnj2XLl0qLS0tLS3dunVrwE6NYEOrNKKtXbu2srLygw8+cH+ptrZ26dKliYmJqampjz/+eFtbm7b90qVLd911V2xs7A033HDixAnn/s3Nzfn5+ZMnT05ISHjwwQcbGxvdj3nPPfcsW7Zs8uTJg3Q5AIaxANerN954Y8GCBfHx8Tk5OQ8//LBrOEYaWqURzWg0btu27dlnn+3q6urzUl5e3ujRoysrK0+ePHnq1KnCwkJt+9KlS1NTU+vq6v71r3/95S9/ce6/cuVKi8Vy+vTpmpqa8ePHr1mzJmBXAWAkGMJ6dfz48VmzZg3o1SCkqDroPwKG0MKFC7dv397V1TVjxozdu3erqpqUlHTw4EFVVSsqKhRFqa+v1/YsKysbM2ZMT09PRUWFwWBoamrSthcXF0dFRamqWlVVZTAYnPvbbDaDwWC1Wvs9b0lJSVJS0mBfHQZVKK79UBwznIaqXqmqumXLlvT09MbGxkG9QAwe/Ws/PNCtGYJMeHj4Sy+9tG7dulWrVjk3Xr58OSoqKiEhQfvRbDY7HI7GxsbLly/Hx8fHxcVp26dPn679obq62mAwzJ4923mE8ePHX7lyZfz48YG6DgDDX+Dr1QsvvLBv377y8vL4+PjBuioEPVolKPfff/8rr7zy0ksvObekpqa2trY2NDRo1ae6ujoyMtJkMqWkpFit1o6OjsjISEVR6urqtP3T0tIMBsOZM2fojQAMqkDWq82bNx84cODo0aOpqamDdkEIAXxWCYqiKDt37ty1a1dLS4v2Y2Zm5ty5cwsLC+12u8ViKSoqWr16dVhY2IwZM2bOnPnaa68pitLR0bFr1y5t/4yMjMWLF69du7a2tlZRlIaGhv3797ufpaenx+FwaJ8zcDgcHR0dAbo8AMNIYOpVQUHBgQMHDh06ZDKZHA4HXxYwktEqQVEUZc6cOffee6/zn40YDIb9+/e3tbWlp6fPnDnzpptuevXVV7WX3n///bKysltuueWOO+644447nEcoKSmZNGlSTk5OdHT03Llzjx8/7n6WN954Y+zYsatWrbJYLGPHjuWBNgA/BKBeWa3W3bt3X7hwwWw2jx07duzYsdnZ2YG5OgQhg/MTT/4EGwyKoug5AoBQFIprPxTHDEA//Wufp0oAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABC4UM9AAAInKqqqqEeAoAQY1BV1f9gg0FRFD1HABCKQnHta2MGMDLpqVcD8FSJAgQg+JnN5qEeAoCQNABPlQCMTKH1VAkA/KOrVQIAABje+BdwAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrq+gpLvVRoJ/Ps6CebGSBBaXzXCnBwJqFcQ0VOveKoEAAAgNAD/sUlo/WYJefp/02JuDFeh+1s4c3K4ol5BRP/c4KkSAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACA0PBvlb799ttf//rXJpNp3LhxM2bMeOaZZ/w4yIwZMz788EPJnX/yk5+Ulpb2+1JxcXFOTk5UVFRycrIfw8DACqq5sXHjxuzs7HHjxqWlpW3atKmzs9OPwSDUBdWcpF4FlaCaGyOtXg3zVqm3t/eXv/zlpEmTvv7668bGxtLSUrPZPITjMZlMGzZs2LZt2xCOAZpgmxt2u33Pnj2XLl0qLS0tLS3dunXrEA4GQyLY5iT1KngE29wYcfVK1UH/EQbbpUuXFEX59ttv3V+6evXqkiVLEhISUlJSHnvssdbWVm37tWvX8vPz09LSoqOjZ86cWVFRoapqVlbWwYMHtVcXLly4atWqzs5Om822fv361NRUk8m0fPnyhoYGVVUff/zx0aNHm0ymKVOmrFq1qt9RlZSUJCUlDdY1Dxw995e54d/c0GzZsmXBggUDf80DJ/jvr7vgH3NwzknqVTAIzrmhGQn1apg/VZo0aVJmZub69evffffdmpoa15fy8vJGjx5dWVl58uTJU6dOFRYWattXrFhx8eLFzz77zGq1vvPOO9HR0c6Qixcvzp8///bbb3/nnXdGjx69cuVKi8Vy+vTpmpqa8ePHr1mzRlGU3bt3Z2dn7969u7q6+p133gngtcI3wTw3jh8/PmvWrIG/ZgS3YJ6TGFrBPDdGRL0a2k4tACwWy+bNm2+55Zbw8PBp06aVlJSoqlpRUaEoSn19vbZPWVnZmDFjenp6KisrFUW5cuVKn4NkZWU999xzqampe/bs0bZUVVUZDAbnEWw2m8FgsFqtqqrefPPN2llE+C0tSATh3FBVdcuWLenp6Y2NjQN4pQMuJO5vHyEx5iCck9SrIBGEc0MdMfVq+LdKTi0tLa+88kpYWNhXX331ySefREVFOV/64YcfFEWxWCxlZWXjxo1zj83KykpKSpozZ47D4dC2HD58OCwsbIqL2NjYb775RqX06I4NvOCZG88//7zZbK6urh7Q6xt4oXV/NaE15uCZk9SrYBM8c2Pk1Kth/gacK6PRWFhYOGbMmK+++io1NbW1tbWh
oUF7qbq6OjIyUntTtq2trba21j18165dCQkJ9913X1tbm6IoaWlpBoPhzJkz1f9z7dq17OxsRVHCwkZQVoeHIJkbmzdv3rdv39GjR6dMmTIIV4lQEiRzEkEoSObGiKpXw3yR1NXVPf3006dPn25tbW1qanrxxRe7urpmz56dmZk5d+7cwsJCu91usViKiopWr14dFhaWkZGxePHiRx55pLa2VlXVs2fPOqdaZGTkgQMHYmJi7r777paWFm3PtWvXajs0NDTs379f2zM5OfncuXP9jqenp8fhcHR1dSmK4nA4Ojo6ApIG9CPY5kZBQcGBAwcOHTpkMpkcDsew/8e3cBdsc5J6FTyCbW6MuHo1tA+1BpvNZlu3bt306dPHjh0bGxs7f/78jz76SHvp8uXLubm5JpNp4sSJ+fn5drtd297U1LRu3bqUlJTo6Ohbbrnl3Llzqsu/Guju7v7tb3972223NTU1Wa3WgoKCqVOnGo1Gs9n85JNPakc4cuTI9OnTY2Nj8/Ly+ozn9ddfd02+64PTIKTn/jI3fJob165d67MwMzIyApcL3wX//XUX/GMOqjmpUq+CSVDNjRFYrwzOo/jBYDBop/f7CAhmeu4vc2N4C8X7G4pjhjzqFUT0399h/gYcAACAHrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQuH6D2EwGPQfBMMScwPBhjkJEeYGRHiqBAAAIGRQVXWoxwAAABCkeKoEAAAgRKsEAAAgRKsEAAAgRKsEAAAgRKsEAAAgRKsEAAAgRKsEAAAgpOvbuvlu05HAv2/eYm6MBKH1rWzMyZGAegURPfWKp0oAAABCA/B/wIXWb5aQp/83LebGcBW6v4UzJ4cr6hVE9M8NnioBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAIDdtW6cSJE/fee++ECROioqJuvPHGoqKi1tbWAJy3u7u7oKBgwoQJMTExK1eubG5u7nc3o9FocBEZGdnR0RGA4Y1YQzUfLBbLsmXLTCZTbGzsXXfdde7cuX53Ky4uzsnJiYqKSk5Odt2+Zs0a13lSWloagDEj8KhXcEW9CjbDs1X65z//uWjRoptvvvmzzz6rr6/ft29ffX39mTNnZGJVVe3q6vL71M8///yhQ4dOnjz5/fffX7x4cf369f3uZrFYWv4nNzf3gQceiIyM9Puk8GwI50N+fr7Vaj1//vyVK1cmTpy4dOnSfnczmUwbNmzYtm2b+0uFhYXOqbJkyRK/R4KgRb2CK+pVMFJ10H+EwdDT05OamlpYWNhne29vr6qqV69eXbJkSUJCQkpKymOPPdba2qq9mpWVVVRUdPvtt2dmZpaXl9tstvXr16empppMpuXLlzc0NGi7vfrqq1OmTBk/fvzEiRO3b9/ufvbExMS33npL+3N5eXl4ePi1a9c8jLahoSEyMvLw4cM6r3ow6Lm/wTM3hnY+ZGRk7N27V/tzeXl5WFhYd3e3aKglJSVJSUmuW1avXv3MM8/4e+mDKHjur7zgHDP1aqBQr6hXIgPQ7Qzt6QeD1n2fPn2631fnzZu3YsWK5ubm2traefPmPfroo9r2rKysG264obGxUfvxV7/61QMPPNDQ0NDW1vbII4/ce++9qqqeO3fOaDReuHBBVVWr1frf//63z8Fra2tdT609zT5x4oSH0e7cuXP69Ok6LncQDY/SM4TzQVXVTZs2LVq0yGKx2Gy2hx56KDc318NQ+y09EydOTE1NnTVr1ssvv9zZ2el7AgZF8NxfecE5ZurVQKFeUa9EaJX68cknnyiKUl9f7/5SRUWF60tlZWVjxozp6elRVTUrK+tPf/qTtr2qqspgMDh3s9lsBoPBarVWVlaOHTv2vffea25u7vfU58+fVxSlqqrKuSUsLOzjjz/2MNrMzMydO3f6fpWBMDxKzxDOB23nhQsXatm47rrrampqPAzVvfQcOnTo008/vXDhwv79+1NSUtx/1xwqwXN/5QXnmKlXA4V6pW2nXrnTf3+H4WeVEhISFEW5cuWK+0uXL1+OiorSdlAUxWw2OxyOxsZG7cdJkyZpf6iurjYYDLNnz546derUqVNvuumm8ePHX7lyxWw2FxcX//nPf05OTv7pT3969OjRPsePjo5WFMVms2k/trS09Pb2xsTEvP32285PurnuX15eXl1dvWbNmoG6drgbwvmgquqdd95pNpubmprsdvuyZctuv/321tZW0Xxwt3jx4nnz5k2bNi0vL+/ll1/et2+fnlQgCFGv4Ip6FaSGtlMbDNp7vU899VSf7b29vX268vLy8sjISGdXfvDgQW37999/P2rUKKvVKjpFW1vbH//4x7i4OO39Y1eJiYl/+9vftD8fOXLE83v/y5cvf/DBB327vADSc3+DZ24M4XxoaGhQ3N7g+Pzzz0XHcf8tzdV77703YcIET5caQMFzf+UF55ipVwOFeqVtp165G4BuZ2hPP0j+8Y9/jBkz5rnnnqusrHQ4HGfPns3Pzz9x4kRvb+/cuXMfeuihlpaWurq6+fPnP/LII1qI61RTVfXuu+9esmTJ1atXVVWtr69///33VVX97rvvysrKHA6HqqpvvPFGYmKie+kpKirKysqqqqqyWCwLFixYsWKFaJD19fURERHB+QFJzfAoPeqQzocpU6asW7fOZrO1t7e/8MILRqOxqanJfYTd3d3t7e3FxcVJSUnt7e3aMXt6evbu3VtdXW21Wo8cOZKRkeH8aMKQC6r7Kylox0y9GhDUK+cRqFd90CoJHT9+/O67746NjR03btyNN9744osvav9Y4PLly7m5uSaTaeLEifn5+Xa7Xdu/z1SzWq0FBQVTp041Go1ms/nJJ59UVfXUqVO33XZbTExMXFzcnDlzjh075n7ezs7OJ554IjY21mg0rlixwmaziUa4Y8eOoP2ApGbYlB516ObDmTNnFi9eHBcXFxMTM2/ePNHfNK+//rrrs96oqChVVXt6eu688874+PiIiAiz2fzss8+2tbUNeGb8E2z3V0Ywj5l6pR/1yhlOvepD//01OI/iB+2dSz1HQDDTc3+ZG8NbKN7fUBwz5FGvIKL//g7Dj3UDAAAMFFolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAoXD9h/D6vw1jxGJuINgwJyHC3IAIT5UAAACEdP0fcAAAAMMbT5UAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEdH1bN99tOhL4981bzI2RILS+lY05ORJQryCip17xVAkAAEBoAP4POD1dPLHBH6tHKF4vsfKxoSgU80ysfKweoXi9xMrH6sFTJQAAACFaJQAAACFaJQAAAKFBaZW6u7sLCgomTJgQExOzcuXK5uZm+diNGzdmZ2ePGzcuLS1t06ZNnZ2dfpx95syZBoOhrq7Op8B///v
fc+bMGTNmTEJCwqZNm+QDLRbLsmXLTCZTbGzsXXfdde7cOc/7FxcX5+TkREVFJScn9xm517yJYmXyJop1nt2/vPnE8xg8KyoqSk9Pj4yMjI+Pv++++77//nv52DVr1hhclJaWyscajUbX2MjIyI6ODsnYy5cv5+XlxcfHT5gw4fe//73XQFF+ZPIm2kcmb6JYPXkLZh7y6bUOiGJl6oBoncqsfVGszNr3vI/nte8h1muuRLEyuRLNWz1/v8gQ3V+ZOiCKlakDolzJrH1RrMzaF8XKrH1RrEyuRLEyuRJdl56/X7xQdRAdoaioKDMzs7Ky0mKxzJ8/f8WKFfKxa9euPXbsWGNj44kTJyZPnrx582b5WM327dsXLVqkKEptba18bFlZmdFo/Otf/1pXV1dTU3Ps2DH52AceeOAXv/jFjz/+aLfbV69efeONN3qO/eijj959990dO3YkJSW57iPKm0ysKG8ysRr3vOmZIaJYz2PwHPv5559XVlY2NzdXVVXdf//9OTk58rGrV68uLCxs+Z+uri75WLvd7gzMzc1dvny5fOxtt9324IMP2my2q1evzp0798knn/QcK8qPaLtMrChvMrGivOmvHoEnc72iOiATK6oDrrGidSqz9kWxMmvfc131vPZFsTK5EsXK5Eo0b2Vy5SuZ+yuqAzKxojogkyuZtS+KlVn7oliZtS+KlcmVKFYmV6LrksmVfwalVUpMTHzrrbe0P5eXl4eHh1+7dk0y1tWWLVsWLFggf15VVb/55puMjIwvvvhC8bFVysnJeeaZZzyPRxSbkZGxd+9e7c/l5eVhYWHd3d1eY0tKSvrcTlHeZGJdueZNMrbfvA1U6XHnefxez9vZ2Zmfn3/PPffIx65evdrv++vU0NAQGRl5+PBhydgrV64oilJRUaH9ePDgQaPR2NHR4TVWlB/37T7NjT55k4kV5U1/6Qk8mesV1QGZWFEdEOXKdZ3Kr333WNF2yVif1r5rrHyu3GN9ylWfeetrrmT4tI761AGvsR7qgPz9lVn7olhVYu27x/q69vs9r9dc9Yn1NVf9/l0gnyt5A/8GXF1dXX19/cyZM7UfZ82a1d3d/e233/pxqOPHj8+aNUt+/56ent/97nevvfZadHS0TydyOByff/55T0/PddddFxcXt2jRoq+++ko+PC8vr6SkpL6+vrm5+c033/zNb34zatQonwaghGbeAq+4uDg5OTk6Ovrrr7/++9//7mvs5MmTb7311h07dnR1dflx9rfffjstLe3nP/+55P7OJepkt9t9et9woAxt3kJFgOuAc536sfZFa1xm7bvu4+vad8b6kSvX80rmyn3eDmCd9FsA6oCvNdxDrE9r3z1Wfu33O2bJXDlj5XOlp6b5Q0+f1e8Rzp8/ryhKVVXV/2/HwsI+/vhjmVhXW7ZsSU9Pb2xslDyvqqo7d+5cunSpqqrfffed4stTpdraWkVR0tPTz549a7fbN2zYkJKSYrfbJc9rs9kWLlyovXrdddfV1NTInLdP5+shb15jXfXJm0ysKG96ZojnWL+fKrW1tV29evXYsWMzZ85cu3atfOyhQ4c+/fTTCxcu7N+/PyUlpbCw0Ncxq6qamZm5c+dOn8Z86623Oh8mz5s3T1GUzz77zGvsgD9V6jdvMrGivOmvHoHn9Xo91AGZXInqQL+5cl2nPq19VVwbva599318WvuusT7lyv28krlyn7e+5kqSTzW2Tx2QiRXVAfn7K/mkxD1Wcu27x/q09kVz0muu3GMlc+Xh74LBeKo08K2StoROnz6t/ah95u7EiRMysU7PP/+82Wyurq6WP++FCxcmTZpUV1en+t4qtbS0KIqyY8cO7cf29vZRo0YdPXpUJra3t3f27NkPP/xwU1OT3W7funVrWlqaTJvVb5nuN2/yy9g9b15jPeRtYEuPzPjlz3vs2DGDwdDa2upH7L59+xITE3097+HDhyMiIhoaGnwa88WLF3Nzc5OSktLT07du3aooyvnz573GDtIbcOr/zZuvsa550196As/r9XqoA15jPdQB99g+69SntS+qjTJrv88+Pq39PrE+5apPrE+50jjnrU+5kie/FtzrgEysqA7I31+Zte/5703Pa99zrOe1L4qVyZV7rHyu3K9LExpvwCUnJycmJn755Zfaj6dOnQoPD8/OzpY/wubNm/ft23f06NEpU6bIRx0/fryxsfH66683mUxaK3r99de/+eabMrFGo3HatGnOL/T06Zs9f/zxx//85z8FBQVxcXFRUVFPPfVUTU3N2bNn5Y+gCcW8Da1Ro0b58UanoigRERHd3d2+Ru3Zsyc3N9dkMvkUlZaW9sEHH9TV1VVVVaWmpqakpEybNs3XUw+sAOcthASmDrivU/m1L1rjMmvffR/5te8eK58r91j/aqY2b/XXSZ0GtQ74V8PlY0Vr32ush7XvIdZrrvqN9aNm+l3TfKCnzxIdoaioKCsrq6qqymKxLFiwwKd/AffEE09Mnz69qqqqvb29vb3d/TOwotjW1tZL/3PkyBFFUU6dOiX/Jtqrr75qNpvPnTvX3t7+hz/8YfLkyfJPLKZMmbJu3Tqbzdbe3v7CCy8YjcampiYPsd3d3e3t7cXFxUlJSe3t7Q6HQ9suyptMrChvXmM95E3PDBHFisbvNbazs/PFF1+sqKiwWq1ffPHFrbfempeXJxnb09Ozd+/e6upqq9V65MiRjIyMRx99VH7MqqrW19dHRET0+4Fuz7EnT5784YcfGhsbDxw4kJCQ8Pbbb3uOFeVHtN1rrIe8eY31kDf91SPwZPIsqgMysaI64BorWqcya18UK7P2+91Hcu2Lji+TK1Gs11x5mLcyuRqMuaEK6oBMrKgOyORKZu33Gyu59vuNlVz7Hv6+9porUazXXHm4Lplc+WdQWqXOzs4nnngiNjbWaDSuWLHCZrNJxl67dk35vzIyMuTP6+TrG3Cqqvb29m7ZsiUpKSkmJuaOO+74+uuv5WPPnDmzePHiuLi4mJiYefPmef0XUq+//rrrNUZFRWnbRXnzGushbzLnFeVNz/QSxXodgyi2q6vrvvvuS0pKioiImDp16saNG+XnVU9Pz5133hkfHx8REWE2m5999tm2tjb5MauqumPHjunTp/f7kufYXbt2JSYmjh49Ojs7u7i42GusKD+i7V5jPeTNa6yHvOmZG0NFJs+iOiATK6oDzlgP69Tr2hfFyqx9mboqWvseYr3mykOs11x5mLcyddJXMvdXFdQBmVhRHZDJlde1L4qVWfuiWJm173leec6Vh1ivufJwXTJ10j8G51H8ELr/bR6xxBI7VLFDJRRzRSyxxA5trIb/2AQAAECIVgkAAECIVgkAAECIVgkAAEBoAD7WjeFNz8foMLyF4se6MbxRryDCx7oBAAAGha6nSgAAAMMbT5UAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJ
UAAACEaJUAAACE/h82xQH7rLtt0wAAAABJRU5ErkJggg=="
-/>
+    ```bash
+    #!/bin/bash
 
-#### Hybrid (MPI and OpenMP)
+    #SBATCH --nodes=1
+    #SBATCH --tasks-per-node=1
+    #SBATCH --cpus-per-task=64
+    #SBATCH --time=01:00:00
+    #SBATCH --account=<account>
 
-In the illustration below the default binding of a Hybrid-job is shown.
-In which 8 global ranks are distributed onto 2 nodes with 16 cores each.
-Each rank has 4 cores assigned to it.
+    module purge
+    module load <modules>
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun ./path/to/openmp_application
+    ```
 
-export OMP_NUM_THREADS=4
+    * Submission: `marie@login$ sbatch batch_script.sh`
+    * Run with fewer CPUs: `marie@login$ sbatch -c 14 batch_script.sh`
 
-srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
-```
+??? example "Job file MPI"
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3de1iUdf7/8XsQA+SoDgdhZHA4CaUlpijmYdXooJvrsbxqzXa1dCsPbFlt5qF2O2xbXV52bdtlV25c7iVrhrVXWVaEupJ2gjxUYAIDgjgcZJCDIIf7+8f9a36zjCAwM/eMn3k+/oJ77rnf9z3z5u1r7hnn1siyLAEAAIjLy9U7AAAA4FzEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCctz131mg0jtoPANccWZZVrsjMATyZPTOHszsAAEBwdp3dUaj/Cg+Aa7n2LAszB/A09s8czu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9zxXDNmzNBoNF9++aVlSURExPvvv9/3LXz//fcBAQF9Xz8zMzMtLc3f3z8iIqIfOwpACOrPnPXr1ycnJw8ZMiQ6OnrDhg2XL1/ux+5CLMQdjzZ8+PDHH39ctXJarXbdunVbtmxRrSIAt6LyzGlqanrzzTfPnj2blZWVlZW1efNm1UrD3RB3PNqKFSuKi4vfe+8925uqqqoWL14cFham0+keeeSRlpYWZfnZs2dvu+22kJCQG264IS8vz7L+xYsXV69ePXLkyNDQ0Hvuuae2ttZ2m3feeeeSJUtGjhzppMMB4OZUnjk7duyYOnXq8OHD09LSHnjgAeu7w9MQdzxaQEDAli1bnnrqqfb29m43LVy4cPDgwcXFxd9++21+fn5GRoayfPHixTqd7vz58/v37//HP/5hWf/ee+81mUwFBQXl5eXBwcHLly9X7SgAXCtcOHOOHDkyfvx4hx4NrimyHezfAlxo+vTpzz33XHt7++jRo7dv3y7Lcnh4+L59+2RZLiwslCSpurpaWTMnJ8fX17ezs7OwsFCj0Vy4cEFZnpmZ6e/vL8tySUmJRqOxrN/Q0KDRaMxm8xXr7t69Ozw83NlHB6dy1d8+M+ea5qqZI8vypk2bRo0aVVtb69QDhPPY/7fvrXa8gpvx9vZ+8cUXV65cuWzZMsvCiooKf3//0NBQ5VeDwdDa2lpbW1tRUTF8+PChQ4cqy+Pj45UfjEajRqOZMGGCZQvBwcGVlZXBwcFqHQeAa4P6M+fZZ5/dtWtXbm7u8OHDnXVUcHvEHUjz5s175ZVXXnzxRcsSnU7X3NxcU1OjTB+j0ejj46PVaqOiosxmc1tbm4+PjyRJ58+fV9aPjo7WaDTHjx8n3wC4KjVnzpNPPpmdnX3o0CGdTue0A8I1gM/uQJIk6eWXX962bVtjY6Pya0JCwqRJkzIyMpqamkwm08aNG++//34vL6/Ro0ePGzfutddekySpra1t27ZtyvqxsbHp6ekrVqyoqqqSJKmmpmbv3r22VTo7O1tbW5X37FtbW9va2lQ6PABuRp2Zs2bNmuzs7AMHDmi12tbWVv4juicj7kCSJCk1NXXOnDmW/wqh0Wj27t3b0tIyatSocePGjR079tVXX1Vuevfdd3NyclJSUmbOnDlz5kzLFnbv3h0ZGZmWlhYYGDhp0qQjR47YVtmxY4efn9+yZctMJpOfnx8nlgGPpcLMMZvN27dv//nnnw0Gg5+fn5+fX3JysjpHBzeksXwCaCB31mgkSbJnCwCuRa7622fmAJ7J/r99zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBedu/CY1GY/9GAKCPmDkA+ouzOwAAQHAaWZZdvQ8AAABOxNkdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADB2fU1g3zZlycY2FcV0BueQP2vsaCvPAEzBz2xZ+ZwdgcAAAjOAReR4IsKRWX/qyV6Q1SufSVNX4mKmYOe2N8bnN0BAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjx486PP/7461//WqvVDhkyZPTo0U888cQANjJ69Oj333+/jyvfdNNNWVlZV7wpMzMzLS3N398/IiJiALsBx3Kr3li/fn1ycvKQIUOio6M3bNhw+fLlAewM3IFb9RUzx624VW942swRPO50dXXdfvvtkZGRJ0+erK2tzcrKMhgMLtwfrVa7bt26LVu2uHAfoHC33mhqanrzzTfPnj2blZWVlZW1efNmF+4MBszd+oqZ4z7crTc8bubIdrB/C8529uxZSZJ+/PFH25vOnTu3aNGi0NDQqKiohx9+uLm5WVleX1+/evXq6OjowMDAcePGFRYWyrKcmJi4b98+5dbp06cvW7bs8uXLDQ0Nq1at0ul0Wq327rvvrqmpkWX5kUceGTx4sFar1ev1y5Ytu+Je7d69Ozw83FnH7Dj2PL/0xsB6Q7Fp06apU6c6/pgdx1XPL33FzHHGfdXhnr2h8ISZI/jZncjIyISEhFWrVv373/8uLy+3vmnhwoWDBw8uLi7+9ttv8/PzMzIylOVLly4tKys7evSo2Wx+5513AgMDLXcpKyubMmXKLbfc8s477wwePPjee+81mUwFBQXl5eXBwcHLly+XJGn79u3Jycnbt283Go3vvPOOiseK/nHn3jhy5Mj48eMdf8xwPnfuK7iWO/eGR8wc16YtFZhMpieffDIlJcXb2zsuLm737t2yLBcWFkqSVF1drayTk5Pj6+vb2dlZXFwsSVJlZWW3jSQmJj7zzDM6ne7NN99UlpSUlGg0GssWGhoaNBqN2WyWZfnGG29UqvSEV1puwg17Q5blTZs2jRo1qra21oFH6nCuen7pK2aOM+6rGjfsDdljZo74cceisbHxlVde8fLyOnHixOeff+7v72+5qbS0VJIkk8mUk5MzZMgQ2/smJiaGh4enpqa2trYqS7744gsvLy+9lZCQkB9++EFm9Nh9X/W5T29s3brVYDAYjUaHHp/jEXf6wn36ipnjbtynNzxn5gj+Zpa1gICAjIwMX1/fEydO6HS65ubmmpoa5Saj0ejj46O8wdnS0lJVVWV7923btoWGht51110tLS2SJEVHR2s0muPHjxt/UV9fn5ycLEmSl5cHPapicJPeePLJJ3ft2nXo0CG9
Xu+Eo4Ta3KSv4IbcpDc8auYI/kdy/vz5xx9/vKCgoLm5+cKFCy+88EJ7e/uECRMSEhImTZqUkZHR1NRkMpk2btx4//33e3l5xcbGpqenP/jgg1VVVbIsnzp1ytJqPj4+2dnZQUFBd9xxR2Njo7LmihUrlBVqamr27t2rrBkREVFUVHTF/ens7GxtbW1vb5ckqbW1ta2tTZWHAVfgbr2xZs2a7OzsAwcOaLXa1tZW4f9TqKjcra+YOe7D3XrD42aOa08uOVtDQ8PKlSvj4+P9/PxCQkKmTJny0UcfKTdVVFQsWLBAq9WOGDFi9erVTU1NyvILFy6sXLkyKioqMDAwJSWlqKhItvokfEdHx29/+9uJEydeuHDBbDavWbMmJiYmICDAYDCsXbtW2cLBgwfj4+NDQkIWLlzYbX/eeOMN6wff+gSmG7Ln+aU3+tUb9fX13f4wY2Nj1Xss+s9Vzy99xcxxxn3V4Va94YEzR2PZygBoNBql/IC3AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK/ibWQAAAMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAILztn8TymXZAVv0BpyBvkJP6A30hLM7AABAcBpZll29DwAAAE7E2R0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgODs+lZlvr/SEwzsm5noDU+g/rd20VeegJmDntgzczi7AwAABOeAa2bxvcyisv/VEr0hKte+kqavRMXMQU/s7w3O7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAghM27uTl5c2ZM2fYsGH+/v5jxozZuHFjc3OzCnU7OjrWrFkzbNiwoKCge++99+LFi1dcLSAgQGPFx8enra1Nhd3zWK7qB5PJtGTJEq1WGxIScttttxUVFV1xtczMzLS0NH9//4iICOvly5cvt+6TrKwsFfYZA8PMgTVmjrsRM+785z//mTVr1o033nj06NHq6updu3ZVV1cfP368L/eVZbm9vX3Apbdu3XrgwIFvv/32zJkzZWVlq1atuuJqJpOp8RcLFiyYP3++j4/PgIuidy7sh9WrV5vN5tOnT1dWVo4YMWLx4sVXXE2r1a5bt27Lli22N2VkZFhaZdGiRQPeEzgVMwfWmDnuSLaD/Vtwhs7OTp1Ol5GR0W15V1eXLMvnzp1btGhRaGhoVFTUww8/3NzcrNyamJi4cePGW265JSEhITc3t6GhYdWqVTqdTqvV3n333TU1Ncpqr776ql6vDw4OHjFixHPPPWdbPSws7O2331Z+zs3N9fb2rq+v72Vva2pqfHx8vvjiCzuP2hnseX7dpzdc2w+xsbFvvfWW8nNubq6Xl1dHR0dPu7p79+7w8HDrJffff/8TTzwx0EN3Ilc9v+7TV9aYOY7CzGHm9MQBicW15Z1BSdAFBQVXvHXy5MlLly69ePFiVVXV5MmTH3roIWV5YmLiDTfcUFtbq/w6d+7c+fPn19TUtLS0PPjgg3PmzJFluaioKCAg4Oeff5Zl2Ww2f/fdd902XlVVZV1aOaucl5fXy96+/PLL8fHxdhyuE4kxelzYD7Isb9iwYdasWSaTqaGh4b777luwYEEvu3rF0TNixAidTjd+/PiXXnrp8uXL/X8AnIK4Y42Z4yjMHGZOT4g7V/D5559LklRdXW17U2FhofVNOTk5vr6+nZ2dsiwnJia+/vrryvKSkhKNRmNZraGhQaPRmM3m4uJiPz+/PXv2XLx48YqlT58+LUlSSUmJZYmXl9fHH3/cy94mJCS8/PLL/T9KNYgxelzYD8rK06dPVx6NpKSk8vLyXnbVdvQcOHDgyy+//Pnnn/fu3RsVFWX7etFViDvWmDmOwsxRljNzbNn//Ar42Z3Q0FBJkiorK21vqqio8Pf3V1aQJMlgMLS2ttbW1iq/RkZGKj8YjUaNRjNhwoSYmJiYmJixY8cGBwdXVlYaDIbMzMy///3vERER06ZNO3ToULftBwYGSpLU0NCg/NrY2NjV1RUUFPTPf/7T8skv6/Vzc3ONRuPy5csddeyw5cJ+kGV59uzZBoPhwoULTU1NS5YsueWWW5qbm3vqB1vp6emTJ0+Oi4tbuHDhSy+9tGvXLnseCjgJMwfWmDluyrVpyxmU903/+Mc/dlve1dXVLVnn5ub6+PhYkvW+ffuU5WfOnBk0aJDZbO6pREtLy/PPPz906FDlvVhrYWFhO3fuVH4+ePBg7++j33333ffcc0//Dk9F9jy/7tMbLuyHmpoayeaNhmPHjvW0HdtXWtb27NkzbNiw3g5VRa56ft2nr6wxcxyFmaMsZ+bYckBicW15J/nggw98fX2feeaZ4uLi1tbWU6dOrV69Oi8vr6ura9KkSffdd19jY+P58+enTJny4IMPKnexbjVZlu+4445FixadO3dOluXq6up3331XluWffvopJyentbVVluUdO3aEhYXZjp6NGzcmJiaWlJSYTKapU6cuXbq0p52srq6+7rrr3PMDgwoxRo/s0n7Q6/UrV65saGi4dOnSs88+GxAQcOHCBds97OjouHTpUmZmZnh4+KVLl5RtdnZ2vvXWW0aj0Ww2Hzx4MDY21vI2v8sRd7ph5jgEM8eyBWZON8SdHh05cuSOO+4ICQkZMmTImDFjXnjhBeUD8BUVFQsWLNBqtSNGjFi9enVTU5OyfrdWM5vNa9asiYmJCQgIMBgMa9eulWU5Pz9/4sSJQUFBQ4cOTU1NPXz4sG3dy5cvP/rooyEhIQEBAUuXLm1oaOhpD//617+67QcGFcKMHtl1/XD8+PH09PShQ4cGBQVNnjy5p39p3njjDetzrv7+/rIsd3Z2zp49e/jw4dddd53BYHjqqadaWloc/sgMDHHHFjPHfswcy92ZOd3Y//xqLFsZAOVdQHu2AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK+BHlQEAAKwRdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwXnbv4mrXmEVHovegDPQV+gJvYGecHYHAAAIzq5rZgEAALg/zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARn17cq8/2VnmBg38xEb3gC9b+1i77yBMwc9MSemcPZHQAAIDgHXDPLVa/wqKtOXXt42mPlaXVdxdMeZ0+raw9Pe6w8ra49OLsDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCcw+KO2Wz29vaOiYnR6/V/+MMf+v6f8o1G4+zZs3u69cMPPzQYDDExMZmZmWrWnT9/fkhIyKJFi3pawRl1S0tLZ86cGRUVlZSU9Mknn6hWt6WlJSUlRafT6fX6bdu29XGDfUdv2F9X1N6wh5OeX0mSWlpa9Hr9unXr1Kzr7++v0+l0Ot3ixYvVrHv27NmZM2eGhYUlJSW1tra
qU7egoED3C29v77y8vD5us4/oDYfUFa03ZDtYb6G+vj4qKkqW5dbW1gkTJnz88cd93EhpaemsWbOueFN7e7vBYDAajTU1NdHR0Q0NDerUlWU5Nzc3Ozt74cKF1gudXbe4uPjo0aOyLJ86dSo8PLyzs1Oduh0dHefPn5dlua6uLjIyUvm5W93+ojccW1ek3rCHCs+vLMsbN25cvHjx2rVr1ayr1+ttF6pQd/bs2Tt27JBluby8vL29XbW6ipqamhEjRnR0dNjW7S96w+F1hekNhePfzPLx8Zk4ceKZM2ckSWpra5s1a1ZKSsq4ceMOHTokSZLRaExNTX3ooYduvfXWRx991PqOeXl5kydPrqmpsSz5+uuvExIS9Hq9VqudMWNGTk6OOnUlSZoxY0ZgYKDKx2swGCZNmiRJ0vXXXy9JUnNzszp1Bw0aFB4eLklSR0dHQECAn59fXw58AOgNesMZHPv8lpSU/Pjjj3feeafKdV1yvKWlpUajccWKFZIkjRw50tu7t+/Zd8bxvvfee3fdddegQYMG9lBcFb1Bb/x/9mQl6y1YUt7FixfHjh2bm5sry3JnZ2d9fb0sy1VVVWlpabIsl5aWBgcH19TUyLI8bdq0kpISJeXl5eWlpqaaTCbr7b/77ru///3vlZ//9Kc/bd++XZ26is8++6wvr+AdXleW5U8//XTKlClq1m1oaIiOjh40aNAbb7xxxbr9RW84o64sRG/YQ4XjXbhwYWFh4c6dO6/6Ct6xdQMCAgwGw/jx4z/55BPV6n766aczZsyYP3/+TTfdtHnzZjWPVzFz5sycnJwr1u0vesOxdUXqjf+3Bbvu/L+HPWjQIL1ef9111y1btkxZ2NXV9fTTT6elpU2fPj04OFiW5dLS0mnTpim3rly5Mjc3t7S0VK/XjxkzxnKe3KKP/6Q5vK7iqv+kOaluWVlZUlLSTz/9pHJd5V6jRo0qLy+3rdtf9Aa94QzOPt5PPvlk/fr1siz3/k+aMx5no9Eoy3J+fn5kZGRdXZ06dT/++GNfX9/CwsJLly5NnTrV8maEOn1lMpkiIyMt71bI7j1z6A3Vjld2dG8oHPlmVkREhNFoLCsr++qrr3744QdJkvbv319cXHzo0KGDBw/6+voqqw0ePFj5wcvLq6OjQ5KksLAwPz+/EydOdNtgZGTkuXPnlJ8rKysjIyPVqeuq45UkyWw233XXXdu3bx89erSadRUxMTGpqamnTp3q/4NxFfQGveEMDj/eY8eO7dmzJyYm5rHHHnv77befffZZdepKkqTX6yVJGjduXHJy8unTp9WpGxUVlZiYmJiY6Ovre+utt548eVK145Uk6b333ps3b56T3smiN+iNbhz/2Z2IiIgtW7Zs3bpVkqT6+nqDweDt7f3111+bTKae7hIUFPTBBx889thj33zzjfXyiRMnFhUVlZeX19XV5ebm9v6BeQfW7RcH1r18+fKCBQvWr18/a9YsNetWVVUp0aGiouLYsWPJyclXrT4w9Aa94QwOPN7NmzdXVFQYjca//e1vv/vd7zZt2qRO3bq6ugsXLkiSVFRUdOrUqdjYWHXq3nDDDV1dXRUVFZ2dnf/973+TkpLUqavYs2fPkiVLeqloP3qD3rBwyvfuLF68+MSJE4WFhfPmzfv666+XLl36r3/9Kzo6upe7REREZGdnP/DAA0VFRZaF3t7er7322owZM1JSUrZu3RoUFKROXUmSbrvttqVLl+7fv1+n0xUUFKhT9/PPPz98+PDTTz+t/B88o9GoTt26urrZs2dHRUXNmjXrz3/+s/JKwknoDXrDGRz4/Lqk7tmzZ1NTU6Oion7zm9+8/vrroaGh6tTVaDTbtm1LT09PSkq6/vrr586dq05dSZJMJtPp06enTZvWe0X70Rv0hkJjeUtsIHfWaCRJsmcL1BW17rW4z9SlLnWv3brX4j5TV826fKsyAAAQHHEHAAAIjrgDAAAEp17c6eUKR1e9CNGAlfZ8pSEVLgbU09VVrnoBFHv0dJUTZ1+kxh5/+ctf4uPj4+Li1q9fb3tTQkJCQkLCvn377Kxi22b96skBd2m3O/bSk7Yr29OlV9zhnnrSdmWndqk6mDkWzJxumDk9rSzyzLHnS3v6vgXbKxw1NDR0dXUpt17xIkQOqWt7pSFL3Z4uBuSQugrrq6tYH+8VL4DiqLrdrnJiXVfR7UIkjqo74PuePXtWr9e3tLS0t7enpKR88803ln3+7rvvbrrppkuXLtXV1SmT1J663dqsvz3Ze5f2vW4vPWm78lW7tO91FT31pO3KvXep/dNjYJg5vWPm9GVNZo5nzhyVzu7YXuFo7NixlZWVyq19vwhRf9leachS19kXA+p2dRXr43WeUpurnNjWdfZFavorICDA19e3ra1NuQTd8OHDLftcWFiYmprq6+s7bNiwkSNHHj582J5C3dqsvz054C7tdsdeetJ2ZXu61HaHe+lJ5/0Nugozh5nTE2aOZ84cleLOuXPnoqKilJ91Ol1lZWVWVtZVvz/AgT777LO4uLjAwEDruhcvXtTr9ZGRkevXr7/qF7f014YNG55//nnLr9Z16+rqYmNjb7755gMHDji26JkzZ3Q63YIFC8aNG7dly5ZudRUqfLVXv4SEhGRkZERHR0dGRs6bN2/UqFGWfR4zZsyRI0caGxvPnz+fn5/v2Nntnj1py4Fd2ktP2nJel6rDPZ9fZo47YOZ45szp7RqnTqWETXWUl5evXbs2Ozu7W92goKCysjKj0Thz5sw5c+aMHDnSURUPHDgQHR2dmJh49OhRZYl13VOnTun1+oKCgrlz5548eXLYsGGOqtvZ2Xns2LHvv/9er9enp6dPmjTp9ttvt16hurq6sLBw+vTpjqpov/Ly8ldffbWkpMTX1/dXv/rV3LlzLY/VmDFjVq1aNX369IiIiLS0tN4vyWs/d+hJW47q0t570pbzutRV3OH5Zea4A2aOZ84clc7u9PEKR85w1SsNOeNiQL1fXaUvF0AZmKte5cSpF6kZmIKCgptvvlmr1QYEBMycOfOrr76yvvWRRx7Jz8/fv39/fX19XFycA+u6c0/asr9L+3jFHwvndak63Pn5Zea4FjOnL8SbOSrFHdsrHG3evNlsNju7ru2Vhix1nXoxINurq1jq9usCKP1le5WTbo+zu51VliQpPj7+m2++aWpqamtrO3z4cEJCgvU+l5WVSZL04Ycfms3m1NRUB9Z1w5605cAu7aUnbTm1S9Xhhs8vM8dNMHM8dObY8znnfm3hgw8+GDVqVHR09M6dO2VZHjlyZGNjo3JTenq6Vqv18/OLiorKz893YN2PPvpo0KBBUb8oLS211D158mRSUlJkZGRCQsKuXbv6srUBPGI7d+5UPpFuqVtQUBAXFxcZGTl69Oi9e/c6vO4XX3yRlJQUHx+/bt06+X8f5/Pnz0dGRnZ2dvZxU/Z0SL/u+/zzz8fFxcXGxmZkZMj/u88TJ04MCwu7+eabT506ZWdd2zbrV0/23qV9r9tLT9qufNUu7dfxKmx70nblq3ap/dNjYJg5V8XM6Qtmjg
fOHPXijrWioqJHH32UuqLWtee+nvZYeVpdO11zx0tdderac19Pe6w8ra4FlwilrlPqXov7TF3qUvfarXst7jN11azLRSQAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAME54GsGITZ7vvILYnPVV41BbMwc9ISvGQQAAOiRXWd3AAAA3B9ndwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACC4/wNeW27o5DoAAAACSURBVCEI/r8gawAAAABJRU5ErkJggg=="
-/>
+    ```bash
+    #!/bin/bash
 
-### Node Features for Selective Job Submission
+    #SBATCH --ntasks=64
+    #SBATCH --time=01:00:00
+    #SBATCH --account=<account>
 
-The nodes in our HPC system are becoming more diverse in multiple aspects: hardware, mounted
-storage, software. The system administrators can describe the set of properties and it is up to the
-user to specify her/his requirements. These features should be thought of as changing over time
-(e.g. a file system get stuck on a certain node).
+    module purge
+    module load <modules>
 
-A feature can be used with the Slurm option `--constrain` or `-C` like
-`srun -C fs_lustre_scratch2 ...` with `srun` or `sbatch`. Combinations like
-`--constraint="fs_beegfs_global0`are allowed. For a detailed description of the possible
-constraints, please refer to the Slurm documentation (<https://slurm.schedmd.com/srun.html>).
+    srun ./path/to/mpi_application
+    ```
 
-**Remark:** A feature is checked only for scheduling. Running jobs are not affected by changing
-features.
+    * Submission: `marie@login$ sbatch batch_script.sh`
+    * Run with fewer MPI tasks: `marie@login$ sbatch --ntasks 14 batch_script.sh`
 
-### Available features on Taurus
+## Manage and Control Jobs
 
-| Feature | Description                                                              |
-|:--------|:-------------------------------------------------------------------------|
-| DA      | subset of Haswell nodes with a high bandwidth to NVMe storage (island 6) |
+### Job and Slurm Monitoring
 
-#### File system features
+On the command line, use `squeue` to watch the scheduling queue. This command will tell you the
+reason why a job is not running (job status in the last column of the output). More information
+about job parameters can also be determined with `scontrol -d show job <jobid>`. The following
+table holds detailed descriptions of the possible job states:
+
+??? tip "Reason Table"
+
+    | Reason             | Long Description  |
+    |:-------------------|:------------------|
+    | `Dependency`         | This job is waiting for a dependent job to complete. |
+    | `None`               | No reason is set for this job. |
+    | `PartitionDown`      | The partition required by this job is in a down state. |
+    | `PartitionNodeLimit` | The number of nodes required by this job is outside of its partition's current limits. Can also indicate that required nodes are down or drained. |
+    | `PartitionTimeLimit` | The job's time limit exceeds its partition's current time limit. |
+    | `Priority`           | One or more higher priority jobs exist for this partition. |
+    | `Resources`          | The job is waiting for resources to become available. |
+    | `NodeDown`           | A node required by the job is down. |
+    | `BadConstraints`     | The job's constraints cannot be satisfied. |
+    | `SystemFailure`      | Failure of the Slurm system, a filesystem, the network, etc. |
+    | `JobLaunchFailure`   | The job could not be launched. This may be due to a filesystem problem, invalid program name, etc. |
+    | `NonZeroExitCode`    | The job terminated with a non-zero exit code. |
+    | `TimeLimit`          | The job exhausted its time limit. |
+    | `InactiveLimit`      | The job reached the system inactive limit. |
 
-A feature `fs_*` is active if a certain file system is mounted and available on a node. Access to
-these file systems are tested every few minutes on each node and the Slurm features set accordingly.
+In addition, the `sinfo` command gives you a quick status overview.
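+
+For a quick look at your own jobs and the overall node status, the following illustrative calls
+can be used (a minimal sketch using standard Slurm options):
+
+```console
+marie@login$ squeue -u $USER
+marie@login$ sinfo --summarize
+```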
 
-| Feature            | Description                                                          |
-|:-------------------|:---------------------------------------------------------------------|
-| fs_lustre_scratch2 | `/scratch` mounted read-write (the OS mount point is `/lustre/scratch2)` |
-| fs_lustre_ssd      | `/lustre/ssd` mounted read-write                                       |
-| fs_warm_archive_ws | `/warm_archive/ws` mounted read-only                                   |
-| fs_beegfs_global0  | `/beegfs/global0` mounted read-write                                   |
+For detailed information on why your submitted job has not started yet, you can use the command
 
-For certain projects, specific file systems are provided. For those,
-additional features are available, like `fs_beegfs_<projectname>`.
+```console
+marie@login$ whypending <jobid>
+```
 
-## Editing Jobs
+### Editing Jobs
 
 Jobs that have not yet started can be altered. Using `scontrol update timelimit=4:00:00
-jobid=<jobid>` is is for example possible to modify the maximum runtime. scontrol understands many
-different options, please take a look at the man page for more details.
+jobid=<jobid>` it is possible, for example, to modify the maximum runtime. `scontrol` understands
+many different options, please take a look at the
+[man page](https://slurm.schedmd.com/scontrol.html) for more details.
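+
+For instance, extending the time limit of a pending job could look like this (an illustrative
+invocation of the command quoted above):
+
+```console
+marie@login$ scontrol update timelimit=4:00:00 jobid=<jobid>
+```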
 
-## Job and Slurm Monitoring
+### Canceling Jobs
 
-On the command line, use `squeue` to watch the scheduling queue. This command will tell the reason,
-why a job is not running (job status in the last column of the output). More information about job
-parameters can also be determined with `scontrol -d show job <jobid>` Here are detailed descriptions
-of the possible job status:
-
-| Reason             | Long description                                                                                                                                 |
-|:-------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|
-| Dependency         | This job is waiting for a dependent job to complete.                                                                                             |
-| None               | No reason is set for this job.                                                                                                                   |
-| PartitionDown      | The partition required by this job is in a DOWN state.                                                                                           |
-| PartitionNodeLimit | The number of nodes required by this job is outside of its partitions current limits. Can also indicate that required nodes are DOWN or DRAINED. |
-| PartitionTimeLimit | The jobs time limit exceeds its partitions current time limit.                                                                                   |
-| Priority           | One or higher priority jobs exist for this partition.                                                                                            |
-| Resources          | The job is waiting for resources to become available.                                                                                            |
-| NodeDown           | A node required by the job is down.                                                                                                              |
-| BadConstraints     | The jobs constraints can not be satisfied.                                                                                                       |
-| SystemFailure      | Failure of the Slurm system, a file system, the network, etc.                                                                                    |
-| JobLaunchFailure   | The job could not be launched. This may be due to a file system problem, invalid program name, etc.                                              |
-| NonZeroExitCode    | The job terminated with a non-zero exit code.                                                                                                    |
-| TimeLimit          | The job exhausted its time limit.                                                                                                                |
-| InactiveLimit      | The job reached the system InactiveLimit.                                                                                                        |
+The command `scancel <jobid>` kills a single job and removes it from the queue. By using `scancel -u
+<username>` you can send a canceling signal to all of your jobs at once.
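+
+For example (illustrative invocations of the commands described above):
+
+```console
+marie@login$ scancel <jobid>          # cancel a single job
+marie@login$ scancel -u <username>    # cancel all of your jobs at once
+```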
 
-In addition, the `sinfo` command gives you a quick status overview.
+### Accounting
+
+The Slurm command `sacct` provides job statistics like memory usage, CPU time, energy usage etc.
 
-For detailed information on why your submitted job has not started yet, you can use: `whypending
-<jobid>`.
+!!! hint "Learn from old jobs"
 
-## Accounting
+    We highly encourage you to use `sacct` to learn from your previous jobs in order to better
+    estimate the requirements, e.g., runtime, for future jobs.
 
-The Slurm command `sacct` provides job statistics like memory usage, CPU
-time, energy usage etc. Examples:
+`sacct` outputs the following fields by default.
 
-```Shell Session
+```console
 # show all own jobs contained in the accounting database
-sacct
-# show specific job
-sacct -j &lt;JOBID&gt;
-# specify fields
-sacct -j &lt;JOBID&gt; -o JobName,MaxRSS,MaxVMSize,CPUTime,ConsumedEnergy
-# show all fields
-sacct -j &lt;JOBID&gt; -o ALL
+marie@login$ sacct
+       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
+------------ ---------- ---------- ---------- ---------- ---------- --------
+[...]
 ```
 
-Read the manpage (`man sacct`) for information on the provided fields.
+We'd like to point your attention to the following options to gain insight into your jobs.
 
-Note that sacct by default only shows data of the last day. If you want
-to look further into the past without specifying an explicit job id, you
-need to provide a startdate via the **-S** or **--starttime** parameter,
-e.g
+??? example "Show specific job"
 
-```Shell Session
-# show all jobs since the beginning of year 2020:
-sacct -S 2020-01-01
-```
+    ```console
+    marie@login$ sacct -j <JOBID>
+    ```
 
-## Killing jobs
+??? example "Show all fields for a specific job"
 
-The command `scancel <jobid>` kills a single job and removes it from the queue. By using `scancel -u
-<username>` you are able to kill all of your jobs at once.
+    ```console
+    marie@login$ sacct -j <JOBID> -o All
+    ```
 
-## Host List
+??? example "Show specific fields"
 
-If you want to place your job onto specific nodes, there are two options for doing this. Either use
-`-p` to specify a host group that fits your needs. Or, use `-w` or (`--nodelist`) with a name node
-nodes that will work for you.
+    ```console
+    marie@login$ sacct -j <JOBID> -o JobName,MaxRSS,MaxVMSize,CPUTime,ConsumedEnergy
+    ```
 
-## Job Profiling
+The manual page (`man sacct`) and the [online reference](https://slurm.schedmd.com/sacct.html)
+provide comprehensive documentation regarding available fields and formats.
 
-\<a href="%ATTACHURL%/hdfview_memory.png"> \<img alt="" height="272"
-src="%ATTACHURL%/hdfview_memory.png" style="float: right; margin-left:
-10px;" title="hdfview" width="324" /> \</a>
+!!! hint "Time span"
 
-Slurm offers the option to gather profiling data from every task/node of the job. Following data can
-be gathered:
+    By default, `sacct` only shows data of the last day. If you want to look further into the past
+    without specifying an explicit job id, you need to provide a start date via the `-S` option.
+    A certain end date is also possible via `-E`.
 
-- Task data, such as CPU frequency, CPU utilization, memory
-  consumption (RSS and VMSize), I/O
-- Energy consumption of the nodes
-- Infiniband data (currently deactivated)
-- Lustre filesystem data (currently deactivated)
+??? example "Show all jobs since the beginning of year 2021"
 
-The data is sampled at a fixed rate (i.e. every 5 seconds) and is stored in a HDF5 file.
+    ```console
+    marie@login$ sacct -S 2021-01-01 [-E now]
+    ```
 
-**CAUTION**: Please be aware that the profiling data may be quiet large, depending on job size,
-runtime, and sampling rate. Always remove the local profiles from
-`/lustre/scratch2/profiling/${USER}`, either by running sh5util as shown above or by simply removing
-those files.
+## Jobs at Reservations
 
-Usage examples:
+How to ask for a reservation is described in the section
+[reservations](overview.md#exclusive-reservation-of-hardware).
+Once we have agreed on your requirements, we will send you an e-mail with your reservation name.
+You can then see more information about your reservation with the following command:
 
-```Shell Session
-# create energy and task profiling data (--acctg-freq is the sampling rate in seconds)
-srun --profile=All --acctg-freq=5,energy=5 -n 32 ./a.out
-# create task profiling data only
-srun --profile=All --acctg-freq=5 -n 32 ./a.out
+```console
+marie@login$ scontrol show res=<reservation name>
+# e.g. scontrol show res=hpcsupport_123
+```
 
-# merge the node local files in /lustre/scratch2/profiling/${USER} to single file
-# (without -o option output file defaults to job_&lt;JOBID&gt;.h5)
-sh5util -j &lt;JOBID&gt; -o profile.h5
-# in jobscripts or in interactive sessions (via salloc):
-sh5util -j ${SLURM_JOBID} -o profile.h5
+If you want to use your reservation, you have to add the parameter
+`--reservation=<reservation name>` either in your sbatch script or to your `srun` or `salloc` command.
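+
+In a job file this could look like the following sketch (the reservation name is a placeholder
+taken from the example above):
+
+```bash
+#SBATCH --reservation=hpcsupport_123
+```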
 
-# view data:
-module load HDFView
-hdfview.sh profile.h5
-```
+## Node Features for Selective Job Submission
 
-More information about profiling with Slurm:
+The nodes in our HPC system are becoming more diverse in multiple aspects: hardware, mounted
+storage, software. The system administrators can describe the set of properties and it is up to the
+user to specify her/his requirements. These features should be thought of as changing over time
+(e.g., a filesystem gets stuck on a certain node).
 
-- [Slurm Profiling](http://slurm.schedmd.com/hdf5_profile_user_guide.html)
-- [sh5util](http://slurm.schedmd.com/sh5util.html)
+A feature can be used with the Slurm option `--constraint` or `-C` like
+`srun -C fs_lustre_scratch2 ...` with `srun` or `sbatch`. Combinations like
+`--constraint="fs_beegfs_global0&fs_lustre_ssd"` are allowed. For a detailed description of the
+possible constraints, please refer to the [Slurm documentation](https://slurm.schedmd.com/srun.html).
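+
+In a job file, requesting a feature could look like this minimal sketch:
+
+```bash
+#SBATCH --constraint=fs_lustre_scratch2    # only schedule on nodes where /scratch is mounted
+```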
 
-## Reservations
+!!! hint
 
-If you want to run jobs, which specifications are out of our job limitations, you could
-[ask for a reservation](mailto:hpcsupport@zih.tu-dresden.de). Please add the following information
-to your request mail:
+      A feature is checked only for scheduling. Running jobs are not affected by changing features.
 
-- start time (please note, that the start time have to be later than
-  the day of the request plus 7 days, better more, because the longest
-  jobs run 7 days)
-- duration or end time
-- account
-- node count or cpu count
-- partition
+### Available Features
 
-After we agreed with your requirements, we will send you an e-mail with your reservation name. Then
-you could see more information about your reservation with the following command:
+| Feature | Description                                                              |
+|:--------|:-------------------------------------------------------------------------|
+| DA      | subset of Haswell nodes with a high bandwidth to NVMe storage (island 6) |
 
-```Shell Session
-scontrol show res=<reservation name>
-# e.g. scontrol show res=hpcsupport_123
-```
+#### Filesystem Features
 
-If you want to use your reservation, you have to add the parameter `--reservation=<reservation
-name>` either in your sbatch script or to your `srun` or `salloc` command.
+A feature `fs_*` is active if a certain filesystem is mounted and available on a node. Access to
+these filesystems is tested every few minutes on each node and the Slurm features are set
+accordingly.
 
-## Slurm External Links
+| Feature            | Description                                                          |
+|:-------------------|:---------------------------------------------------------------------|
+| `fs_lustre_scratch2` | `/scratch` mounted read-write (mount point is `/lustre/scratch2`)   |
+| `fs_lustre_ssd`      | `/lustre/ssd` mounted read-write                                   |
+| `fs_warm_archive_ws` | `/warm_archive/ws` mounted read-only                               |
+| `fs_beegfs_global0`  | `/beegfs/global0` mounted read-write                               |
 
-- Manpages, tutorials, examples, etc: (http://slurm.schedmd.com/)
-- Comparison with other batch systems: (http://www.schedmd.com/slurmdocs/rosetta.html)
+For certain projects, specific filesystems are provided. For those,
+additional features are available, like `fs_beegfs_<projectname>`.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
index 187bd7cf82651718fb0b188edfa0c95f33621b20..396657db06766eaab6f8694ca4bed4f8014cf7f4 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
@@ -1,5 +1,358 @@
-# SlurmExamples
+# Job Examples
 
-## Array-Job with Afterok-Dependency and DataMover Usage
+## Parallel Jobs
 
-TODO
+For submitting parallel jobs, a few rules have to be understood and followed. In general, they
+depend on the type of parallelization and architecture.
+
+### OpenMP Jobs
+
+An SMP-parallel job can only run within a node, so it is necessary to include the options `-N 1`
+and `-n 1`. The maximum number of processors for an SMP-parallel program is 896 and 56 on the
+partitions `taurussmp8` and `smp2`, respectively. Please refer to the
+[partitions section](partitions_and_limits.md#memory-limits) for up-to-date information. Using the
+option `--cpus-per-task=<N>`, Slurm will start one task and you will have `N` CPUs available for
+your job. An example job file would look like:
+
+!!! example "Job file for OpenMP application"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --nodes=1
+    #SBATCH --tasks-per-node=1
+    #SBATCH --cpus-per-task=8
+    #SBATCH --time=08:00:00
+    #SBATCH -J Science1
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    ./path/to/binary
+    ```
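+
+Assuming the job file above is saved as `my_openmp_job.sh` (a hypothetical name), it can be
+submitted like this:
+
+```console
+marie@login$ sbatch my_openmp_job.sh
+```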
+
+### MPI Jobs
+
+For MPI-parallel jobs one typically allocates one core per task that has to be started.
+
+!!! warning "MPI libraries"
+
+    There are different MPI libraries on ZIH systems for the different microarchitectures. Thus,
+    you have to compile the binaries specifically for the target architecture and partition. Please
+    refer to the sections [building software](../software/building_software.md) and
+    [module environments](../software/runtime_environment.md#module-environments) for detailed
+    information.
+
+!!! example "Job file for MPI application"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --ntasks=864
+    #SBATCH --time=08:00:00
+    #SBATCH -J Science1
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    srun ./path/to/binary
+    ```
+
+### Multiple Programs Running Simultaneously in a Job
+
+In this short example, our goal is to run four instances of a program concurrently in a **single**
+batch script. Of course, we could also start a batch script four times with `sbatch`, but this is
+not what we want to do here. Please have a look at
+[this subsection](#running-multiple-gpu-applications-simultaneously-in-a-batch-job)
+in case you intend to run GPU programs simultaneously in a **single** job.
+
+!!! example " "
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --ntasks=4
+    #SBATCH --cpus-per-task=1
+    #SBATCH --time=01:00:00
+    #SBATCH -J PseudoParallelJobs
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    # The following sleep command was reported to fix warnings/errors with srun by users (feel free to uncomment).
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    echo "Waiting for parallel job steps to complete..."
+    wait
+    echo "All parallel job steps completed!"
+    ```
+
+## Requesting GPUs
+
+Slurm will allocate one or more GPUs for your job if requested. Please note that GPUs are only
+available in certain partitions, like `gpu2`, `gpu3` or `gpu2-interactive`. The option
+for `sbatch/srun` in this case is `--gres=gpu:[NUM_PER_NODE]` (where `NUM_PER_NODE` can be `1`, `2` or
+`4`, meaning that one, two or four of the GPUs per node will be used for the job).
+
+!!! example "Job file to request a GPU"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --nodes=2              # request 2 nodes
+    #SBATCH --mincpus=1            # allocate one task per node...
+    #SBATCH --ntasks=2             # ...which means 2 tasks in total (see note below)
+    #SBATCH --cpus-per-task=6      # use 6 threads per task
+    #SBATCH --gres=gpu:1           # use 1 GPU per node (i.e. use one GPU per task)
+    #SBATCH --time=01:00:00        # run for 1 hour
+    #SBATCH -A Project1            # account CPU time to Project1
+
+    srun ./your/cuda/application   # start you application (probably requires MPI to use both nodes)
+    ```
+
+Please be aware that the partitions `gpu`, `gpu1` and `gpu2` can only be used for non-interactive
+jobs which are submitted by `sbatch`.  Interactive jobs (`salloc`, `srun`) will have to use the
+partition `gpu-interactive`. Slurm will automatically select the right partition if the partition
+parameter `-p, --partition` is omitted.
+
+!!! note
+
+    Due to an unresolved issue concerning the Slurm job scheduling behavior, it is currently not
+    practical to use `--ntasks-per-node` together with GPU jobs. If you want to use multiple nodes,
+    please use the parameters `--ntasks` and `--mincpus` instead. The value of `mincpus`*`nodes`
+    has to equal `ntasks` in this case.
+
+### Limitations of GPU Job Allocations
+
+The number of cores per node that are currently allowed to be allocated for GPU jobs is limited
+depending on how many GPUs are being requested. On the K80 nodes, you may only request up to 6
+cores per requested GPU (8 per GPU on the K20X nodes). This is because we do not want GPUs to
+remain unusable due to all cores on a node being used by a single job which does not, at the same
+time, request all GPUs.
+
+E.g., if you specify `--gres=gpu:2`, your total number of cores per node (meaning:
+`ntasks`*`cpus-per-task`) may not exceed 12 (on the K80 nodes).
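+
+A request that stays within this limit might look like the following sketch (2 GPUs x 6 cores per
+GPU = 12 cores per node on the K80 nodes):
+
+```bash
+#SBATCH --gres=gpu:2          # two GPUs per node
+#SBATCH --ntasks=2            # one task per GPU
+#SBATCH --cpus-per-task=6     # 2 x 6 = 12 cores in total, i.e., 6 cores per requested GPU
+```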
+
+Note that this also has implications for the use of the `--exclusive` parameter. Since this sets the
+number of allocated cores to 24 (or 16 on the K20X nodes), you also **must** request all four GPUs
+by specifying `--gres=gpu:4`, otherwise your job will not start. In the case of `--exclusive`, it won't
+be denied on submission, because this is evaluated in a later scheduling step. Jobs that directly
+request too many cores per GPU will be denied with the error message:
+
+```console
+Batch job submission failed: Requested node configuration is not available
+```
+
+### Running Multiple GPU Applications Simultaneously in a Batch Job
+
+Our starting point is a (serial) program that needs a single GPU and four CPU cores to perform its
+task (e.g. TensorFlow). The following batch script shows how to run such a job on the partition `ml`.
+
+!!! example
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --ntasks=1
+    #SBATCH --cpus-per-task=4
+    #SBATCH --gres=gpu:1
+    #SBATCH --gpus-per-task=1
+    #SBATCH --time=01:00:00
+    #SBATCH --mem-per-cpu=1443
+    #SBATCH --partition=ml
+
+    srun some-gpu-application
+    ```
+
+When `srun` is used within a submission script, it inherits parameters from `sbatch`, including
+`--ntasks=1`, `--cpus-per-task=4`, etc. So we actually implicitly run the following:
+
+```bash
+srun --ntasks=1 --cpus-per-task=4 ... --partition=ml some-gpu-application
+```
+
+Now, our goal is to run four instances of this program concurrently in a single batch script. Of
+course we could also start the above script multiple times with `sbatch`, but this is not what we want
+to do here.
+
+#### Solution
+
+In order to run multiple programs concurrently in a single batch script/allocation we have to do
+three things:
+
+1. Allocate enough resources to accommodate multiple instances of our program. This can be achieved
+   with an appropriate batch script header (see below).
+1. Start job steps with `srun` as background processes. This is achieved by adding an ampersand
+   at the end of the `srun` command.
+1. Make sure that each background process gets its private resources. We need to set the resource
+   fraction needed for a single run in the corresponding `srun` command. The total aggregated
+   resources of all job steps must fit in the allocation specified in the batch script header.
+   Additionally, the option `--exclusive` is needed to make sure that each job step is provided
+   with its private set of CPU and GPU resources. The following example shows how four independent
+   instances of the same program can be run concurrently from a single batch script. Each instance
+   (task) is equipped with 4 CPUs (cores) and one GPU.
+
+!!! example "Job file simultaneously executing four independent instances of the same program"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --ntasks=4
+    #SBATCH --cpus-per-task=4
+    #SBATCH --gres=gpu:4
+    #SBATCH --gpus-per-task=1
+    #SBATCH --time=01:00:00
+    #SBATCH --mem-per-cpu=1443
+    #SBATCH --partition=ml
+
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+
+    echo "Waiting for all job steps to complete..."
+    wait
+    echo "All jobs completed!"
+    ```
+
+In practice it is possible to leave out resource options in `srun` that do not differ from the ones
+inherited from the surrounding `sbatch` context. The following line would be sufficient to do the
+job in this example:
+
+```bash
+srun --exclusive --gres=gpu:1 --ntasks=1 some-gpu-application &
+```
+
+Yet, it adds some extra safety to leave them in, enabling the Slurm batch system to complain if not
+enough resources in total were specified in the header of the batch script.
+
+## Exclusive Jobs for Benchmarking
+
+Jobs on ZIH systems run, by default, in shared mode, meaning that multiple jobs (from different
+users) can run at the same time on the same compute node. Sometimes, this behavior is not desired
+(e.g., for benchmarking purposes). The Slurm parameter `--exclusive` requests exclusive usage of
+resources.
+
+Setting `--exclusive` **only** makes sure that there will be **no other jobs running on your nodes**.
+It does not, however, mean that you automatically get access to all the resources which the node
+might provide without explicitly requesting them, e.g. you still have to request a GPU via the
+generic resources parameter (`gres`) to run on the partitions with GPU, or you still have to
+request all cores of a node if you need them. CPU cores can either be used for a task
+(`--ntasks`) or for multi-threading within the same task (`--cpus-per-task`). Since those two
+options are semantically different (e.g., the former will influence how many MPI processes will be
+spawned by `srun` whereas the latter does not), Slurm cannot determine automatically which of the
+two you might want to use. Since we use cgroups for separation of jobs, your job is not allowed to
+use more resources than requested.
+
+If you just want to use all available cores in a node, you have to specify how Slurm should organize
+them, like with `-p haswell -c 24` or `-p haswell --ntasks-per-node=24`.
+
+Here is a short example to ensure that a benchmark is not spoiled by other jobs, even if it doesn't
+use up all resources in the nodes:
+
+!!! example "Exclusive resources"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH -p haswell
+    #SBATCH --nodes=2
+    #SBATCH --ntasks-per-node=2
+    #SBATCH --cpus-per-task=8
+    #SBATCH --exclusive    # ensure that nobody spoils my measurement on 2 x 2 x 8 cores
+    #SBATCH --time=00:10:00
+    #SBATCH -J Benchmark
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    srun ./my_benchmark
+    ```
+
+## Array Jobs
+
+Array jobs can be used to create a sequence of jobs that share the same executable and resource
+requirements, but have different input files, to be submitted, controlled, and monitored as a single
+unit. The option is `-a, --array=<indexes>` where the parameter `indexes` specifies the array
+indices. The following specifications are possible:
+
+* comma separated list, e.g., `--array=0,1,2,17`,
+* range based, e.g., `--array=0-42`,
+* step based, e.g., `--array=0-15:4`,
+* mix of comma separated and range base, e.g., `--array=0,1,2,16-42`.
+
+A maximum number of simultaneously running tasks from the job array may be specified using the `%`
+separator. The specification `--array=0-23%8` limits the number of simultaneously running tasks from
+this job array to 8.
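+
+The array specification can also be passed on the command line, for example (shown with a
+hypothetical job file name):
+
+```console
+marie@login$ sbatch --array=0-23%8 array_job.sh
+```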
+
+Within the job you can read the environment variables `SLURM_ARRAY_JOB_ID`, which is set to the
+first job ID of the array, and `SLURM_ARRAY_TASK_ID`, which is set individually to the index of
+each array task.
+
+Within an array job, you can use `%a` and `%A` in addition to `%j` and `%N` to make the output file
+name specific to the job:
+
+* `%A` will be replaced by the value of `SLURM_ARRAY_JOB_ID`
+* `%a` will be replaced by the value of `SLURM_ARRAY_TASK_ID`
+
+!!! example "Job file using job arrays"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --array 0-9
+    #SBATCH -o arraytest-%A_%a.out
+    #SBATCH -e arraytest-%A_%a.err
+    #SBATCH --ntasks=864
+    #SBATCH --time=08:00:00
+    #SBATCH -J Science1
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    echo "Hi, I am step $SLURM_ARRAY_TASK_ID in this array job $SLURM_ARRAY_JOB_ID"
+    ```
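+
+Assuming the job file above is saved as `array_job.sh` (the file name and the job ID `123456` below
+are only placeholders), a single submission creates ten array tasks, and the `%A`/`%a` placeholders
+give each task its own output and error files:
+
+```console
+marie@login$ sbatch array_job.sh
+Submitted batch job 123456
+marie@login$ ls arraytest-123456_*.out
+arraytest-123456_0.out  arraytest-123456_1.out  [...]  arraytest-123456_9.out
+```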
+
+!!! note
+
+    If you submit a large number of jobs doing heavy I/O in the Lustre filesystems, you should
+    limit the number of your simultaneously running jobs with a second parameter like:
+
+    ```Bash
+    #SBATCH --array=1-100000%100
+    ```
+
+Please refer to the [Slurm documentation](https://slurm.schedmd.com/sbatch.html) for further details.
+
+## Chain Jobs
+
+You can use chain jobs to create dependencies between jobs. This is often the case if a job relies
+on the result of one or more preceding jobs. Chain jobs can also be used if the runtime limit of the
+batch queues is not sufficient for your job. Slurm has an option
+`-d, --dependency=<dependency_list>` that allows you to specify that a job may only start after
+another job has finished.
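+
+For example, assuming a first job has been submitted and reported the (placeholder) job ID
+`123456`, a second job can be made to wait for it like this:
+
+```console
+marie@login$ sbatch job1.sh
+Submitted batch job 123456
+marie@login$ sbatch --dependency=afterany:123456 job2.sh
+```
+
+Here, `afterany` lets the second job start once the first one has terminated in any way; use
+`afterok` instead if it should only start after successful completion. The job files `job1.sh` and
+`job2.sh` are placeholders for your own batch scripts.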
+
+Here is an example of what a chain job can look like; it submits 4 jobs (described in a job file)
+that will be executed one after another with different numbers of CPUs:
+
+!!! example "Script to submit jobs with dependencies"
+
+    ```Bash
+    #!/bin/bash
+    TASK_NUMBERS="1 2 4 8"
+    DEPENDENCY=""
+    JOB_FILE="myjob.slurm"
+
+    for TASKS in $TASK_NUMBERS ; do
+        JOB_CMD="sbatch --ntasks=$TASKS"
+        if [ -n "$DEPENDENCY" ] ; then
+            JOB_CMD="$JOB_CMD --dependency afterany:$DEPENDENCY"
+        fi
+        JOB_CMD="$JOB_CMD $JOB_FILE"
+        echo -n "Running command: $JOB_CMD  "
+        OUT=$($JOB_CMD)
+        echo "Result: $OUT"
+        # extract the job ID from the "Submitted batch job <ID>" output of sbatch
+        DEPENDENCY=$(echo $OUT | awk '{print $4}')
+    done
+    ```
+
+## Array-Job with Afterok-Dependency and Datamover Usage
+
+This is a *todo*
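+
+As a starting point, here is a minimal sketch of what such a workflow could look like. The file
+names, the number of array tasks, and the target path of the copy are only placeholders; the
+Datamover command `dtcp` is described in the data transfer section and has to be adapted to your
+project.
+
+!!! example "Sketch: array job with a dependent data staging job"
+
+    ```Bash
+    #!/bin/bash
+    # submit the array job and capture only its job ID (--parsable)
+    ARRAY_ID=$(sbatch --parsable --array=0-9 compute_step.sh)
+
+    # this job starts only if *all* array tasks finished successfully (afterok)
+    sbatch --dependency=afterok:${ARRAY_ID} stage_out.sh
+    ```
+
+    In `stage_out.sh`, the results could then be copied via the Datamover, e.g. with
+    `dtcp -r /scratch/ws/0/marie-number_crunch/results /warm_archive/ws/marie-archive/`.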
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
new file mode 100644
index 0000000000000000000000000000000000000000..273a87710602b62feb97c342335b4c44f30ad09e
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
@@ -0,0 +1,62 @@
+# Job Profiling
+
+Slurm offers the option to gather profiling data from every task/node of the job. Analyzing this
+data allows for a better understanding of your jobs in terms of elapsed time, runtime, I/O
+behavior, and more.
+
+The following data can be gathered:
+
+* Task data, such as CPU frequency, CPU utilization, memory consumption (RSS and VMSize), I/O
+* Energy consumption of the nodes
+* Infiniband data (currently deactivated)
+* Lustre filesystem data (currently deactivated)
+
+The data is sampled at a fixed rate (e.g. every 5 seconds) and is stored in an HDF5 file.
+
+!!! note "Data hygiene"
+
+    Please be aware that the profiling data may be quite large, depending on job size, runtime, and
+    sampling rate. Always remove the local profiles from `/lustre/scratch2/profiling/${USER}`,
+    either by running `sh5util` as shown below or by simply removing those files.
+
+## Examples
+
+The following examples of `srun` profiling command lines are meant to replace the current `srun`
+line within your job file.
+
+??? example "Create profiling data"
+
+    (`--acctg-freq` is the sampling rate in seconds)
+
+    ```console
+    # Energy and task profiling
+    srun --profile=All --acctg-freq=5,energy=5 -n 32 ./a.out
+    # Task profiling data only
+    srun --profile=All --acctg-freq=5 -n 32 ./a.out
+    ```
+
+??? example "Merge the node local files"
+
+    ... in `/lustre/scratch2/profiling/${USER}` into a single file.
+
+    ```console
+    # without the -o option, the output file defaults to job_$JOBID.h5
+    sh5util -j <JOBID> -o profile.h5
+    # in jobscripts or in interactive sessions (via salloc):
+    sh5util -j ${SLURM_JOBID} -o profile.h5
+    ```
+
+??? example "View data"
+
+    ```console
+    marie@login$ module load HDFView
+    marie@login$ hdfview.sh profile.h5
+    ```
+
+![HDFView Memory](misc/hdfview_memory.png)
+{: align="center"}
+
+More information about profiling with Slurm:
+
+* [Slurm Profiling](https://slurm.schedmd.com/hdf5_profile_user_guide.html)
+* [`sh5util`](https://slurm.schedmd.com/sh5util.html)
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
deleted file mode 100644
index 3625bf4503d4b41d73fc7a9de6c02dabc3d3feec..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
+++ /dev/null
@@ -1,210 +0,0 @@
-# Taurus
-
-## Information about the Hardware
-
-Detailed information on the current HPC hardware can be found
-[here.](../jobs_and_resources/hardware_taurus.md)
-
-## Applying for Access to the System
-
-Project and login application forms for taurus are available
-[here](../access/overview.md).
-
-## Login to the System
-
-Login to the system is available via ssh at taurus.hrsk.tu-dresden.de.
-There are several login nodes (internally called tauruslogin3 to
-tauruslogin6). Currently, if you use taurus.hrsk.tu-dresden.de, you will
-be placed on tauruslogin5. It might be a good idea to give the other
-login nodes a try if the load on tauruslogin5 is rather high (there will
-once again be load balancer soon, but at the moment, there is none).
-
-Please note that if you store data on the local disk (e.g. under /tmp),
-it will be on only one of the three nodes. If you relogin and the data
-is not there, you are probably on another node.
-
-You can find an list of fingerprints [here](../access/key_fingerprints.md).
-
-## Transferring Data from/to Taurus
-
-taurus has two specialized data transfer nodes. Both nodes are
-accessible via `taurusexport.hrsk.tu-dresden.de`. Currently, only rsync,
-scp and sftp to these nodes will work. A login via SSH is not possible
-as these nodes are dedicated to data transfers.
-
-These nodes are located behind a firewall. By default, they are only
-accessible from IP addresses from with the Campus of the TU Dresden.
-External IP addresses can be enabled upon request. These requests should
-be send via eMail to `servicedesk@tu-dresden.de` and mention the IP
-address range (or node names), the desired protocol and the time frame
-that the firewall needs to be open.
-
-We are open to discuss options to export the data in the scratch file
-system via CIFS or other protocols. If you have a need for this, please
-contact the Service Desk as well.
-
-**Phase 2:** The nodes taurusexport\[3,4\] provide access to the
-`/scratch` file system of the second phase.
-
-## Compiling Parallel Applications
-
-You have to explicitly load a compiler module and an MPI module on
-Taurus. Eg. with `module load GCC OpenMPI`. ( [read more about
-Modules](../software/runtime_environment.md), **todo link** (read more about
-Compilers)(Compendium.Compilers))
-
-Use the wrapper commands like e.g. `mpicc` (`mpiicc` for intel),
-`mpicxx` (`mpiicpc`) or `mpif90` (`mpiifort`) to compile MPI source
-code. To reveal the command lines behind the wrappers, use the option
-`-show`.
-
-For running your code, you have to load the same compiler and MPI module
-as for compiling the program. Please follow the following guiedlines to
-run your parallel program using the batch system.
-
-## Batch System
-
-Applications on an HPC system can not be run on the login node. They
-have to be submitted to compute nodes with dedicated resources for the
-user's job. Normally a job can be submitted with these data:
-
--   number of CPU cores,
--   requested CPU cores have to belong on one node (OpenMP programs) or
-    can distributed (MPI),
--   memory per process,
--   maximum wall clock time (after reaching this limit the process is
-    killed automatically),
--   files for redirection of output and error messages,
--   executable and command line parameters.
-
-The batch system on Taurus is Slurm. If you are migrating from LSF
-(deimos, mars, atlas), the biggest difference is that Slurm has no
-notion of batch queues any more.
-
--   [General information on the Slurm batch system](slurm.md)
--   Slurm also provides process-level and node-level [profiling of
-    jobs](slurm.md#Job_Profiling)
-
-### Partitions
-
-Please note that the islands are also present as partitions for the
-batch systems. They are called
-
--   romeo (Island 7 - AMD Rome CPUs)
--   julia (large SMP machine)
--   haswell (Islands 4 to 6 - Haswell CPUs)
--   gpu (Island 2 - GPUs)
-    -   gpu2 (K80X)
--   smp2 (SMP Nodes)
-
-**Note:** usually you don't have to specify a partition explicitly with
-the parameter -p, because SLURM will automatically select a suitable
-partition depending on your memory and gres requirements.
-
-### Run-time Limits
-
-**Run-time limits are enforced**. This means, a job will be canceled as
-soon as it exceeds its requested limit. At Taurus, the maximum run time
-is 7 days.
-
-Shorter jobs come with multiple advantages:\<img alt="part.png"
-height="117" src="%ATTACHURL%/part.png" style="float: right;"
-title="part.png" width="284" />
-
--   lower risk of loss of computing time,
--   shorter waiting time for reservations,
--   higher job fluctuation; thus, jobs with high priorities may start
-    faster.
-
-To bring down the percentage of long running jobs we restrict the number
-of cores with jobs longer than 2 days to approximately 50% and with jobs
-longer than 24 to 75% of the total number of cores. (These numbers are
-subject to changes.) As best practice we advise a run time of about 8h.
-
-Please always try to make a good estimation of your needed time limit.
-For this, you can use a command line like this to compare the requested
-timelimit with the elapsed time for your completed jobs that started
-after a given date:
-
-    sacct -X -S 2021-01-01 -E now --format=start,JobID,jobname,elapsed,timelimit -s COMPLETED
-
-Instead of running one long job, you should split it up into a chain
-job. Even applications that are not capable of chreckpoint/restart can
-be adapted. The HOWTO can be found [here](../jobs_and_resources/checkpoint_restart.md),
-
-### Memory Limits
-
-**Memory limits are enforced.** This means that jobs which exceed their
-per-node memory limit will be killed automatically by the batch system.
-Memory requirements for your job can be specified via the *sbatch/srun*
-parameters: **--mem-per-cpu=\<MB>** or **--mem=\<MB>** (which is "memory
-per node"). The **default limit** is **300 MB** per cpu.
-
-Taurus has sets of nodes with a different amount of installed memory
-which affect where your job may be run. To achieve the shortest possible
-waiting time for your jobs, you should be aware of the limits shown in
-the following table.
-
-| Partition          | Nodes                                    | # Nodes | Cores per Node  | Avail. Memory per Core | Avail. Memory per Node | GPUs per node     |
-|:-------------------|:-----------------------------------------|:--------|:----------------|:-----------------------|:-----------------------|:------------------|
-| `haswell64`        | `taurusi[4001-4104,5001-5612,6001-6612]` | `1328`  | `24`            | `2541 MB`              | `61000 MB`             | `-`               |
-| `haswell128`       | `taurusi[4105-4188]`                     | `84`    | `24`            | `5250 MB`              | `126000 MB`            | `-`               |
-| `haswell256`       | `taurusi[4189-4232]`                     | `44`    | `24`            | `10583 MB`             | `254000 MB`            | `-`               |
-| `broadwell`        | `taurusi[4233-4264]`                     | `32`    | `28`            | `2214 MB`              | `62000 MB`             | `-`               |
-| `smp2`             | `taurussmp[3-7]`                         | `5`     | `56`            | `36500 MB`             | `2044000 MB`           | `-`               |
-| `gpu2`             | `taurusi[2045-2106]`                     | `62`    | `24`            | `2583 MB`              | `62000 MB`             | `4 (2 dual GPUs)` |
-| `gpu2-interactive` | `taurusi[2045-2108]`                     | `64`    | `24`            | `2583 MB`              | `62000 MB`             | `4 (2 dual GPUs)` |
-| `hpdlf`            | `taurusa[3-16]`                          | `14`    | `12`            | `7916 MB`              | `95000 MB`             | `3`               |
-| `ml`               | `taurusml[1-32]`                         | `32`    | `44 (HT: 176)`  | `1443 MB*`             | `254000 MB`            | `6`               |
-| `romeo`            | `taurusi[7001-7192]`                     | `192`   | `128 (HT: 256)` | `1972 MB*`             | `505000 MB`            | `-`               |
-| `julia`            | `taurussmp8`                             | `1`     | `896`           | `27343 MB*`            | `49000000 MB`          | `-`               |
-
-\* note that the ML nodes have 4way-SMT, so for every physical core
-allocated (e.g., with SLURM_HINT=nomultithread), you will always get
-4\*1443MB because the memory of the other threads is allocated
-implicitly, too.
-
-### Submission of Parallel Jobs
-
-To run MPI jobs ensure that the same MPI module is loaded as during
-compile-time. In doubt, check you loaded modules with `module list`. If
-your code has been compiled with the standard `bullxmpi` installation,
-you can load the module via `module load bullxmpi`. Alternative MPI
-libraries (`intelmpi`, `openmpi`) are also available.
-
-Please pay attention to the messages you get loading the module. They
-are more up-to-date than this manual.
-
-## GPUs
-
-Island 2 of taurus contains a total of 128 NVIDIA Tesla K80 (dual) GPUs
-in 64 nodes.
-
-More information on how to program applications for GPUs can be found
-[GPU Programming](GPU Programming).
-
-The following software modules on taurus offer GPU support:
-
--   `CUDA` : The NVIDIA CUDA compilers
--   `PGI` : The PGI compilers with OpenACC support
-
-## Hardware for Deep Learning (HPDLF)
-
-The partition hpdlf contains 14 servers. Each of them has:
-
--   2 sockets CPU E5-2603 v4 (1.70GHz) with 6 cores each,
--   3 consumer GPU cards NVIDIA GTX1080,
--   96 GB RAM.
-
-## Energy Measurement
-
-Taurus contains sophisticated energy measurement instrumentation.
-Especially HDEEM is available on the haswell nodes of Phase II. More
-detailed information can be found at
-**todo link** (EnergyMeasurement)(EnergyMeasurement).
-
-## Low level optimizations
-
-x86 processsors provide registers that can be used for optimizations and
-performance monitoring. Taurus provides you access to such features via
-the **todo link** (X86Adapt)(X86Adapt) software infrastructure.
diff --git a/doc.zih.tu-dresden.de/docs/legal_notice.md b/doc.zih.tu-dresden.de/docs/legal_notice.md
index 3412a3a0a511d26d1a8bf8e730161622fb7930d9..a5e187ee3f5eb9937e8eb01c33eed182fb2c423d 100644
--- a/doc.zih.tu-dresden.de/docs/legal_notice.md
+++ b/doc.zih.tu-dresden.de/docs/legal_notice.md
@@ -1,8 +1,10 @@
-# Legal Notice / Impressum
+# Legal Notice
+
+## Impressum
 
 Es gilt das [Impressum der TU Dresden](https://tu-dresden.de/impressum) mit folgenden Änderungen:
 
-## Ansprechpartner/Betreiber:
+### Ansprechpartner/Betreiber:
 
 Technische Universität Dresden
 Zentrum für Informationsdienste und Hochleistungsrechnen
@@ -12,7 +14,7 @@ Tel.: +49 351 463-40000
 Fax: +49 351 463-42328
 E-Mail: servicedesk@tu-dresden.de
 
-## Konzeption, Technische Umsetzung, Anbieter:
+### Konzeption, Technische Umsetzung, Anbieter:
 
 Technische Universität Dresden
 Zentrum für Informationsdienste und Hochleistungsrechnen
@@ -22,3 +24,10 @@ Prof. Dr. Wolfgang E. Nagel
 Tel.: +49 351 463-35450
 Fax: +49 351 463-37773
 E-Mail: zih@tu-dresden.de
+
+## License
+
+This documentation and the repository have two licenses:
+
+* All documentation is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
+* All software components are licensed under MIT license.
diff --git a/doc.zih.tu-dresden.de/docs/misc/HPC-Introduction.pdf b/doc.zih.tu-dresden.de/docs/misc/HPC-Introduction.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71d47f04b75004fad2b9fd7181051c2beae4e2fe
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/misc/HPC-Introduction.pdf differ
diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
similarity index 79%
rename from doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
rename to doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 9db357468d91c5c84850add970b9fc6f0d2007ad..9bc564d05a310005edc1d5564549db8da08ee415 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -1,4 +1,4 @@
-# Big Data Frameworks: Apache Spark, Apache Flink, Apache Hadoop
+# Big Data Frameworks: Apache Spark
 
 !!! note
 
@@ -6,15 +6,15 @@
 
 [Apache Spark](https://spark.apache.org/), [Apache Flink](https://flink.apache.org/)
 and [Apache Hadoop](https://hadoop.apache.org/) are frameworks for processing and integrating
-Big Data. These frameworks are also offered as software [modules](modules.md) on both `ml` and
-`scs5` partition. You can check module versions and availability with the command
+Big Data. These frameworks are also offered as software [modules](modules.md) in both `ml` and
+`scs5` software environments. You can check module versions and availability with the command
 
 ```console
-marie@login$ module av Spark
+marie@login$ module avail Spark
 ```
 
 The **aim** of this page is to introduce users on how to start working with
-these frameworks on ZIH systems, e. g. on the [HPC-DA](../jobs_and_resources/hpcda.md) system.
+these frameworks on ZIH systems.
 
 **Prerequisites:** To work with the frameworks, you need [access](../access/ssh_login.md) to ZIH
 systems and basic knowledge about data analysis and the batch system
@@ -46,20 +46,20 @@ as via [Jupyter notebook](#jupyter-notebook). All three ways are outlined in the
 
 ### Default Configuration
 
-The Spark module is available for both `scs5` and `ml` partitions.
+The Spark module is available in both `scs5` and `ml` environments.
 Thus, Spark can be executed using different CPU architectures, e.g., Haswell and Power9.
 
 Let us assume that two nodes should be used for the computation. Use a
 `srun` command similar to the following to start an interactive session
-using the Haswell partition. The following code snippet shows a job submission
-to Haswell nodes with an allocation of two nodes with 60 GB main memory
+using the partition `haswell`. The following code snippet shows a job submission
+to `haswell` nodes with an allocation of two nodes with 60 GB main memory
 exclusively for one hour:
 
 ```console
 marie@login$ srun --partition=haswell -N 2 --mem=60g --exclusive --time=01:00:00 --pty bash -l
 ```
 
-The command for different resource allocation on the `ml` partition is
+The command for different resource allocation on the partition `ml` is
 similar, e. g. for a job submission to `ml` nodes with an allocation of one
 node, one task per node, two CPUs per task, one GPU per node, with 10000 MB for one hour:
 
@@ -94,7 +94,7 @@ The Spark processes should now be set up and you can start your
 application, e. g.:
 
 ```console
-marie@compute$ spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.4.jar 1000
+marie@compute$ spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.1.jar 1000
 ```
 
 !!! warning
@@ -127,7 +127,7 @@ in an interactive job with:
 marie@compute$ source framework-configure.sh spark my-config-template
 ```
 
-### Using Hadoop Distributed File System (HDFS)
+### Using Hadoop Distributed Filesystem (HDFS)
 
 If you want to use Spark and HDFS together (or in general more than one
 framework), a scheme similar to the following can be used:
@@ -156,43 +156,34 @@ Please use a [batch job](../jobs_and_resources/slurm.md) similar to
 
 There are two general options on how to work with Jupyter notebooks:
 There is [JupyterHub](../access/jupyterhub.md), where you can simply
-run your Jupyter notebook on HPC nodes (the preferable way). Also, you
-can run a remote Jupyter server manually within a GPU job using
-the modules and packages you need. You can find the manual server
-setup [here](deep_learning.md).
+run your Jupyter notebook on HPC nodes (the preferable way).
 
 ### Preparation
 
 If you want to run Spark in Jupyter notebooks, you have to prepare it first. This is comparable
-to the [description for custom environments](../access/jupyterhub.md#conda-environment).
+to [normal Python virtual environments](../software/python_virtual_environments.md#python-virtual-environment).
 You start with an allocation:
 
 ```console
 marie@login$ srun --pty -n 1 -c 2 --mem-per-cpu=2500 -t 01:00:00 bash -l
 ```
 
-When a node is allocated, install the required package with Anaconda:
+When a node is allocated, install the required packages:
 
 ```console
-marie@compute$ module load Anaconda3
 marie@compute$ cd
-marie@compute$ mkdir user-kernel
-marie@compute$ conda create --prefix $HOME/user-kernel/haswell-py3.6-spark python=3.6
-Collecting package metadata: done
-Solving environment: done [...]
-
-marie@compute$ conda activate $HOME/user-kernel/haswell-py3.6-spark
-marie@compute$ conda install ipykernel
-Collecting package metadata: done
-Solving environment: done [...]
-
-marie@compute$ python -m ipykernel install --user --name haswell-py3.6-spark --display-name="haswell-py3.6-spark"
-Installed kernelspec haswell-py3.6-spark in [...]
-
-marie@compute$ conda install -c conda-forge findspark
-marie@compute$ conda install pyspark
-
-marie@compute$ conda deactivate
+marie@compute$ mkdir jupyter-kernel
+marie@compute$ virtualenv --system-site-packages jupyter-kernel/env  #Create virtual environment
+[...]
+marie@compute$ source jupyter-kernel/env/bin/activate    #Activate virtual environment.
+marie@compute$ pip install ipykernel
+[...]
+marie@compute$ python -m ipykernel install --user --name haswell-py3.7-spark --display-name="haswell-py3.7-spark"
+Installed kernelspec haswell-py3.7-spark in [...]
+
+marie@compute$ pip install findspark
+
+marie@compute$ deactivate
 ```
 
 You are now ready to spawn a notebook with Spark.
@@ -206,7 +197,7 @@ to the field "Preload modules" and select one of the Spark modules.
 When your Jupyter instance is started, check whether the kernel that
 you created in the preparation phase (see above) is shown in the top
 right corner of the notebook. If it is not already selected, select the
-kernel `haswell-py3.6-spark`. Then, you can set up Spark. Since the setup
+kernel `haswell-py3.7-spark`. Then, you can set up Spark. Since the setup
 in the notebook requires more steps than in an interactive session, we
 have created an example notebook that you can use as a starting point
 for convenience: [SparkExample.ipynb](misc/SparkExample.ipynb)
@@ -214,7 +205,7 @@ for convenience: [SparkExample.ipynb](misc/SparkExample.ipynb)
 !!! note
 
     You could work with simple examples in your home directory but according to the
-    [storage concept](../data_lifecycle/hpc_storage_concept2019.md)
+    [storage concept](../data_lifecycle/overview.md)
     **please use [workspaces](../data_lifecycle/workspaces.md) for
     your study and work projects**. For this reason, you have to use
     advanced options of Jupyterhub and put "/" in "Workspace scope" field.
diff --git a/doc.zih.tu-dresden.de/docs/software/compilers.md b/doc.zih.tu-dresden.de/docs/software/compilers.md
index 4292602e02e77bf01ad04c8c01643aadcc8c580a..7bb9c3c4b9f3a65151d5292ff587decd306e35c9 100644
--- a/doc.zih.tu-dresden.de/docs/software/compilers.md
+++ b/doc.zih.tu-dresden.de/docs/software/compilers.md
@@ -55,10 +55,10 @@ pages or use the option `--help` to list all options of the compiler.
 | `-fprofile-use`      | `-prof-use`  | `-Mpfo`     | use profile data for optimization      |
 
 !!! note
-    We can not generally give advice as to which option should be used.
-    To gain maximum performance please test the compilers and a few combinations of
-    optimization flags.
-    In case of doubt, you can also contact [HPC support](../support.md) and ask the staff for help.
+
+    We cannot generally give advice as to which option should be used. To gain maximum performance,
+    please test the compilers and a few combinations of optimization flags. In case of doubt, you
+    can also contact [HPC support](../support/support.md) and ask the staff for help.
 
 ### Architecture-specific Optimizations
 
diff --git a/doc.zih.tu-dresden.de/docs/software/containers.md b/doc.zih.tu-dresden.de/docs/software/containers.md
index a67a4a986881ffe09a16582adfeda719e6f90ccd..93c2762667be1e5addecff38c3cf38d08ac60d7e 100644
--- a/doc.zih.tu-dresden.de/docs/software/containers.md
+++ b/doc.zih.tu-dresden.de/docs/software/containers.md
@@ -2,94 +2,112 @@
 
 [Containerization](https://www.ibm.com/cloud/learn/containerization) means encapsulating or packaging up
 software code and all its dependencies to run uniformly and consistently on any infrastructure. On
-Taurus [Singularity](https://sylabs.io/) used as a standard container solution. Singularity enables
-users to have full control of their environment. This means that you don’t have to ask an HPC
-support to install anything for you - you can put it in a Singularity container and run! As opposed
-to Docker (the most famous container solution), Singularity is much more suited to being used in an
-HPC environment and more efficient in many cases. Docker containers can easily be used in
-Singularity.  Information about the use of Singularity on Taurus can be found [here]**todo link**.
-
-In some cases using Singularity requires a Linux machine with root privileges (e.g. using the ml
-partition), the same architecture and a compatible kernel. For many reasons, users on Taurus cannot
-be granted root permissions. A solution is a Virtual Machine (VM) on the ml partition which allows
-users to gain root permissions in an isolated environment. There are two main options on how to work
-with VM on Taurus:
-
-1. [VM tools]**todo link**. Automative algorithms for using virtual machines;
-1. [Manual method]**todo link**. It required more operations but gives you more flexibility and reliability.
+ZIH systems, [Singularity](https://sylabs.io/) is used as a standard container solution. Singularity
+enables users to have full control of their environment. This means that you don’t have to ask the
+HPC support to install anything for you - you can put it in a Singularity container and run! As
+opposed to Docker (the most famous container solution), Singularity is much more suited to being
+used in an HPC environment and more efficient in many cases. Docker containers can easily be used in
+Singularity. Information about the use of Singularity on ZIH systems can be found on this page.
+
+In some cases using Singularity requires a Linux machine with root privileges (e.g. using the
+partition `ml`), the same architecture and a compatible kernel. For many reasons, users on ZIH
+systems cannot be granted root permissions. A solution is a Virtual Machine (VM) on the partition
+`ml` which allows users to gain root permissions in an isolated environment.  There are two main
+options on how to work with Virtual Machines on ZIH systems:
+
+1. [VM tools](virtual_machines_tools.md): Automated tools for using virtual machines;
+1. [Manual method](virtual_machines.md): It requires more operations but gives you more flexibility
+   and reliability.
 
 ## Singularity
 
-If you wish to containerize your workflow/applications, you can use Singularity containers on
-Taurus. As opposed to Docker, this solution is much more suited to being used in an HPC environment.
-Existing Docker containers can easily be converted.
+If you wish to containerize your workflow and/or applications, you can use Singularity containers on
+ZIH systems. As opposed to Docker, this solution is much more suited to being used in an HPC
+environment.
 
-ZIH wiki sites:
+!!! note
 
-- [Example Definitions](singularity_example_definitions.md)
-- [Building Singularity images on Taurus](vm_tools.md)
-- [Hints on Advanced usage](singularity_recipe_hints.md)
+    It is not possible for users to generate new custom containers on ZIH systems directly, because
+    creating a new container requires root privileges.
 
-It is available on Taurus without loading any module.
+However, new containers can be created on your local workstation and moved to ZIH systems for
+execution. Follow the instructions for [locally installing Singularity](#local-installation) and
+[container creation](#container-creation). Moreover, existing Docker containers can easily be
+converted, see [importing a Docker container](#import-a-docker-container).
 
-### Local installation
+If you are already familiar with Singularity, you might be more interested in our
+[Singularity recipes and hints](singularity_recipe_hints.md).
 
-One advantage of containers is that you can create one on a local machine (e.g. your laptop) and
-move it to the HPC system to execute it there. This requires a local installation of singularity.
-The easiest way to do so is:
+### Local Installation
 
-1. Check if go is installed by executing `go version`.  If it is **not**:
+The local installation of Singularity comprises two steps: Make `go` available and then follow the
+instructions from the official documentation to install Singularity.
 
-```Bash
-wget <https://storage.googleapis.com/golang/getgo/installer_linux> && chmod +x
-installer_linux && ./installer_linux && source $HOME/.bash_profile
-```
+1. Check if `go` is installed by executing `go version`.  If it is **not**:
 
-1. Follow the instructions to [install Singularity](https://github.com/sylabs/singularity/blob/master/INSTALL.md#clone-the-repo)
+    ```console
+    marie@local$ wget https://storage.googleapis.com/golang/getgo/installer_linux && chmod +x installer_linux && ./installer_linux && source $HOME/.bash_profile
+    ```
 
-clone the repo
+1. Instructions to
+   [install Singularity](https://github.com/sylabs/singularity/blob/master/INSTALL.md#clone-the-repo)
+   from the official documentation:
 
-```Bash
-mkdir -p ${GOPATH}/src/github.com/sylabs && cd ${GOPATH}/src/github.com/sylabs && git clone <https://github.com/sylabs/singularity.git> && cd
-singularity
-```
+    Clone the repository
 
-Checkout the version you want (see the [Github releases page](https://github.com/sylabs/singularity/releases)
-for available releases), e.g.
+    ```console
+    marie@local$ mkdir -p ${GOPATH}/src/github.com/sylabs
+    marie@local$ cd ${GOPATH}/src/github.com/sylabs
+    marie@local$ git clone https://github.com/sylabs/singularity.git
+    marie@local$ cd singularity
+    ```
 
-```Bash
-git checkout v3.2.1\
-```
+    Checkout the version you want (see the [GitHub releases page](https://github.com/sylabs/singularity/releases)
+    for available releases), e.g.
 
-Build and install
+    ```console
+    marie@local$ git checkout v3.2.1
+    ```
 
-```Bash
-cd ${GOPATH}/src/github.com/sylabs/singularity && ./mconfig && cd ./builddir && make && sudo
-make install
-```
+    Build and install
 
-### Container creation
+    ```console
+    marie@local$ cd ${GOPATH}/src/github.com/sylabs/singularity
+    marie@local$ ./mconfig && cd ./builddir && make
+    marie@local$ sudo make install
+    ```
 
-Since creating a new container requires access to system-level tools and thus root privileges, it is
-not possible for users to generate new custom containers on Taurus directly. You can, however,
-import an existing container from, e.g., Docker.
+### Container Creation
 
-In case you wish to create a new container, you can do so on your own local machine where you have
-the necessary privileges and then simply copy your container file to Taurus and use it there.
+!!! note
 
-This does not work on our **ml** partition, as it uses Power9 as its architecture which is
-different to the x86 architecture in common computers/laptops. For that you can use the
-[VM Tools](vm_tools.md).
+    It is not possible for users to generate new custom containers on ZIH systems directly, because
+    creating a new container requires root privileges.
 
-#### Creating a container
+There are two possibilities:
 
-Creating a container is done by writing a definition file and passing it to
+1. Create a new container on your local workstation (where you have the necessary privileges), and
+   then copy the container file to ZIH systems for execution.
+1. You can, however, import an existing container from, e.g., Docker.
 
-```Bash
-singularity build myContainer.sif myDefinition.def
-```
+Both methods are outlined in the following.
+
+#### New Custom Container
+
+You can create a new custom container on your workstation, if you have root rights.
+
+!!! attention "Respect the micro-architectures"
 
-NOTE: This must be done on a machine (or [VM](virtual_machines.md) with root rights.
+    You cannot create containers for the partition `ml`, as it is based on the Power9
+    micro-architecture, which is different from the x86 architecture in common computers/laptops.
+    For that you can use
+    the [VM Tools](virtual_machines_tools.md).
+
+Creating a container is done by writing a **definition file** and passing it to
+
+```console
+marie@local$ singularity build myContainer.sif <myDefinition.def>
+```
 
 A definition file contains a bootstrap
 [header](https://sylabs.io/guides/3.2/user-guide/definition_files.html#header)
@@ -99,20 +117,26 @@ where you install your software.
 
 The most common approach is to start from an existing docker image from DockerHub. For example, to
 start from an [Ubuntu image](https://hub.docker.com/_/ubuntu) copy the following into a new file
-called ubuntu.def (or any other filename of your choosing)
+called `ubuntu.def` (or any other filename of your choice)
 
-```Bash
-Bootstrap: docker<br />From: ubuntu:trusty<br /><br />%runscript<br />   echo "This is what happens when you run the container..."<br /><br />%post<br />    apt-get install g++
+```bash
+Bootstrap: docker
+From: ubuntu:trusty
+
+%runscript
+    echo "This is what happens when you run the container..."
+
+%post
+    apt-get install g++
 ```
 
-Then you can call:
+Then you can call
 
-```Bash
-singularity build ubuntu.sif ubuntu.def
+```console
+marie@local$ singularity build ubuntu.sif ubuntu.def
 ```
 
 And it will install Ubuntu with g++ inside your container, according to your definition file.
-
 More bootstrap options are available. The following example, for instance, bootstraps a basic CentOS
 7 image.
 
@@ -131,23 +155,25 @@ Include: yum
 ```
 
 More examples of definition files can be found at
-https://github.com/singularityware/singularity/tree/master/examples
+<https://github.com/singularityware/singularity/tree/master/examples>.
+
+#### Import a Docker Container
+
+!!! hint
 
-#### Importing a docker container
+    As opposed to bootstrapping a container, importing from Docker does **not require root
+    privileges** and therefore works on ZIH systems directly.
 
 You can import an image directly from the Docker repository (Docker Hub):
 
-```Bash
-singularity build my-container.sif docker://ubuntu:latest
+```console
+marie@local$ singularity build my-container.sif docker://ubuntu:latest
 ```
 
-As opposed to bootstrapping a container, importing from Docker does **not require root privileges**
-and therefore works on Taurus directly.
-
-Creating a singularity container directly from a local docker image is possible but not recommended.
-Steps:
+Creating a singularity container directly from a local docker image is possible but not
+recommended. The steps are:
 
-```Bash
+```console
 # Start a docker registry
 $ docker run -d -p 5000:5000 --restart=always --name registry registry:2
 
@@ -165,109 +191,122 @@ From: alpine
 $ singularity build --nohttps alpine.sif example.def
 ```
 
-#### Starting from a Dockerfile
+#### Start from a Dockerfile
 
-As singularity definition files and Dockerfiles are very similar you can start creating a definition
+As Singularity definition files and Dockerfiles are very similar you can start creating a definition
 file from an existing Dockerfile by "translating" each section.
 
-There are tools to automate this. One of them is \<a
-href="<https://github.com/singularityhub/singularity-cli>"
-target="\_blank">spython\</a> which can be installed with \`pip\` (add
-\`--user\` if you don't want to install it system-wide):
+There are tools to automate this. One of them is
+[spython](https://github.com/singularityhub/singularity-cli) which can be installed with `pip`
+(add `--user` if you don't want to install it system-wide):
 
-`pip3 install -U spython`
+```console
+marie@local$ pip3 install -U spython
+```
+
+With this you can simply issue the following command to convert a Dockerfile in the current folder
+into a singularity definition file:
+
+```console
+marie@local$ spython recipe Dockerfile myDefinition.def
+```
 
-With this you can simply issue the following command to convert a
-Dockerfile in the current folder into a singularity definition file:
+Please **verify** your generated definition and adjust where required!
 
-`spython recipe Dockerfile myDefinition.def<br />`
+There are some notable changes between Singularity definitions and Dockerfiles:
 
-Now please **verify** your generated definition and adjust where
-required!
+1. Command chains in Dockerfiles (`apt-get update && apt-get install foo`) must be split into
+   separate commands (`apt-get update; apt-get install foo`). Otherwise a failing command before the
+   ampersand is considered "checked" and does not fail the build.
+1. The environment variables section in Singularity is only set on execution of the final image, not
+   during the build as with Docker. So `ENV` sections from Docker must be translated to an entry
+   in the `%environment` section and **additionally** set in the `%runscript` section if the
+   variable is used there.
+1. `VOLUME` sections from Docker cannot be represented in Singularity containers. Use the runtime
+   option `-B` to bind folders manually.
+1. `CMD` and `ENTRYPOINT` from Docker do not have a direct representation in Singularity.
+   The closest is to check if any arguments are given in the `%runscript` section and call the
+   command from `ENTRYPOINT` with those, if none are given call `ENTRYPOINT` with the
+   arguments of `CMD`:
 
-There are some notable changes between singularity definitions and
-Dockerfiles: 1 Command chains in Dockerfiles (\`apt-get update &&
-apt-get install foo\`) must be split into separate commands (\`apt-get
-update; apt-get install foo). Otherwise a failing command before the
-ampersand is considered "checked" and does not fail the build. 1 The
-environment variables section in Singularity is only set on execution of
-the final image, not during the build as with Docker. So \`*ENV*\`
-sections from Docker must be translated to an entry in the
-*%environment* section and **additionally** set in the *%runscript*
-section if the variable is used there. 1 \`*VOLUME*\` sections from
-Docker cannot be represented in Singularity containers. Use the runtime
-option \`-B\` to bind folders manually. 1 *\`CMD\`* and *\`ENTRYPOINT\`*
-from Docker do not have a direct representation in Singularity. The
-closest is to check if any arguments are given in the *%runscript*
-section and call the command from \`*ENTRYPOINT*\` with those, if none
-are given call \`*ENTRYPOINT*\` with the arguments of \`*CMD*\`:
-\<verbatim>if \[ $# -gt 0 \]; then \<ENTRYPOINT> "$@" else \<ENTRYPOINT>
-\<CMD> fi\</verbatim>
+  ```bash
+  if [ $# -gt 0 ]; then
+    <ENTRYPOINT> "$@"
+  else
+    <ENTRYPOINT> <CMD>
+  fi
+  ```
 
-### Using the containers
+### Use the Containers
 
-#### Entering a shell in your container
+#### Enter a Shell in Your Container
 
 A read-only shell can be entered as follows:
 
-```Bash
-singularity shell my-container.sif
+```console
+marie@login$ singularity shell my-container.sif
 ```
 
-**IMPORTANT:** In contrast to, for instance, Docker, this will mount various folders from the host
-system including $HOME. This may lead to problems with, e.g., Python that stores local packages in
-the home folder, which may not work inside the container. It also makes reproducibility harder. It
-is therefore recommended to use `--contain/-c` to not bind $HOME (and others like `/tmp`)
-automatically and instead set up your binds manually via `-B` parameter. Example:
+!!! note
 
-```Bash
-singularity shell --contain -B /scratch,/my/folder-on-host:/folder-in-container my-container.sif
-```
+    In contrast to, for instance, Docker, this will mount various folders from the host system
+    including $HOME. This may lead to problems with, e.g., Python that stores local packages in the
+    home folder, which may not work inside the container. It also makes reproducibility harder. It
+    is therefore recommended to use `--contain/-c` to not bind `$HOME` (and others like `/tmp`)
+    automatically and instead set up your binds manually via `-B` parameter. Example:
+
+    ```console
+    marie@login$ singularity shell --contain -B /scratch,/my/folder-on-host:/folder-in-container my-container.sif
+    ```
 
 You can write into those folders by default. If this is not desired, add an `:ro` for read-only to
 the bind specification (e.g. `-B /scratch:/scratch:ro`). Note that we already defined bind paths
 for `/scratch`, `/projects` and `/sw` in our global `singularity.conf`, so you needn't use the `-B`
 parameter for those.
 
-If you wish, for instance, to install additional packages, you have to use the `-w` parameter to
-enter your container with it being writable.  This, again, must be done on a system where you have
+If you wish to install additional packages, you have to use the `-w` parameter to
+enter your container with it being writable. This, again, must be done on a system where you have
 the necessary privileges, otherwise you can only edit files that your user has the permissions for.
 E.g:
 
-```Bash
-singularity shell -w my-container.sif
+```console
+marie@local$ singularity shell -w my-container.sif
 Singularity.my-container.sif> yum install htop
 ```
 
 The `-w` parameter should only be used to make permanent changes to your container, not for your
-productive runs (it can only be used writeable by one user at the same time). You should write your
-output to the usual Taurus file systems like `/scratch`. Launching applications in your container
+productive runs (it can only be used writable by one user at the same time). You should write your
+output to the usual ZIH filesystems like `/scratch`. Launching applications in your container is
+described in the following sections.
 
-#### Running a command inside the container
+#### Run a Command Inside the Container
 
-While the "shell" command can be useful for tests and setup, you can also launch your applications
+While the `shell` command can be useful for tests and setup, you can also launch your applications
 inside the container directly using "exec":
 
-```Bash
-singularity exec my-container.img /opt/myapplication/bin/run_myapp
+```console
+marie@login$ singularity exec my-container.img /opt/myapplication/bin/run_myapp
 ```
 
 This can be useful if you wish to create a wrapper script that transparently calls a containerized
 application for you. E.g.:
 
-```Bash
+```bash
 #!/bin/bash
 
 X=`which singularity 2>/dev/null`
 if [ "z$X" = "z" ] ; then
-        echo "Singularity not found. Is the module loaded?"
-        exit 1
+  echo "Singularity not found. Is the module loaded?"
+  exit 1
 fi
 
 singularity exec /scratch/p_myproject/my-container.sif /opt/myapplication/run_myapp "$@"
-The better approach for that however is to use `singularity run` for that, which executes whatever was set in the _%runscript_ section of the definition file with the arguments you pass to it.
-Example:
-Build the following definition file into an image:
+```
+
+The better approach is to use `singularity run`, which executes whatever was set in the `%runscript`
+section of the definition file with the arguments you pass to it. Example: Build the following
+definition file into an image:
+
+```bash
 Bootstrap: docker
 From: ubuntu:trusty
 
@@ -285,33 +324,32 @@ singularity build my-container.sif example.def
 
 Then you can run your application via
 
-```Bash
+```console
 singularity run my-container.sif first_arg 2nd_arg
 ```
 
-Alternatively you can execute the container directly which is
-equivalent:
+Alternatively you can execute the container directly which is equivalent:
 
-```Bash
+```console
 ./my-container.sif first_arg 2nd_arg
 ```
 
 With this you can even masquerade an application with a singularity container as if it was an actual
 program by naming the container just like the binary:
 
-```Bash
+```console
 mv my-container.sif myCoolAp
 ```
 
-### Use-cases
+### Use-Cases
 
-One common use-case for containers is that you need an operating system with a newer GLIBC version
-than what is available on Taurus. E.g., the bullx Linux on Taurus used to be based on RHEL6 having a
-rather dated GLIBC version 2.12, some binary-distributed applications didn't work on that anymore.
-You can use one of our pre-made CentOS 7 container images (`/scratch/singularity/centos7.img`) to
-circumvent this problem. Example:
+One common use-case for containers is that you need an operating system with a newer
+[glibc](https://www.gnu.org/software/libc/) version than what is available on ZIH systems. E.g., the
+bullx Linux on ZIH systems used to be based on RHEL 6 having a rather dated glibc version 2.12, some
+binary-distributed applications didn't work on that anymore. You can use one of our pre-made CentOS
+7 container images (`/scratch/singularity/centos7.img`) to circumvent this problem. Example:
 
-```Bash
-$ singularity exec /scratch/singularity/centos7.img ldd --version
+```console
+marie@login$ singularity exec /scratch/singularity/centos7.img ldd --version
 ldd (GNU libc) 2.17
 ```
diff --git a/doc.zih.tu-dresden.de/docs/software/dask.md b/doc.zih.tu-dresden.de/docs/software/dask.md
deleted file mode 100644
index d6f7d087e8f39fb884a85834f807a4a91d236216..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/dask.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# Dask
-
-**Dask** is an open-source library for parallel computing. Dask is a flexible library for parallel
-computing in Python.
-
-Dask natively scales Python. It provides advanced parallelism for analytics, enabling performance at
-scale for some of the popular tools. For instance: Dask arrays scale Numpy workflows, Dask
-dataframes scale Pandas workflows, Dask-ML scales machine learning APIs like Scikit-Learn and
-XGBoost.
-
-Dask is composed of two parts:
-
-- Dynamic task scheduling optimized for computation and interactive
-  computational workloads.
-- Big Data collections like parallel arrays, data frames, and lists
-  that extend common interfaces like NumPy, Pandas, or Python
-  iterators to larger-than-memory or distributed environments. These
-  parallel collections run on top of dynamic task schedulers.
-
-Dask supports several user interfaces:
-
-High-Level:
-
-- Arrays: Parallel NumPy
-- Bags: Parallel lists
-- DataFrames: Parallel Pandas
-- Machine Learning : Parallel Scikit-Learn
-- Others from external projects, like XArray
-
-Low-Level:
-
-- Delayed: Parallel function evaluation
-- Futures: Real-time parallel function evaluation
-
-## Installation
-
-### Installation Using Conda
-
-Dask is installed by default in [Anaconda](https://www.anaconda.com/download/). To install/update
-Dask on a Taurus with using the [conda](https://www.anaconda.com/download/) follow the example:
-
-```Bash
-# Job submission in ml nodes with allocating: 1 node, 1 gpu per node, 4 hours
-srun -p ml -N 1 -n 1 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash
-```
-
-Create a conda virtual environment. We would recommend using a workspace. See the example (use
-`--prefix` flag to specify the directory).
-
-**Note:** You could work with simple examples in your home directory (where you are loading by
-default). However, in accordance with the
-[HPC storage concept](../data_lifecycle/hpc_storage_concept2019.md) please use a
-[workspaces](../data_lifecycle/workspaces.md) for your study and work projects.
-
-```Bash
-conda create --prefix /scratch/ws/0/aabc1234-Workproject/conda-virtual-environment/dask-test python=3.6
-```
-
-By default, conda will locate the environment in your home directory:
-
-```Bash
-conda create -n dask-test python=3.6
-```
-
-Activate the virtual environment, install Dask and verify the installation:
-
-```Bash
-ml modenv/ml
-ml PythonAnaconda/3.6
-conda activate /scratch/ws/0/aabc1234-Workproject/conda-virtual-environment/dask-test python=3.6
-which python
-which conda
-conda install dask
-python
-
-from dask.distributed import Client, progress
-client = Client(n_workers=4, threads_per_worker=1)
-client
-```
-
-### Installation Using Pip
-
-You can install everything required for most common uses of Dask (arrays, dataframes, etc)
-
-```Bash
-srun -p ml -N 1 -n 1 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash
-
-cd /scratch/ws/0/aabc1234-Workproject/python-virtual-environment/dask-test
-
-ml modenv/ml
-module load PythonAnaconda/3.6
-which python
-
-python3 -m venv --system-site-packages dask-test
-source dask-test/bin/activate
-python -m pip install "dask[complete]"
-
-python
-from dask.distributed import Client, progress
-client = Client(n_workers=4, threads_per_worker=1)
-client
-```
-
-Distributed scheduler
-
-?
-
-## Run Dask on Taurus
-
-The preferred and simplest way to run Dask on HPC systems today both for new, experienced users or
-administrator is to use [dask-jobqueue](https://jobqueue.dask.org/).
-
-You can install dask-jobqueue with `pip` or `conda`
-
-Installation with Pip
-
-```Bash
-srun -p haswell -N 1 -n 1 -c 4 --mem-per-cpu=2583 --time=01:00:00 --pty bash
-cd
-/scratch/ws/0/aabc1234-Workproject/python-virtual-environment/dask-test
-ml modenv/ml module load PythonAnaconda/3.6 which python
-
-source dask-test/bin/activate pip
-install dask-jobqueue --upgrade # Install everything from last released version
-```
-
-Installation with Conda
-
-```Bash
-srun -p haswell -N 1 -n 1 -c 4 --mem-per-cpu=2583 --time=01:00:00 --pty bash
-
-ml modenv/ml module load PythonAnaconda/3.6 source
-dask-test/bin/activate
-
-conda install dask-jobqueue -c conda-forge\</verbatim>
-```
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics.md b/doc.zih.tu-dresden.de/docs/software/data_analytics.md
new file mode 100644
index 0000000000000000000000000000000000000000..245bd5ae1a8ea0f246bd578d4365b3d23aaaba64
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics.md
@@ -0,0 +1,35 @@
+# Data Analytics
+
+On ZIH systems, there are many possibilities for working with tools from the field of data
+analytics. The boundaries between data analytics and machine learning are fluid.
+Therefore, it may be worthwhile to search for a specific issue within the data analytics and
+machine learning sections.
+
+The following tools are available on ZIH systems, among others:
+
+* [Python](data_analytics_with_python.md)
+* [R](data_analytics_with_r.md)
+* [RStudio](data_analytics_with_rstudio.md)
+* [Big Data framework Spark](big_data_frameworks_spark.md)
+* [MATLAB and Mathematica](mathematics.md)
+
+Detailed information about frameworks for machine learning, such as [TensorFlow](tensorflow.md)
+and [PyTorch](pytorch.md), can be found in the [machine learning](machine_learning.md) subsection.
+
+Other software, not listed here, can be searched with
+
+```console
+marie@compute$ module spider <software_name>
+```
+
+Refer to the section covering [modules](modules.md) for further information on the modules system.
+Additional software or special versions of [individual modules](custom_easy_build_environment.md)
+can be installed individually by each user. If possible, the use of virtual environments is
+recommended (e.g. for Python). Likewise, software can be used within [containers](containers.md).
+
+For the transfer of larger amounts of data into and within the system, the
+[export nodes and datamover](../data_transfer/overview.md) should be used.
+Data is stored in the [workspaces](../data_lifecycle/workspaces.md).
+Software modules or virtual environments can also be installed in workspaces to enable
+collaborative work even within larger groups. General recommendations for setting up workflows
+can be found in the [experiments](../data_lifecycle/experiments.md) section.
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1974c5d288b275a33f621044209ec0e90ce201d
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
@@ -0,0 +1,205 @@
+# Python for Data Analytics
+
+Python is a high-level interpreted language widely used in research and science. Using the ZIH
+systems allows you to work with Python more quickly and effectively. Here, a general introduction
+to working with Python on ZIH systems is given. Further documentation is available for specific
+[machine learning frameworks](machine_learning.md).
+
+## Python Console and Virtual Environments
+
+Often, it is useful to create an isolated development environment, which can be shared among
+a research group and/or teaching class. For this purpose,
+[Python virtual environments](python_virtual_environments.md) can be used.
+
+The interactive Python interpreter can also be used on ZIH systems via an interactive job:
+
+```console
+marie@login$ srun --partition=haswell --gres=gpu:1 --ntasks=1 --cpus-per-task=7 --pty --mem-per-cpu=8000 bash
+marie@haswell$ module load Python
+marie@haswell$ python
+Python 3.8.6 (default, Feb 17 2021, 11:48:51)
+[GCC 10.2.0] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>>
+```
+
+## Jupyter Notebooks
+
+Jupyter notebooks allow you to analyze data interactively in your web browser. One advantage of
+Jupyter is that code, documentation, and visualization can be combined in a single notebook, so
+that they form a unit. Jupyter notebooks can be used for many tasks, such as data cleaning and
+transformation, numerical simulation, statistical modeling, data visualization, and machine
+learning.
+
+On ZIH systems, a [JupyterHub](../access/jupyterhub.md) is available, which can be used to run a
+Jupyter notebook on a node, using a GPU when needed.
+
+## Parallel Computing with Python
+
+### Pandas with Pandarallel
+
+[Pandas](https://pandas.pydata.org/){:target="_blank"} is a widely used library for data
+analytics in Python.
+In many cases, an existing source code using Pandas can be easily modified for parallel execution by
+using the [pandarallel](https://github.com/nalepae/pandarallel/tree/v1.5.2) module. The number of
+threads that can be used in parallel depends on the number of cores (parameter `--cpus-per-task`)
+within the Slurm request, e.g.
+
+```console
+marie@login$ srun --partition=haswell --cpus-per-task=4 --mem=2G --hint=nomultithread --pty --time=8:00:00 bash
+```
+
+The above request allows you to use 4 parallel threads.
+
+The following example shows how to parallelize the apply method for pandas dataframes with the
+pandarallel module. If the pandarallel module is not installed already, use a
+[virtual environment](python_virtual_environments.md) to install the module.
+
+??? example
+
+    ```python
+    import pandas as pd
+    import numpy as np
+    from pandarallel import pandarallel
+
+    pandarallel.initialize()
+    # Note: initialize() detects all physical cores of the node and does not take the cores
+    # allocated by Slurm into account; nevertheless, the choice of --cpus-per-task (-c) matters here
+
+    N_rows = 10**5
+    N_cols = 5
+    df = pd.DataFrame(np.random.randn(N_rows, N_cols))
+
+    # here some function that needs to be executed in parallel
+    def transform(x):
+        return(np.mean(x))
+
+    print('calculate with normal apply...')
+    df.apply(func=transform, axis=1)
+
+    print('calculate with pandarallel...')
+    df.parallel_apply(func=transform, axis=1)
+    ```
+
+For more examples of using pandarallel, check out
+[https://github.com/nalepae/pandarallel/blob/master/docs/examples.ipynb](https://github.com/nalepae/pandarallel/blob/master/docs/examples.ipynb).
+
+### Dask
+
+[Dask](https://dask.org/) is a flexible and open-source library for parallel computing in Python.
+It replaces some Python data structures with parallel versions in order to provide advanced
+parallelism for analytics, enabling performance at scale for some of the popular tools. For
+instance: Dask arrays replace NumPy arrays, Dask dataframes replace Pandas dataframes.
+Furthermore, Dask-ML scales machine learning APIs like Scikit-Learn and XGBoost.
+
+Dask is composed of two parts:
+
+- Dynamic task scheduling optimized for computation and interactive computational workloads.
+- Big Data collections like parallel arrays, data frames, and lists that extend common interfaces
+  like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments.
+  These parallel collections run on top of dynamic task schedulers.
+
+Dask supports several user interfaces:
+
+- High-Level
+    - Arrays: Parallel NumPy
+    - Bags: Parallel lists
+    - DataFrames: Parallel Pandas
+    - Machine Learning: Parallel Scikit-Learn
+    - Others from external projects, like XArray
+- Low-Level
+    - Delayed: Parallel function evaluation
+    - Futures: Real-time parallel function evaluation
+
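+As a small illustration of the high-level interfaces listed above, the following sketch uses a
+Dask array (array sizes and chunking are arbitrary example values):
+
+```python
+import dask.array as da
+
+# create a large random array split into chunks (example sizes); operations only build a task graph
+x = da.random.random((10000, 10000), chunks=(1000, 1000))
+y = (x + x.T).mean(axis=0)
+
+# compute() triggers the actual (parallel) evaluation of the task graph
+print(y.compute().shape)
+```
+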
+#### Dask Usage
+
+On ZIH systems, Dask is available as a module. Check available versions and load your preferred one:
+
+```console
+marie@compute$ module spider dask
+------------------------------------------------------------------------------------------
+    dask:
+----------------------------------------------------------------------------------------------
+    Versions:
+        dask/2.8.0-fosscuda-2019b-Python-3.7.4
+        dask/2.8.0-Python-3.7.4
+        dask/2.8.0 (E)
+[...]
+marie@compute$ module load dask/2.8.0-fosscuda-2019b-Python-3.7.4
+marie@compute$ python -c "import dask; print(dask.__version__)"
+2.8.0
+```
+
+The preferred and simplest way to run Dask on ZIH systems is using
+[dask-jobqueue](https://jobqueue.dask.org/).
+
+**TODO** create better example with jobqueue
+
+```python
+from dask.distributed import Client, progress
+client = Client(n_workers=4, threads_per_worker=1)
+client
+```
+
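+With dask-jobqueue, Dask workers are submitted as Slurm jobs. A minimal sketch, assuming
+placeholder values for partition, cores, memory and walltime that need to be adapted to the actual
+project, could look like this:
+
+```python
+from dask_jobqueue import SLURMCluster
+from dask.distributed import Client
+
+# each Slurm job started by the cluster provides one Dask worker (placeholder resource values)
+cluster = SLURMCluster(queue='haswell', cores=4, memory='8GB', walltime='01:00:00')
+cluster.scale(jobs=2)    # request two such worker jobs
+
+client = Client(cluster)
+# ... submit Dask computations via the client here ...
+```
+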
+### mpi4py - MPI for Python
+
+The Message Passing Interface (MPI) is a standardized and portable message-passing standard,
+designed to function on a wide variety of parallel computing architectures. It is a library
+specification that allows HPC applications to exchange information between nodes and clusters. MPI
+is designed to provide access to advanced parallel hardware for end users, library writers and tool
+developers.
+
+mpi4py (MPI for Python) provides bindings of the MPI standard for the Python programming
+language, allowing any Python program to exploit multiple processors.
+
+mpi4py is based on the MPI-2 C++ bindings and supports almost all MPI calls. It is popular on Linux
+clusters and in the SciPy community. Operations are primarily methods of communicator objects. It
+supports communication of pickle-able Python objects as well as optimized communication of NumPy
+arrays.
+
+mpi4py is included in the SciPy-bundle modules on the ZIH system.
+
+```console
+marie@compute$ module load SciPy-bundle/2020.11-foss-2020b
+Module SciPy-bundle/2020.11-foss-2020b and 28 dependencies loaded.
+marie@compute$ pip list
+Package                       Version
+----------------------------- ----------
+[...]
+mpi4py                        3.0.3
+[...]
+```
+
+Other versions of the package can be found with
+
+```console
+marie@compute$ module spider mpi4py
+-----------------------------------------------------------------------------------------------------------------------------------------
+  mpi4py:
+-----------------------------------------------------------------------------------------------------------------------------------------
+     Versions:
+        mpi4py/1.3.1
+        mpi4py/2.0.0-impi
+        mpi4py/3.0.0 (E)
+        mpi4py/3.0.2 (E)
+        mpi4py/3.0.3 (E)
+
+Names marked by a trailing (E) are extensions provided by another module.
+
+-----------------------------------------------------------------------------------------------------------------------------------------
+  For detailed information about a specific "mpi4py" package (including how to load the modules) use the module's full name.
+  Note that names that have a trailing (E) are extensions provided by other modules.
+  For example:
+
+     $ module spider mpi4py/3.0.3
+-----------------------------------------------------------------------------------------------------------------------------------------
+```
+
+Check if mpi4py is working correctly with the following script:
+
+```python
+from mpi4py import MPI
+comm = MPI.COMM_WORLD
+print("%d of %d" % (comm.Get_rank(), comm.Get_size()))
+```
+
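+A minimal sketch that exercises an actual MPI operation (a broadcast of a NumPy array) could look
+like this; it is meant to be started with as many MPI ranks as requested for the job:
+
+```python
+from mpi4py import MPI
+import numpy as np
+
+comm = MPI.COMM_WORLD
+rank = comm.Get_rank()
+
+# rank 0 creates the (example) data, all other ranks receive it via broadcast
+data = np.arange(10, dtype='i') if rank == 0 else np.empty(10, dtype='i')
+comm.Bcast(data, root=0)
+print("rank", rank, "received", data)
+```
+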
+**TODO** verify mpi4py installation
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
index 9c1e092a72d6294a9c5b91f0cd3459bc8e215ebb..72224113fdf8a9c6f4727d47771283dc1d0c1baa 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
@@ -1,53 +1,41 @@
 # R for Data Analytics
 
 [R](https://www.r-project.org/about.html) is a programming language and environment for statistical
-computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling,
-classical statistical tests, time-series analysis, classification, etc) and graphical techniques. R
-is an integrated suite of software facilities for data manipulation, calculation and
-graphing.
+computing and graphics. It provides a wide variety of statistical (linear and nonlinear modeling,
+classical statistical tests, time-series analysis, classification, etc.), machine learning
+algorithms and graphical techniques.  R is an integrated suite of software facilities for data
+manipulation, calculation and graphing.
 
-R possesses an extensive catalogue of statistical and graphical methods.  It includes machine
-learning algorithms, linear regression, time series, statistical inference.
-
-We recommend using **Haswell** and/or **Romeo** partitions to work with R. For more details
-see [here](../jobs_and_resources/hardware_taurus.md).
+We recommend using the partitions Haswell and/or Romeo to work with R. For more details
+see our [hardware documentation](../jobs_and_resources/hardware_overview.md).
 
 ## R Console
 
-This is a quickstart example. The `srun` command is used to submit a real-time execution job
-designed for interactive use with monitoring the output. Please check
-[the Slurm page](../jobs_and_resources/slurm.md) for details.
-
-```Bash
-# job submission on haswell nodes with allocating: 1 task, 1 node, 4 CPUs per task with 2541 mb per CPU(core) for 1 hour
-tauruslogin$ srun --partition=haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2541 --time=01:00:00 --pty bash
-
-# Ensure that you are using the scs5 environment
-module load modenv/scs5
-# Check all available modules for R with version 3.6
-module available R/3.6
-# Load default R module
-module load R
-# Checking the current R version
-which R
-# Start R console
-R
+In the following example, the `srun` command is used to start an interactive job, so that the output
+is visible to the user. Please check the [Slurm page](../jobs_and_resources/slurm.md) for details.
+
+```console
+marie@login$ srun --partition=haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2541 --time=01:00:00 --pty bash
+marie@haswell$ module load modenv/scs5
+marie@haswell$ module load R/3.6
+[...]
+Module R/3.6.0-foss-2019a and 56 dependencies loaded.
+marie@haswell$ which R
+/sw/installed/R/3.6.0-foss-2019a/bin/R
 ```
 
-Using `srun` is recommended only for short test runs, while for larger runs batch jobs should be
-used. The examples can be found [here](get_started_with_hpcda.md) or
-[here](../jobs_and_resources/slurm.md).
+Using interactive sessions is recommended only for short test runs, while for larger runs batch jobs
+should be used. Examples can be found on the [Slurm page](../jobs_and_resources/slurm.md).
 
 It is also possible to run `Rscript` command directly (after loading the module):
 
-```Bash
-# Run Rscript directly. For instance: Rscript /scratch/ws/0/marie-study_project/my_r_script.R
-Rscript /path/to/script/your_script.R param1 param2
+```console
+marie@haswell$ Rscript </path/to/script/your_script.R> <param1> <param2>
 ```
 
 ## R in JupyterHub
 
-In addition to using interactive and batch jobs, it is possible to work with **R** using
+In addition to using interactive and batch jobs, it is possible to work with R using
 [JupyterHub](../access/jupyterhub.md).
 
 The production and test [environments](../access/jupyterhub.md#standard-environments) of
@@ -55,66 +43,49 @@ JupyterHub contain R kernel. It can be started either in the notebook or in the
 
 ## RStudio
 
-[RStudio](<https://rstudio.com/) is an integrated development environment (IDE) for R. It includes
-a console, syntax-highlighting editor that supports direct code execution, as well as tools for
-plotting, history, debugging and workspace management. RStudio is also available on Taurus.
-
-The easiest option is to run RStudio in JupyterHub directly in the browser. It can be started
-similarly to a new kernel from [JupyterLab](../access/jupyterhub.md#jupyterlab) launcher.
-
-![RStudio launcher in JupyterHub](misc/data_analytics_with_r_RStudio_launcher.png)
-{: align="center"}
-
-Please keep in mind that it is currently not recommended to use the interactive x11 job with the
-desktop version of RStudio, as described, for example, in introduction HPC-DA slides.
+For using R with RStudio please refer to the documentation on
+[Data Analytics with RStudio](data_analytics_with_rstudio.md).
 
 ## Install Packages in R
 
-By default, user-installed packages are saved in the users home in a subfolder depending on
-the architecture (x86 or PowerPC). Therefore the packages should be installed using interactive
+By default, user-installed packages are saved in the user's home directory in a folder depending on
+the architecture (`x86` or `PowerPC`). Therefore, the packages should be installed using interactive
 jobs on the compute node:
 
-```Bash
-srun -p haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2541 --time=01:00:00 --pty bash
-
-module purge
-module load modenv/scs5
-module load R
-R -e 'install.packages("package_name")'  #For instance: 'install.packages("ggplot2")'
+```console
+marie@compute$ module load R
+[...]
+Module R/3.6.0-foss-2019a and 56 dependencies loaded.
+marie@compute$ R -e 'install.packages("ggplot2")'
+[...]
 ```
 
 ## Deep Learning with R
 
 The deep learning frameworks perform extremely fast when run on accelerators such as GPU.
-Therefore, using nodes with built-in GPUs ([ml](../jobs_and_resources/power9.md) or
-[alpha](../jobs_and_resources/alpha_centauri.md) partitions) is beneficial for the examples here.
+Therefore, using nodes with built-in GPUs, e.g., partitions [ml](../jobs_and_resources/power9.md)
+and [alpha](../jobs_and_resources/alpha_centauri.md), is beneficial for the examples here.
 
 ### R Interface to TensorFlow
 
 The ["TensorFlow" R package](https://tensorflow.rstudio.com/) provides R users access to the
-Tensorflow toolset. [TensorFlow](https://www.tensorflow.org/) is an open-source software library
+TensorFlow framework. [TensorFlow](https://www.tensorflow.org/) is an open-source software library
 for numerical computation using data flow graphs.
 
-```Bash
-srun --partition=ml --ntasks=1 --nodes=1 --cpus-per-task=7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash
+The respective modules can be loaded with the following commands:
 
-module purge
-ml modenv/ml
-ml TensorFlow
-ml R
-
-which python
-mkdir python-virtual-environments  # Create a folder for virtual environments
-cd python-virtual-environments
-python3 -m venv --system-site-packages R-TensorFlow        #create python virtual environment
-source R-TensorFlow/bin/activate                           #activate environment
-module list
-which R
+```console
+marie@compute$ module load R/3.6.2-fosscuda-2019b
+[...]
+Module R/3.6.2-fosscuda-2019b and 63 dependencies loaded.
+marie@compute$ module load TensorFlow/2.3.1-fosscuda-2019b-Python-3.7.4
+Module TensorFlow/2.3.1-fosscuda-2019b-Python-3.7.4 and 15 dependencies loaded.
 ```
 
-Please allocate the job with respect to
-[hardware specification](../jobs_and_resources/hardware_taurus.md)! Note that the nodes on `ml`
-partition have 4way-SMT, so for every physical core allocated, you will always get 4\*1443Mb=5772mb.
+!!! warning
+
+    Be aware that for compatibility reasons it is important to choose [modules](modules.md) with
+    the same toolchain version (in this case `fosscuda/2019b`).
 
 In order to interact with Python-based frameworks (like TensorFlow) `reticulate` R library is used.
 To configure it to point to the correct Python executable in your virtual environment, create
@@ -122,23 +93,40 @@ a file named `.Rprofile` in your project directory (e.g. R-TensorFlow) with the
 contents:
 
 ```R
-Sys.setenv(RETICULATE_PYTHON = "/sw/installed/Anaconda3/2019.03/bin/python")    #assign the output of the 'which python' from above to RETICULATE_PYTHON
+Sys.setenv(RETICULATE_PYTHON = "/sw/installed/Python/3.7.4-GCCcore-8.3.0/bin/python")    #assign RETICULATE_PYTHON to the python executable
 ```
 
 Let's start R, install some libraries and evaluate the result:
 
-```R
-install.packages("reticulate")
-library(reticulate)
-reticulate::py_config()
-install.packages("tensorflow")
-library(tensorflow)
-tf$constant("Hello Tensorflow")         #In the output 'Tesla V100-SXM2-32GB' should be mentioned
+```rconsole
+> install.packages(c("reticulate", "tensorflow"))
+Installing packages into ‘~/R/x86_64-pc-linux-gnu-library/3.6’
+(as ‘lib’ is unspecified)
+> reticulate::py_config()
+python:         /software/rome/Python/3.7.4-GCCcore-8.3.0/bin/python
+libpython:      /sw/installed/Python/3.7.4-GCCcore-8.3.0/lib/libpython3.7m.so
+pythonhome:     /software/rome/Python/3.7.4-GCCcore-8.3.0:/software/rome/Python/3.7.4-GCCcore-8.3.0
+version:        3.7.4 (default, Mar 25 2020, 13:46:43)  [GCC 8.3.0]
+numpy:          /software/rome/SciPy-bundle/2019.10-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/numpy
+numpy_version:  1.17.3
+
+NOTE: Python version was forced by RETICULATE_PYTHON
+
+> library(tensorflow)
+2021-08-26 16:11:47.110548: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
+> tf$constant("Hello TensorFlow")
+2021-08-26 16:14:00.269248: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
+2021-08-26 16:14:00.674878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
+pciBusID: 0000:0b:00.0 name: A100-SXM4-40GB computeCapability: 8.0
+coreClock: 1.41GHz coreCount: 108 deviceMemorySize: 39.59GiB deviceMemoryBandwidth: 1.41TiB/s
+[...]
+tf.Tensor(b'Hello TensorFlow', shape=(), dtype=string)
 ```
 
 ??? example
+
     The example shows the use of the TensorFlow package with the R for the classification problem
-    related to the MNIST dataset.
+    related to the MNIST data set.
     ```R
     library(tensorflow)
     library(keras)
@@ -214,20 +202,16 @@ tf$constant("Hello Tensorflow")         #In the output 'Tesla V100-SXM2-32GB' sh
 ## Parallel Computing with R
 
 Generally, the R code is serial. However, many computations in R can be made faster by the use of
-parallel computations. Taurus allows a vast number of options for parallel computations. Large
-amounts of data and/or use of complex models are indications to use parallelization.
-
-### General Information about the R Parallelism
-
-There are various techniques and packages in R that allow parallelization. This section
-concentrates on most general methods and examples. The Information here is Taurus-specific.
+parallel computations. This section concentrates on the most general methods and examples.
 The [parallel](https://www.rdocumentation.org/packages/parallel/versions/3.6.2) library
 will be used below.
 
-**Warning:** Please do not install or update R packages related to parallelism as it could lead to
-conflicts with other pre-installed packages.
+!!! warning
 
-### Basic Lapply-Based Parallelism
+    Please do not install or update R packages related to parallelism as it could lead to
+    conflicts with other preinstalled packages.
+
+### Basic lapply-Based Parallelism
 
 `lapply()` function is a part of base R. lapply is useful for performing operations on list-objects.
 Roughly speaking, lapply is a vectorization of the source code and it is the first step before
@@ -243,6 +227,7 @@ This is a simple option for parallelization. It doesn't require much effort to r
 code to use `mclapply` function. Check out an example below.
 
 ??? example
+
     ```R
     library(parallel)
 
@@ -269,9 +254,9 @@ code to use `mclapply` function. Check out an example below.
     list_of_averages <- mclapply(X=sample_sizes, FUN=average, mc.cores=threads)  # apply function "average" 100 times
     ```
 
-The disadvantages of using shared-memory parallelism approach are, that the number of parallel
-tasks is limited to the number of cores on a single node. The maximum number of cores on a single
-node can be found [here](../jobs_and_resources/hardware_taurus.md).
+The disadvantage of the shared-memory parallelism approach is that the number of parallel tasks is
+limited to the number of cores on a single node. The maximum number of cores on a single node can
+be found in our [hardware documentation](../jobs_and_resources/hardware_overview.md).
 
 Submitting a multicore R job to Slurm is very similar to submitting an
 [OpenMP Job](../jobs_and_resources/slurm.md#binding-and-distribution-of-tasks),
@@ -305,9 +290,10 @@ running in parallel. The desired type of the cluster can be specified with a par
 This way of the R parallelism uses the
 [Rmpi](http://cran.r-project.org/web/packages/Rmpi/index.html) package and the
 [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) (Message Passing Interface) as a
-"backend" for its parallel operations. The MPI-based job in R is very similar to submitting an
+"back-end" for its parallel operations. The MPI-based job in R is very similar to submitting an
 [MPI Job](../jobs_and_resources/slurm.md#binding-and-distribution-of-tasks) since both are running
-multicore jobs on multiple nodes. Below is an example of running R script with the Rmpi on Taurus:
+multicore jobs on multiple nodes. Below is an example of running an R script with Rmpi on a ZIH
+system:
 
 ```Bash
 #!/bin/bash
@@ -315,8 +301,8 @@ multicore jobs on multiple nodes. Below is an example of running R script with t
 #SBATCH --ntasks=32              # this parameter determines how many processes will be spawned, please use >=8
 #SBATCH --cpus-per-task=1
 #SBATCH --time=01:00:00
-#SBATCH -o test_Rmpi.out
-#SBATCH -e test_Rmpi.err
+#SBATCH --output=test_Rmpi.out
+#SBATCH --error=test_Rmpi.err
 
 module purge
 module load modenv/scs5
@@ -333,10 +319,10 @@ However, in some specific cases, you can specify the number of nodes and the num
 tasks per node explicitly:
 
 ```Bash
-#!/bin/bash
 #SBATCH --nodes=2
 #SBATCH --tasks-per-node=16
 #SBATCH --cpus-per-task=1
+
 module purge
 module load modenv/scs5
 module load R
@@ -348,6 +334,7 @@ Use an example below, where 32 global ranks are distributed over 2 nodes with 16
 Each MPI rank has 1 core assigned to it.
 
 ??? example
+
     ```R
     library(Rmpi)
 
@@ -371,6 +358,7 @@ Each MPI rank has 1 core assigned to it.
 Another example:
 
 ??? example
+
     ```R
     library(Rmpi)
     library(parallel)
@@ -405,7 +393,7 @@ Another example:
     #snow::stopCluster(cl)  # usually it hangs over here with OpenMPI > 2.0. In this case this command may be avoided, Slurm will clean up after the job finishes
     ```
 
-To use Rmpi and MPI please use one of these partitions: **haswell**, **broadwell** or **rome**.
+To use Rmpi and MPI please use one of these partitions: `haswell`, `broadwell` or `rome`.
 
 Use `mpirun` command to start the R script. It is a wrapper that enables the communication
 between processes running on different nodes. It is important to use `-np 1` (the number of spawned
@@ -422,6 +410,7 @@ parallel workers, you have to manually specify the number of nodes according to
 hardware specification and parameters of your job.
 
 ??? example
+
     ```R
     library(parallel)
 
@@ -456,7 +445,7 @@ hardware specification and parameters of your job.
     print(paste("Program finished"))
     ```
 
-#### FORK cluster
+#### FORK Cluster
 
 The `type="FORK"` method behaves exactly like the `mclapply` function discussed in the previous
 section. Like `mclapply`, it can only use the cores available on a single node. However this method
@@ -464,7 +453,7 @@ requires exporting the workspace data to other processes. The FORK method in a c
 `parLapply` function might be used in situations, where different source code should run on each
 parallel process.
 
-### Other parallel options
+### Other Parallel Options
 
 - [foreach](https://cran.r-project.org/web/packages/foreach/index.html) library.
   It is functionally equivalent to the
@@ -476,7 +465,8 @@ parallel process.
   expression via futures
 - [Poor-man's parallelism](https://www.glennklockwood.com/data-intensive/r/alternative-parallelism.html#6-1-poor-man-s-parallelism)
   (simple data parallelism). It is the simplest, but not an elegant way to parallelize R code.
-  It runs several copies of the same R script where's each read different sectors of the input data
+  It runs several copies of the same R script where each copy reads a different part of the input
+  data.
 - [Hands-off (OpenMP)](https://www.glennklockwood.com/data-intensive/r/alternative-parallelism.html#6-2-hands-off-parallelism)
   method. R has [OpenMP](https://www.openmp.org/resources/) support. Thus using OpenMP is a simple
   method where you don't need to know much about the parallelism options in your code. Please be
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md
new file mode 100644
index 0000000000000000000000000000000000000000..51d1068e3d1c32796859037e51a37e71810259b6
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md
@@ -0,0 +1,14 @@
+# Data Analytics with RStudio
+
+[RStudio](https://rstudio.com/) is an integrated development environment (IDE) for R. It includes
+a console, syntax-highlighting editor that supports direct code execution, as well as tools for
+plotting, history, debugging and workspace management. RStudio is also available on ZIH systems.
+
+The easiest option is to run RStudio in JupyterHub directly in the browser. It can be started
+similarly to a new kernel from the [JupyterLab](../access/jupyterhub.md#jupyterlab) launcher.
+
+![RStudio launcher in JupyterHub](misc/data_analytics_with_rstudio_launcher.jpg)
+{: style="width:90%" }
+
+!!! tip
+    If an error "could not start RStudio in time" occurs, try reloading the web page with `F5`.
diff --git a/doc.zih.tu-dresden.de/docs/software/deep_learning.md b/doc.zih.tu-dresden.de/docs/software/deep_learning.md
deleted file mode 100644
index da8c9c461fddc3c870ef418bb7db2b1ed493abe8..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/deep_learning.md
+++ /dev/null
@@ -1,333 +0,0 @@
-# Deep learning
-
-**Prerequisites**: To work with Deep Learning tools you obviously need [Login](../access/ssh_login.md)
-for the Taurus system and basic knowledge about Python, Slurm manager.
-
-**Aim** of this page is to introduce users on how to start working with Deep learning software on
-both the ml environment and the scs5 environment of the Taurus system.
-
-## Deep Learning Software
-
-### TensorFlow
-
-[TensorFlow](https://www.tensorflow.org/guide/) is a free end-to-end open-source software library
-for dataflow and differentiable programming across a range of tasks.
-
-TensorFlow is available in both main partitions
-[ml environment and scs5 environment](modules.md#module-environments)
-under the module name "TensorFlow". However, for purposes of machine learning and deep learning, we
-recommend using Ml partition [HPC-DA](../jobs_and_resources/hpcda.md). For example:
-
-```Bash
-module load TensorFlow
-```
-
-There are numerous different possibilities on how to work with [TensorFlow](tensorflow.md) on
-Taurus. On this page, for all examples default, scs5 partition is used. Generally, the easiest way
-is using the [modules system](modules.md)
-and Python virtual environment (test case). However, in some cases, you may need directly installed
-TensorFlow stable or night releases. For this purpose use the
-[EasyBuild](custom_easy_build_environment.md), [Containers](tensorflow_container_on_hpcda.md) and see
-[the example](https://www.tensorflow.org/install/pip). For examples of using TensorFlow for ml partition
-with module system see [TensorFlow page for HPC-DA](tensorflow.md).
-
-Note: If you are going used manually installed TensorFlow release we recommend use only stable
-versions.
-
-## Keras
-
-[Keras](https://keras.io/) is a high-level neural network API, written in Python and capable of
-running on top of [TensorFlow](https://github.com/tensorflow/tensorflow) Keras is available in both
-environments [ml environment and scs5 environment](modules.md#module-environments) under the module
-name "Keras".
-
-On this page for all examples default scs5 partition used. There are numerous different
-possibilities on how to work with [TensorFlow](tensorflow.md) and Keras
-on Taurus. Generally, the easiest way is using the [module system](modules.md) and Python
-virtual environment (test case) to see TensorFlow part above.
-For examples of using Keras for ml partition with the module system see the
-[Keras page for HPC-DA](keras.md).
-
-It can either use TensorFlow as its backend. As mentioned in Keras documentation Keras capable of
-running on Theano backend. However, due to the fact that Theano has been abandoned by the
-developers, we don't recommend use Theano anymore. If you wish to use Theano backend you need to
-install it manually. To use the TensorFlow backend, please don't forget to load the corresponding
-TensorFlow module. TensorFlow should be loaded automatically as a dependency.
-
-Test case: Keras with TensorFlow on MNIST data
-
-Go to a directory on Taurus, get Keras for the examples and go to the examples:
-
-```Bash
-git clone https://github.com/fchollet/keras.git'>https://github.com/fchollet/keras.git
-cd keras/examples/
-```
-
-If you do not specify Keras backend, then TensorFlow is used as a default
-
-Job-file (schedule job with sbatch, check the status with 'squeue -u \<Username>'):
-
-```Bash
-#!/bin/bash
-#SBATCH --gres=gpu:1                         # 1 - using one gpu, 2 - for using 2 gpus
-#SBATCH --mem=8000
-#SBATCH -p gpu2                              # select the type of nodes (options: haswell, smp, sandy, west, gpu, ml) K80 GPUs on Haswell node
-#SBATCH --time=00:30:00
-#SBATCH -o HLR_&lt;name_of_your_script&gt;.out     # save output under HLR_${SLURMJOBID}.out
-#SBATCH -e HLR_&lt;name_of_your_script&gt;.err     # save error messages under HLR_${SLURMJOBID}.err
-
-module purge                                 # purge if you already have modules loaded
-module load modenv/scs5                      # load scs5 environment
-module load Keras                            # load Keras module
-module load TensorFlow                       # load TensorFlow module
-
-# if you see 'broken pipe error's (might happen in interactive session after the second srun
-command) uncomment line below
-# module load h5py
-
-python mnist_cnn.py
-```
-
-Keep in mind that you need to put the bash script to the same folder as an executable file or
-specify the path.
-
-Example output:
-
-```Bash
-x_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples Train on 60000 samples,
-validate on 10000 samples Epoch 1/12
-
-128/60000 [..............................] - ETA: 12:08 - loss: 2.3064 - acc: 0.0781 256/60000
-[..............................] - ETA: 7:04 - loss: 2.2613 - acc: 0.1523 384/60000
-[..............................] - ETA: 5:22 - loss: 2.2195 - acc: 0.2005
-
-...
-
-60000/60000 [==============================] - 128s 2ms/step - loss: 0.0296 - acc: 0.9905 -
-val_loss: 0.0268 - val_acc: 0.9911 Test loss: 0.02677746053306255 Test accuracy: 0.9911
-```
-
-## Datasets
-
-There are many different datasets designed for research purposes. If you would like to download some
-of them, first of all, keep in mind that many machine learning libraries have direct access to
-public datasets without downloading it (for example
-[TensorFlow Datasets](https://www.tensorflow.org/datasets).
-
-If you still need to download some datasets, first of all, be careful with the size of the datasets
-which you would like to download (some of them have a size of few Terabytes). Don't download what
-you really not need to use! Use login nodes only for downloading small files (hundreds of the
-megabytes). For downloading huge files use [DataMover](../data_transfer/data_mover.md).
-For example, you can use command `dtwget` (it is an analogue of the general wget
-command). This command submits a job to the data transfer machines.  If you need to download or
-allocate massive files (more than one terabyte) please contact the support before.
-
-### The ImageNet dataset
-
-The [ImageNet](http://www.image-net.org/) project is a large visual database designed for use in
-visual object recognition software research. In order to save space in the file system by avoiding
-to have multiple duplicates of this lying around, we have put a copy of the ImageNet database
-(ILSVRC2012 and ILSVR2017) under `/scratch/imagenet` which you can use without having to download it
-again. For the future, the ImageNet dataset will be available in `/warm_archive`. ILSVR2017 also
-includes a dataset for recognition objects from a video. Please respect the corresponding
-[Terms of Use](https://image-net.org/download.php).
-
-## Jupyter Notebook
-
-Jupyter notebooks are a great way for interactive computing in your web browser. Jupyter allows
-working with data cleaning and transformation, numerical simulation, statistical modelling, data
-visualization and of course with machine learning.
-
-There are two general options on how to work Jupyter notebooks using HPC: remote Jupyter server and
-JupyterHub.
-
-These sections show how to run and set up a remote Jupyter server within a sbatch GPU job and which
-modules and packages you need for that.
-
-**Note:** On Taurus, there is a [JupyterHub](../access/jupyterhub.md), where you do not need the
-manual server setup described below and can simply run your Jupyter notebook on HPC nodes. Keep in
-mind, that, with JupyterHub, you can't work with some special instruments. However, general data
-analytics tools are available.
-
-The remote Jupyter server is able to offer more freedom with settings and approaches.
-
-### Preparation phase (optional)
-
-On Taurus, start an interactive session for setting up the
-environment:
-
-```Bash
-srun --pty -n 1 --cpus-per-task=2 --time=2:00:00 --mem-per-cpu=2500 --x11=first bash -l -i
-```
-
-Create a new subdirectory in your home, e.g. Jupyter
-
-```Bash
-mkdir Jupyter cd Jupyter
-```
-
-There are two ways how to run Anaconda. The easiest way is to load the Anaconda module. The second
-one is to download Anaconda in your home directory.
-
-1. Load Anaconda module (recommended):
-
-```Bash
-module load modenv/scs5 module load Anaconda3
-```
-
-1. Download latest Anaconda release (see example below) and change the rights to make it an
-executable script and run the installation script:
-
-```Bash
-wget https://repo.continuum.io/archive/Anaconda3-2019.03-Linux-x86_64.sh chmod 744
-Anaconda3-2019.03-Linux-x86_64.sh ./Anaconda3-2019.03-Linux-x86_64.sh
-
-(during installation you have to confirm the license agreement)
-```
-
-Next step will install the anaconda environment into the home
-directory (/home/userxx/anaconda3). Create a new anaconda environment with the name "jnb".
-
-```Bash
-conda create --name jnb
-```
-
-### Set environmental variables on Taurus
-
-In shell activate previously created python environment (you can
-deactivate it also manually) and install Jupyter packages for this python environment:
-
-```Bash
-source activate jnb conda install jupyter
-```
-
-If you need to adjust the configuration, you should create the template. Generate config files for
-Jupyter notebook server:
-
-```Bash
-jupyter notebook --generate-config
-```
-
-Find a path of the configuration file, usually in the home under `.jupyter` directory, e.g.
-`/home//.jupyter/jupyter_notebook_config.py`
-
-Set a password (choose easy one for testing), which is needed later on to log into the server
-in browser session:
-
-```Bash
-jupyter notebook password Enter password: Verify password:
-```
-
-You get a message like that:
-
-```Bash
-[NotebookPasswordApp] Wrote *hashed password* to
-/home/<zih_user>/.jupyter/jupyter_notebook_config.json
-```
-
-I order to create an SSL certificate for https connections, you can create a self-signed
-certificate:
-
-```Bash
-openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mykey.key -out mycert.pem
-```
-
-Fill in the form with decent values.
-
-Possible entries for your Jupyter config (`.jupyter/jupyter_notebook*config.py*`). Uncomment below
-lines:
-
-```Bash
-c.NotebookApp.certfile = u'<path-to-cert>/mycert.pem' c.NotebookApp.keyfile =
-u'<path-to-cert>/mykey.key'
-
-# set ip to '*' otherwise server is bound to localhost only c.NotebookApp.ip = '*'
-c.NotebookApp.open_browser = False
-
-# copy hashed password from the jupyter_notebook_config.json c.NotebookApp.password = u'<your
-hashed password here>' c.NotebookApp.port = 9999 c.NotebookApp.allow_remote_access = True
-```
-
-Note: `<path-to-cert>` - path to key and certificate files, for example:
-(`/home/\<username>/mycert.pem`)
-
-### Slurm job file to run the Jupyter server on Taurus with GPU (1x K80) (also works on K20)
-
-```Bash
-#!/bin/bash -l #SBATCH --gres=gpu:1 # request GPU #SBATCH --partition=gpu2 # use GPU partition
-SBATCH --output=notebook_output.txt #SBATCH --nodes=1 #SBATCH --ntasks=1 #SBATCH --time=02:30:00
-SBATCH --mem=4000M #SBATCH -J "jupyter-notebook" # job-name #SBATCH -A <name_of_your_project>
-
-unset XDG_RUNTIME_DIR   # might be required when interactive instead of sbatch to avoid
-'Permission denied error' srun jupyter notebook
-```
-
-Start the script above (e.g. with the name jnotebook) with sbatch command:
-
-```Bash
-sbatch jnotebook.slurm
-```
-
-If you have a question about sbatch script see the article about [Slurm](../jobs_and_resources/slurm.md).
-
-Check by the command: `tail notebook_output.txt` the status and the **token** of the server. It
-should look like this:
-
-```Bash
-https://(taurusi2092.taurus.hrsk.tu-dresden.de or 127.0.0.1):9999/
-```
-
-You can see the **server node's hostname** by the command: `squeue -u <username>`.
-
-Remote connect to the server
-
-There are two options on how to connect to the server:
-
-1. You can create an ssh tunnel if you have problems with the
-solution above. Open the other terminal and configure ssh
-tunnel: (look up connection values in the output file of Slurm job, e.g.) (recommended):
-
-```Bash
-node=taurusi2092                      #see the name of the node with squeue -u <your_login>
-localport=8887                        #local port on your computer remoteport=9999
-#pay attention on the value. It should be the same value as value in the notebook_output.txt ssh
--fNL ${localport}:${node}:${remoteport} <zih_user>@taurus.hrsk.tu-dresden.de         #configure
-of the ssh tunnel for connection to your remote server pgrep -f "ssh -fNL ${localport}"
-#verify that tunnel is alive
-```
-
-2. On your client (local machine) you now can connect to the server.  You need to know the **node's
-   hostname**, the **port** of the server and the **token** to login (see paragraph above).
-
-You can connect directly if you know the IP address (just ping the node's hostname while logged on
-Taurus).
-
-```Bash
-#comand on remote terminal taurusi2092$> host taurusi2092 # copy IP address from output # paste
-IP to your browser or call on local terminal e.g.  local$> firefox https://<IP>:<PORT>  # https
-important to use SSL cert
-```
-
-To login into the Jupyter notebook site, you have to enter the **token**.
-(`https://localhost:8887`). Now you can create and execute notebooks on Taurus with GPU support.
-
-If you would like to use [JupyterHub](../access/jupyterhub.md) after using a remote manually configured
-Jupyter server (example above) you need to change the name of the configuration file
-(`/home//.jupyter/jupyter_notebook_config.py`) to any other.
-
-### F.A.Q
-
-**Q:** - I have an error to connect to the Jupyter server (e.g. "open failed: administratively
-prohibited: open failed")
-
-**A:** - Check the settings of your Jupyter config file. Is it all necessary lines uncommented, the
-right path to cert and key files, right hashed password from .json file? Check is the used local
-port [available](https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers)
-Check local settings e.g. (`/etc/ssh/sshd_config`, `/etc/hosts`).
-
-**Q:** I have an error during the start of the interactive session (e.g.  PMI2_Init failed to
-initialize. Return code: 1)
-
-**A:** Probably you need to provide `--mpi=none` to avoid ompi errors ().
-`srun --mpi=none --reservation \<...> -A \<...> -t 90 --mem=4000 --gres=gpu:1
---partition=gpu2-interactive --pty bash -l`
diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c84d44b08f6311719802b9e22ce3c6c3af6cfe9
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -0,0 +1,184 @@
+# Distributed Training
+
+## Internal Distribution
+
+### Distributed TensorFlow
+
+TODO
+
+### Distributed PyTorch
+
+Hint: just copied some old content as a starting point
+
+#### Using Multiple GPUs with PyTorch
+
+Effective use of GPUs is essential, and it implies using parallelism in your code and model. Data
+parallelism and model parallelism are effective instruments to improve the performance of your code
+when using GPUs.
+
+Data parallelism is a widely used technique. It replicates the same model to all GPUs, where each
+GPU consumes a different partition of the input data. A description of this method can be found
+[here](https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html).
+
+The example below shows how to solve that problem by using model parallelism, which, in contrast to
+data parallelism, splits a single model onto different GPUs, rather than replicating the entire
+model on each GPU. The high-level idea of model parallelism is to place different sub-networks of a
+model onto different devices. As only a part of the model operates on any individual device, a set
+of devices can collectively serve a larger model.
+
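+A toy sketch of this idea, assuming two visible GPUs (layer sizes are arbitrary placeholder
+values):
+
+```python
+import torch
+import torch.nn as nn
+
+class ModelParallelNet(nn.Module):
+    """Toy model parallelism: first (placeholder) layer on GPU 0, second layer on GPU 1."""
+    def __init__(self):
+        super().__init__()
+        self.layer1 = nn.Linear(10, 10).to('cuda:0')
+        self.layer2 = nn.Linear(10, 10).to('cuda:1')
+
+    def forward(self, x):
+        # move intermediate results to the device of the next sub-network
+        x = torch.relu(self.layer1(x.to('cuda:0')))
+        return self.layer2(x.to('cuda:1'))
+
+model = ModelParallelNet()
+output = model(torch.randn(20, 10))
+```
+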
+It is recommended to use
+[DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html)
+instead of `torch.nn.DataParallel` for multi-GPU training, even if there is only a single node,
+i.e. use `nn.parallel.DistributedDataParallel` instead of `multiprocessing` or `nn.DataParallel`.
+Check the [CUDA semantics notes](https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead)
+and [Distributed Data Parallel](https://pytorch.org/docs/stable/notes/ddp.html#ddp).
+
+Examples:
+
+1\. The parallel model. The main aim of this model is to show how to effectively implement your
+neural network on several GPUs. It includes a comparison of different kinds of models and tips to
+improve the performance of your model. Necessary resources for running this model are 2 GPUs and
+14 cores (56 threads).
+
+(example_PyTorch_parallel.zip)
+
+Remember that for using the [JupyterHub service](../access/jupyterhub.md) with PyTorch, you need to
+create and activate a virtual environment (kernel) with the essential modules loaded.
+
+Run the example in the same way as the previous examples.
+
+#### Distributed Data Parallel
+
+[DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel)
+(DDP) implements data parallelism at the module level and can run across multiple machines.
+Applications using DDP should spawn multiple processes and create a single DDP instance per process.
+DDP uses collective communications in the
+[torch.distributed](https://pytorch.org/tutorials/intermediate/dist_tuto.html)
+package to synchronize gradients and buffers.
+
+The tutorial can be found [here](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
+
+To use distributed data parallelism on ZIH systems, set the parameter `--ntasks-per-node` to the
+number of GPUs you use per node. Also, it can be useful to increase the memory and CPU parameters
+if you run larger models. Memory can be set up to:
+
+`--mem=250000` and `--cpus-per-task=7` for the partition `ml`.
+
+`--mem=60000` and `--cpus-per-task=6` for the partition `gpu2`.
+
+Keep in mind that only one memory parameter (`--mem-per-cpu=<MB>` or `--mem=<MB>`) can be
+specified.
+
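+A minimal sketch of a DDP training step, closely following the PyTorch DDP tutorial, is shown
+below. The NCCL backend and the `LOCAL_RANK` environment variable set by the launcher (e.g.
+`torchrun`) are assumptions that have to match the actual job setup; model, data and
+hyperparameters are placeholders:
+
+```python
+import os
+import torch
+import torch.distributed as dist
+from torch.nn.parallel import DistributedDataParallel as DDP
+
+def main():
+    # rank, world size and master address are expected to be provided by the launcher
+    dist.init_process_group(backend="nccl")
+    local_rank = int(os.environ["LOCAL_RANK"])
+    torch.cuda.set_device(local_rank)
+
+    # placeholder model wrapped in DDP; gradients are averaged across processes
+    model = torch.nn.Linear(10, 10).to(local_rank)
+    ddp_model = DDP(model, device_ids=[local_rank])
+
+    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
+    loss_fn = torch.nn.MSELoss()
+
+    # random example data
+    inputs = torch.randn(20, 10).to(local_rank)
+    targets = torch.randn(20, 10).to(local_rank)
+
+    optimizer.zero_grad()
+    loss = loss_fn(ddp_model(inputs), targets)
+    loss.backward()       # gradients are synchronized across all processes here
+    optimizer.step()
+
+    dist.destroy_process_group()
+
+if __name__ == "__main__":
+    main()
+```
+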
+## External Distribution
+
+### Horovod
+
+[Horovod](https://github.com/horovod/horovod) is an open-source distributed training framework for
+TensorFlow, Keras and PyTorch. It is supposed to make it easy to develop distributed deep learning
+projects and to speed them up.
+
+#### Why Use Horovod?
+
+Horovod allows you to easily take a single-GPU TensorFlow or PyTorch program and successfully train
+it on many GPUs. In some cases, the MPI model is much more straightforward and requires far fewer
+code changes than, for instance, the distributed code from TensorFlow with parameter servers.
+Horovod uses MPI and NCCL, which in some cases gives better results than pure TensorFlow and
+PyTorch.
+
+#### Horovod as a Module
+
+Horovod is available as a module with **TensorFlow** or **PyTorch** for **all** module environments.
+Please check the [software module list](modules.md) for the current version of the software.
+Horovod can be loaded like other software on ZIH systems:
+
+```Bash
+ml av Horovod            # check available Horovod modules
+module load Horovod      # load the default Horovod module
+```
+
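+Once the module is loaded, typical usage with PyTorch follows the pattern from the Horovod
+documentation. A minimal sketch (model, data and hyperparameters are placeholder values):
+
+```python
+import torch
+import horovod.torch as hvd
+
+hvd.init()                                    # initialize Horovod
+torch.cuda.set_device(hvd.local_rank())       # pin each process to one GPU
+
+# placeholder model and optimizer
+model = torch.nn.Linear(10, 10).cuda()
+optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
+
+# average gradients across all processes and start from the same initial state
+optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
+hvd.broadcast_parameters(model.state_dict(), root_rank=0)
+hvd.broadcast_optimizer_state(optimizer, root_rank=0)
+
+# random example data and one training step
+loss_fn = torch.nn.MSELoss()
+inputs = torch.randn(20, 10).cuda()
+targets = torch.randn(20, 10).cuda()
+
+optimizer.zero_grad()
+loss = loss_fn(model(inputs), targets)
+loss.backward()
+optimizer.step()
+```
+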
+#### Horovod Installation
+
+However, if it is necessary to use Horovod with **PyTorch** or to use another version of Horovod,
+it is possible to install it manually. To install Horovod, you need to create a virtual environment
+and load the dependencies (e.g. MPI). Installing PyTorch from source can take a few hours and is
+not recommended.
+
+!!! note
+
+    You could work with simple examples in your home directory, but **please use workspaces
+    for your study and work projects** (see the storage concept).
+
+Setup:
+
+```Bash
+srun -N 1 --ntasks-per-node=6 -p ml --time=08:00:00 --pty bash                    #allocate a Slurm job allocation, which is a set of resources (nodes)
+module load modenv/ml                                                             #Load dependencies by using modules
+module load OpenMPI/3.1.4-gcccuda-2018b
+module load Python/3.6.6-fosscuda-2018b
+module load cuDNN/7.1.4.18-fosscuda-2018b
+module load CMake/3.11.4-GCCcore-7.3.0
+virtualenv --system-site-packages <location_for_your_environment>                 #create virtual environment
+source <location_for_your_environment>/bin/activate                               #activate virtual environment
+```
+
+Or when you need to use conda:
+
+```Bash
+srun -N 1 --ntasks-per-node=6 -p ml --time=08:00:00 --pty bash                            #allocate a Slurm job allocation, which is a set of resources (nodes)
+module load modenv/ml                                                                     #Load dependencies by using modules
+module load OpenMPI/3.1.4-gcccuda-2018b
+module load PythonAnaconda/3.6
+module load cuDNN/7.1.4.18-fosscuda-2018b
+module load CMake/3.11.4-GCCcore-7.3.0
+
+conda create --prefix=<location_for_your_environment> python=3.6 anaconda                 #create virtual environment
+
+conda activate  <location_for_your_environment>                                           #activate virtual environment
+```
+
+Install PyTorch from source (not recommended):
+
+```Bash
+cd /tmp
+git clone https://github.com/pytorch/pytorch                                  #clone PyTorch from the source
+cd pytorch                                                                    #go to folder
+git checkout v1.7.1                                                           #Checkout version (example: 1.7.1)
+git submodule update --init                                                   #Update dependencies
+python setup.py install                                                       #install it with python
+```
+
+##### Install Horovod for PyTorch with Python and pip
+
+The example below presents the installation of Horovod for PyTorch without TensorFlow. Adapt as
+required and refer to the Horovod documentation for details.
+
+```Bash
+HOROVOD_GPU_ALLREDUCE=MPI HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
+```
+
+##### Verify that Horovod works
+
+```Bash
+python                                           #start python
+import torch                                     #import pytorch
+import horovod.torch as hvd                      #import horovod
+hvd.init()                                       #initialize horovod
+hvd.size()
+hvd.rank()
+print('Hello from:', hvd.rank())
+```
+
+##### Horovod with NCCL
+
+If you want to use NCCL instead of MPI you can specify that in the
+install command after loading the NCCL module:
+
+```Bash
+module load NCCL/2.3.7-fosscuda-2018b
+HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
+```
diff --git a/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
deleted file mode 100644
index ac90455f91a13a74023d9e767aa9f7bce538cf69..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
+++ /dev/null
@@ -1,353 +0,0 @@
-# Get started with HPC-DA
-
-HPC-DA (High-Performance Computing and Data Analytics) is a part of TU-Dresden general purpose HPC
-cluster (Taurus). HPC-DA is the best **option** for **Machine learning, Deep learning** applications
-and tasks connected with the big data.
-
-**This is an introduction of how to run machine learning applications on the HPC-DA system.**
-
-The main **aim** of this guide is to help users who have started working with Taurus and focused on
-working with Machine learning frameworks such as TensorFlow or Pytorch.
-
-**Prerequisites:** To work with HPC-DA, you need [Login](../access/ssh_login.md) for the Taurus system
-and preferably have basic knowledge about High-Performance computers and Python.
-
-**Disclaimer:** This guide provides the main steps on the way of using Taurus, for details please
-follow links in the text.
-
-You can also find the information you need on the
-[HPC-Introduction] **todo** %ATTACHURL%/HPC-Introduction.pdf?t=1585216700 and
-[HPC-DA-Introduction] *todo** %ATTACHURL%/HPC-DA-Introduction.pdf?t=1585162693 presentation slides.
-
-## Why should I use HPC-DA? The architecture and feature of the HPC-DA
-
-HPC-DA built on the base of [Power9](https://www.ibm.com/it-infrastructure/power/power9)
-architecture from IBM. HPC-DA created from
-[AC922 IBM servers](https://www.ibm.com/ie-en/marketplace/power-systems-ac922), which was created
-for AI challenges, analytics and working with, Machine learning, data-intensive workloads,
-deep-learning frameworks and accelerated databases. POWER9 is the processor with state-of-the-art
-I/O subsystem technology, including next-generation NVIDIA NVLink, PCIe Gen4 and OpenCAPI.
-[Here](../jobs_and_resources/power9.md) you could find a detailed specification of the TU Dresden
-HPC-DA system.
-
-The main feature of the Power9 architecture (ppc64le) is the ability to work the
-[NVIDIA Tesla V100](https://www.nvidia.com/en-gb/data-center/tesla-v100/) GPU with **NV-Link**
-support. NV-Link technology allows increasing a total bandwidth of 300 gigabytes per second (GB/sec)
-
-- 10X the bandwidth of PCIe Gen 3. The bandwidth is a crucial factor for deep learning and machine
-    learning applications.
-
-**Note:** The Power9 architecture not so common as an x86 architecture. This means you are not so
-flexible with choosing applications for your projects. Even so, the main tools and applications are
-available. See available modules here.
-
-**Please use the ml partition if you need GPUs!** Otherwise using the x86 partitions (e.g Haswell)
-most likely would be more beneficial.
-
-## Login
-
-### SSH Access
-
-The recommended way to connect to the HPC login servers directly via ssh:
-
-```Bash
-ssh <zih-login>@taurus.hrsk.tu-dresden.de
-```
-
-Please put this command in the terminal and replace `<zih-login>` with your login that you received
-during the access procedure. Accept the host verifying and enter your password.
-
-This method requires two conditions:
-Linux OS, workstation within the campus network. For other options and
-details check the [login page](../access/ssh_login.md).
-
-## Data management
-
-### Workspaces
-
-As soon as you have access to HPC-DA you have to manage your data. The main method of working with
-data on Taurus is using Workspaces.  You could work with simple examples in your home directory
-(where you are loading by default). However, in accordance with the
-[storage concept](../data_lifecycle/hpc_storage_concept2019.md)
-**please use** a [workspace](../data_lifecycle/workspaces.md)
-for your study and work projects.
-
-You should create your workspace with a similar command:
-
-```Bash
-ws_allocate -F scratch Machine_learning_project 50    #allocating workspase in scratch directory for 50 days
-```
-
-After the command, you will have an output with the address of the workspace based on scratch. Use
-it to store the main data of your project.
-
-For different purposes, you should use different storage systems.  To work as efficient as possible,
-consider the following points:
-
-- Save source code etc. in `/home` or `/projects/...`
-- Store checkpoints and other massive but temporary data with
-  workspaces in: `/scratch/ws/...`
-- For data that seldom changes but consumes a lot of space, use
-  mid-term storage with workspaces: `/warm_archive/...`
-- For large parallel applications where using the fastest file system
-  is a necessity, use with workspaces: `/lustre/ssd/...`
-- Compilation in `/dev/shm`** or `/tmp`
-
-### Data moving
-
-#### Moving data to/from the HPC machines
-
-To copy data to/from the HPC machines, the Taurus [export nodes](../data_transfer/export_nodes.md)
-should be used. They are the preferred way to transfer your data. There are three possibilities to
-exchanging data between your local machine (lm) and the HPC machines (hm): **SCP, RSYNC, SFTP**.
-
-Type following commands in the local directory of the local machine. For example, the **`SCP`**
-command was used.
-
-#### Copy data from lm to hm
-
-```Bash
-scp <file> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>                  #Copy file from your local machine. For example: scp helloworld.txt mustermann@taurusexport.hrsk.tu-dresden.de:/scratch/ws/mastermann-Macine_learning_project/
-
-scp -r <directory> <zih-user>@taurusexport.hrsk.tu-dresden.de:<target-location>          #Copy directory from your local machine.
-```
-
-#### Copy data from hm to lm
-
-```Bash
-scp <zih-user>@taurusexport.hrsk.tu-dresden.de:<file> <target-location>                  #Copy file. For example: scp mustermann@taurusexport.hrsk.tu-dresden.de:/scratch/ws/mastermann-Macine_learning_project/helloworld.txt /home/mustermann/Downloads
-
-scp -r <zih-user>@taurusexport.hrsk.tu-dresden.de:<directory> <target-location>          #Copy directory
-```
-
-#### Moving data inside the HPC machines. Datamover
-
-The best way to transfer data inside the Taurus is the [data mover](../data_transfer/data_mover.md).
-It is the special data transfer machine providing the global file systems of each ZIH HPC system.
-Datamover provides the best data speed. To load, move, copy etc.  files from one file system to
-another file system, you have to use commands with **dt** prefix, such as:
-
-`dtcp, dtwget, dtmv, dtrm, dtrsync, dttar, dtls`
-
-These commands submit a job to the data transfer machines that execute the selected command. Except
-for the `dt` prefix, their syntax is the same as the shell command without the `dt`.
-
-```Bash
-dtcp -r /scratch/ws/<name_of_your_workspace>/results /lustre/ssd/ws/<name_of_your_workspace>;       #Copy from workspace in scratch to ssd.
-dtwget https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz                                   #Download archive CIFAR-100.
-```
-
-## BatchSystems. SLURM
-
-After logon and preparing your data for further work the next logical step is to start your job. For
-these purposes, SLURM is using. Slurm (Simple Linux Utility for Resource Management) is an
-open-source job scheduler that allocates compute resources on clusters for queued defined jobs.  By
-default, after your logging, you are using the login nodes. The intended purpose of these nodes
-speaks for oneself.  Applications on an HPC system can not be run there! They have to be submitted
-to compute nodes (ml nodes for HPC-DA) with dedicated resources for user jobs.
-
-Job submission can be done with the command: `-srun [options] <command>.`
-
-This is a simple example which you could use for your start. The `srun` command is used to submit a
-job for execution in real-time designed for interactive use, with monitoring the output. For some
-details please check [the Slurm page](../jobs_and_resources/slurm.md).
-
-```Bash
-srun -p ml -N 1 --gres=gpu:1 --time=01:00:00 --pty --mem-per-cpu=8000 bash   #Job submission in ml nodes with allocating: 1 node, 1 gpu per node, with 8000 mb on 1 hour.
-```
-
-However, using srun directly on the shell will lead to blocking and launch an interactive job. Apart
-from short test runs, it is **recommended to launch your jobs into the background by using batch
-jobs**. For that, you can conveniently put the parameters directly into the job file which you can
-submit using `sbatch [options] <job file>.`
-
-This is the example of the sbatch file to run your application:
-
-```Bash
-#!/bin/bash
-#SBATCH --mem=8GB                         # specify the needed memory
-#SBATCH -p ml                             # specify ml partition
-#SBATCH --gres=gpu:1                      # use 1 GPU per node (i.e. use one GPU per task)
-#SBATCH --nodes=1                         # request 1 node
-#SBATCH --time=00:15:00                   # runs for 10 minutes
-#SBATCH -c 1                              # how many cores per task allocated
-#SBATCH -o HLR_name_your_script.out       # save output message under HLR_${SLURMJOBID}.out
-#SBATCH -e HLR_name_your_script.err       # save error messages under HLR_${SLURMJOBID}.err
-
-module load modenv/ml
-module load TensorFlow
-
-python machine_learning_example.py
-
-## when finished writing, submit with:  sbatch <script_name> For example: sbatch machine_learning_script.slurm
-```
-
-The `machine_learning_example.py` contains a simple ml application based on the mnist model to test
-your sbatch file. It could be found as the [attachment] **todo**
-%ATTACHURL%/machine_learning_example.py in the bottom of the page.
-
-## Start your application
-
-As stated before HPC-DA was created for deep learning, machine learning applications. Machine
-learning frameworks as TensorFlow and PyTorch are industry standards now.
-
-There are three main options on how to work with Tensorflow and PyTorch:
-
-1. **Modules**
-1. **JupyterNotebook**
-1. **Containers**
-
-### Modules
-
-The easiest way is using the [modules system](modules.md) and Python virtual environment. Modules
-are a way to use frameworks, compilers, loader, libraries, and utilities. The module is a user
-interface that provides utilities for the dynamic modification of a user's environment without
-manual modifications. You could use them for srun , bath jobs (sbatch) and the Jupyterhub.
-
-A virtual environment is a cooperatively isolated runtime environment that allows Python users and
-applications to install and update Python distribution packages without interfering with the
-behaviour of other Python applications running on the same system. At its core, the main purpose of
-Python virtual environments is to create an isolated environment for Python projects.
-
-**Vitualenv (venv)** is a standard Python tool to create isolated Python environments. We recommend
-using venv to work with Tensorflow and Pytorch on Taurus. It has been integrated into the standard
-library under the [venv module](https://docs.python.org/3/library/venv.html). However, if you have
-reasons (previously created environments etc) you could easily use conda. The conda is the second
-way to use a virtual environment on the Taurus.
-[Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)
-is an open-source package management system and environment management system from the Anaconda.
-
-As was written in the previous chapter, to start the application (using
-modules) and to run the job exist two main options:
-
-- The `srun` command:**
-
-```Bash
-srun -p ml -N 1 -n 1 -c 2 --gres=gpu:1 --time=01:00:00 --pty --mem-per-cpu=8000 bash   #job submission in ml nodes with allocating: 1 node, 1 task per node, 2 CPUs per task, 1 gpu per node, with 8000 mb on 1 hour.
-
-module load modenv/ml                    #example output: The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
-
-mkdir python-virtual-environments        #create folder for your environments
-cd python-virtual-environments           #go to folder
-module load TensorFlow                   #load TensorFlow module to use python. Example output: Module Module TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4 and 31 dependencies loaded.
-which python                             #check which python are you using
-python3 -m venv --system-site-packages env                         #create virtual environment "env" which inheriting with global site packages
-source env/bin/activate                                            #activate virtual environment "env". Example output: (env) bash-4.2$
-```
-
-The inscription (env) at the beginning of each line represents that now you are in the virtual
-environment.
-
-Now you can check the working capacity of the current environment.
-
-```Bash
-python                                                           # start python
-import tensorflow as tf
-print(tf.__version__)                                            # example output: 2.1.0
-```
-
-The second and main option is using batch jobs (`sbatch`). It is used to submit a job script for
-later execution. Consequently, it is **recommended to launch your jobs into the background by using
-batch jobs**. To launch your machine learning application as well to srun job you need to use
-modules. See the previous chapter with the sbatch file example.
-
-Versions: TensorFlow 1.14, 1.15, 2.0, 2.1; PyTorch 1.1, 1.3 are available. (25.02.20)
-
-Note: However in case of using sbatch files to send your job you usually don't need a virtual
-environment.
-
-### JupyterNotebook
-
-The Jupyter Notebook is an open-source web application that allows you to create documents
-containing live code, equations, visualizations, and narrative text. Jupyter notebook allows working
-with TensorFlow on Taurus with GUI (graphic user interface) in a **web browser** and the opportunity
-to see intermediate results step by step of your work. This can be useful for users who dont have
-huge experience with HPC or Linux.
-
-There is [JupyterHub](../access/jupyterhub.md) on Taurus, where you can simply run your Jupyter
-notebook on HPC nodes. Also, for more specific cases you can run a manually created remote jupyter
-server. You can find the manual server setup [here](deep_learning.md). However, the simplest option
-for beginners is using JupyterHub.
-
-JupyterHub is available at
-[taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter)
-
-After logging, you can start a new session and configure it. There are simple and advanced forms to
-set up your session. On the simple form, you have to choose the "IBM Power (ppc64le)" architecture.
-You can select the required number of CPUs and GPUs. For the acquaintance with the system through
-the examples below the recommended amount of CPUs and 1 GPU will be enough.
-With the advanced form, you can use
-the configuration with 1 GPU and 7 CPUs. To access for all your workspaces use " / " in the
-workspace scope. Please check updates and details [here](../access/jupyterhub.md).
-
-Several Tensorflow and PyTorch examples for the Jupyter notebook have been prepared based on some
-simple tasks and models which will give you an understanding of how to work with ML frameworks and
-JupyterHub. It could be found as the [attachment] **todo** %ATTACHURL%/machine_learning_example.py
-in the bottom of the page. A detailed explanation and examples for TensorFlow can be found
-[here](tensorflow_on_jupyter_notebook.md). For the Pytorch - [here](pytorch.md).  Usage information
-about the environments for the JupyterHub could be found [here](../access/jupyterhub.md) in the chapter
-*Creating and using your own environment*.
-
-Versions: TensorFlow 1.14, 1.15, 2.0, 2.1; PyTorch 1.1, 1.3 are
-available. (25.02.20)
-
-### Containers
-
-Some machine learning tasks such as benchmarking require using containers. A container is a standard
-unit of software that packages up code and all its dependencies so the application runs quickly and
-reliably from one computing environment to another.  Using containers gives you more flexibility
-working with modules and software but at the same time requires more effort.
-
-On Taurus [Singularity](https://sylabs.io/) is used as a standard container solution.  Singularity
-enables users to have full control of their environment.  This means that **you dont have to ask an
-HPC support to install anything for you - you can put it in a Singularity container and run!**As
-opposed to Docker (the beat-known container solution), Singularity is much more suited to being used
-in an HPC environment and more efficient in many cases. Docker containers also can easily be used by
-Singularity from the [DockerHub](https://hub.docker.com) for instance. Also, some containers are
-available in [Singularity Hub](https://singularity-hub.org/).
-
-The simplest option to start working with containers on HPC-DA is importing from Docker or
-SingularityHub container with TensorFlow. It does **not require root privileges** and so works on
-Taurus directly:
-
-```Bash
-srun -p ml -N 1 --gres=gpu:1 --time=02:00:00 --pty --mem-per-cpu=8000 bash           #allocating resourses from ml nodes to start the job to create a container.
-singularity build my-ML-container.sif docker://ibmcom/tensorflow-ppc64le             #create a container from the DockerHub with the last TensorFlow version
-singularity run --nv my-ML-container.sif                                            #run my-ML-container.sif container with support of the Nvidia's GPU. You could also entertain with your container by commands: singularity shell, singularity exec
-```
-
-There are two sources for containers for Power9 architecture with
-Tensorflow and PyTorch on the board:
-
-* [Tensorflow-ppc64le](https://hub.docker.com/r/ibmcom/tensorflow-ppc64le):
-  Community-supported ppc64le docker container for TensorFlow.
-* [PowerAI container](https://hub.docker.com/r/ibmcom/powerai/):
-  Official Docker container with Tensorflow, PyTorch and many other packages.
-  Heavy container. It requires a lot of space. Could be found on Taurus.
-
-Note: You could find other versions of software in the container on the "tag" tab on the docker web
-page of the container.
-
-To use not a pure Tensorflow, PyTorch but also with some Python packages
-you have to use the definition file to create the container
-(bootstrapping). For details please see the [Container](containers.md) page
-from our wiki. Bootstrapping **has required root privileges** and
-Virtual Machine (VM) should be used! There are two main options on how
-to work with VM on Taurus: [VM tools](vm_tools.md) - automotive algorithms
-for using virtual machines; [Manual method](virtual_machines.md) - it requires more
-operations but gives you more flexibility and reliability.
-
-- [machine_learning_example.py] **todo** %ATTACHURL%/machine_learning_example.py:
-  machine_learning_example.py
-- [example_TensofFlow_MNIST.zip] **todo** %ATTACHURL%/example_TensofFlow_MNIST.zip:
-  example_TensofFlow_MNIST.zip
-- [example_Pytorch_MNIST.zip] **todo** %ATTACHURL%/example_Pytorch_MNIST.zip:
-  example_Pytorch_MNIST.zip
-- [example_Pytorch_image_recognition.zip] **todo** %ATTACHURL%/example_Pytorch_image_recognition.zip:
-  example_Pytorch_image_recognition.zip
-- [example_TensorFlow_Automobileset.zip] **todo** %ATTACHURL%/example_TensorFlow_Automobileset.zip:
-  example_TensorFlow_Automobileset.zip
-- [HPC-Introduction.pdf] **todo** %ATTACHURL%/HPC-Introduction.pdf:
-  HPC-Introduction.pdf
-- [HPC-DA-Introduction.pdf] **todo** %ATTACHURL%/HPC-DA-Introduction.pdf :
-  HPC-DA-Introduction.pdf
diff --git a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
new file mode 100644
index 0000000000000000000000000000000000000000..38190764e6c9efedb275ec9ff4324d916c851566
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
@@ -0,0 +1,365 @@
+# Hyperparameter Optimization (OmniOpt)
+
+Classical simulation methods as well as machine learning methods (e.g. neural networks) have a large
+number of hyperparameters that significantly determine the accuracy, efficiency, and transferability
+of the method. In classical simulations, the hyperparameters are usually determined by adaptation to
+measured values. Especially in neural networks, the hyperparameters determine the network
+architecture: number and type of layers, number of neurons, activation functions, measures against
+overfitting, etc. The most common methods to determine hyperparameters are intuitive testing, grid
+search, or random search.
+
+The tool OmniOpt performs hyperparameter optimization for a broad range of applications, such as
+classical simulations or machine learning algorithms. OmniOpt is robust: it checks and installs
+all dependencies automatically and fixes many problems in the background. While OmniOpt optimizes,
+no further intervention is required. You can follow the ongoing output live in the console.
+The overhead of OmniOpt is minimal and virtually imperceptible.
+
+## Quick start with OmniOpt
+
+The following instructions demonstrate the basic usage of OmniOpt on the ZIH system, based on the
+hyperparameter optimization for a neural network.
+
+The typical OmniOpt workflow comprises at least the following steps:
+
+1. [Prepare application script and software environment](#prepare-application-script-and-software-environment)
+1. [Configure and run OmniOpt](#configure-and-run-omniopt)
+1. [Check and evaluate OmniOpt results](#check-and-evaluate-omniopt-results)
+
+### Prepare Application Script and Software Environment
+
+The following example application script was created from
+[https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html)
+as a starting point.
+Therein, a neural network is trained on the MNIST Fashion data set.
+
+The following script preparation steps are required for OmniOpt:
+
+1. Change hard-coded hyperparameters (chosen here: batch size, epochs, size of layer 1 and 2) into
+   command line parameters. For this example, the Python module `argparse` (see the documentation at
+   [https://docs.python.org/3/library/argparse.html](https://docs.python.org/3/library/argparse.html))
+   is used.
+
+    ??? note "Parsing arguments in Python"
+        There are many ways to parse arguments in Python scripts. The easiest approach is
+        the `sys` module (see
+        [www.geeksforgeeks.org/how-to-use-sys-argv-in-python](https://www.geeksforgeeks.org/how-to-use-sys-argv-in-python)),
+        which would be fully sufficient for usage with OmniOpt. Nevertheless, this basic approach
+        offers no consistency checks, error handling, etc.
+
+1. Mark the output of the optimization target (chosen here: average loss) by prefixing it with the
+   `RESULT` string. OmniOpt takes the **last appearing value** prefixed with the `RESULT` string. In
+   the example, several epochs are performed and the average loss of the last epoch is caught by
+   OmniOpt. Additionally, the `RESULT` output has to be a **single line**. After all these changes,
+   the final script is as follows (with the lines containing relevant changes highlighted).
+
+    ??? example "Final modified Python script: MNIST Fashion"
+
+        ```python linenums="1" hl_lines="18-33 52-53 66-68 72 74 76 85 125-126"
+        #!/usr/bin/env python
+        # coding: utf-8
+
+        # # Example for using OmniOpt
+        #
+        # source code taken from: https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html
+        # parameters under consideration:#
+        # 1. batch size
+        # 2. epochs
+        # 3. size output layer 1
+        # 4. size output layer 2
+
+        import torch
+        from torch import nn
+        from torch.utils.data import DataLoader
+        from torchvision import datasets
+        from torchvision.transforms import ToTensor, Lambda, Compose
+        import argparse
+
+        # parsing hyperparameters as arguments
+        parser = argparse.ArgumentParser(description="Demo application for OmniOpt for hyperparameter optimization, example: neural network on MNIST fashion data.")
+
+        parser.add_argument("--out-layer1", type=int, help="the number of outputs of layer 1", default = 512)
+        parser.add_argument("--out-layer2", type=int, help="the number of outputs of layer 2", default = 512)
+        parser.add_argument("--batchsize", type=int, help="batchsize for training", default = 64)
+        parser.add_argument("--epochs", type=int, help="number of epochs", default = 5)
+
+        args = parser.parse_args()
+
+        batch_size = args.batchsize
+        epochs = args.epochs
+        num_nodes_out1 = args.out_layer1
+        num_nodes_out2 = args.out_layer2
+
+        # Download training data from open data sets.
+        training_data = datasets.FashionMNIST(
+            root="data",
+            train=True,
+            download=True,
+            transform=ToTensor(),
+        )
+
+        # Download test data from open data sets.
+        test_data = datasets.FashionMNIST(
+            root="data",
+            train=False,
+            download=True,
+            transform=ToTensor(),
+        )
+
+        # Create data loaders.
+        train_dataloader = DataLoader(training_data, batch_size=batch_size)
+        test_dataloader = DataLoader(test_data, batch_size=batch_size)
+
+        for X, y in test_dataloader:
+            print("Shape of X [N, C, H, W]: ", X.shape)
+            print("Shape of y: ", y.shape, y.dtype)
+            break
+
+        # Get cpu or gpu device for training.
+        device = "cuda" if torch.cuda.is_available() else "cpu"
+        print("Using {} device".format(device))
+
+        # Define model
+        class NeuralNetwork(nn.Module):
+            def __init__(self, out1, out2):
+                self.o1 = out1
+                self.o2 = out2
+                super(NeuralNetwork, self).__init__()
+                self.flatten = nn.Flatten()
+                self.linear_relu_stack = nn.Sequential(
+                    nn.Linear(28*28, out1),
+                    nn.ReLU(),
+                    nn.Linear(out1, out2),
+                    nn.ReLU(),
+                    nn.Linear(out2, 10),
+                    nn.ReLU()
+                )
+
+            def forward(self, x):
+                x = self.flatten(x)
+                logits = self.linear_relu_stack(x)
+                return logits
+
+        model = NeuralNetwork(out1=num_nodes_out1, out2=num_nodes_out2).to(device)
+        print(model)
+
+        loss_fn = nn.CrossEntropyLoss()
+        optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
+
+        def train(dataloader, model, loss_fn, optimizer):
+            size = len(dataloader.dataset)
+            for batch, (X, y) in enumerate(dataloader):
+                X, y = X.to(device), y.to(device)
+
+                # Compute prediction error
+                pred = model(X)
+                loss = loss_fn(pred, y)
+
+                # Backpropagation
+                optimizer.zero_grad()
+                loss.backward()
+                optimizer.step()
+
+                if batch % 200 == 0:
+                    loss, current = loss.item(), batch * len(X)
+                    print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
+
+        def test(dataloader, model, loss_fn):
+            size = len(dataloader.dataset)
+            num_batches = len(dataloader)
+            model.eval()
+            test_loss, correct = 0, 0
+            with torch.no_grad():
+                for X, y in dataloader:
+                    X, y = X.to(device), y.to(device)
+                    pred = model(X)
+                    test_loss += loss_fn(pred, y).item()
+                    correct += (pred.argmax(1) == y).type(torch.float).sum().item()
+            test_loss /= num_batches
+            correct /= size
+            print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
+
+
+            #print statement esp. for OmniOpt (single line!!)
+            print(f"RESULT: {test_loss:>8f} \n")
+
+        for t in range(epochs):
+            print(f"Epoch {t+1}\n-------------------------------")
+            train(train_dataloader, model, loss_fn, optimizer)
+            test(test_dataloader, model, loss_fn)
+        print("Done!")
+        ```
+
+1. Test the script functionality and determine the software requirements for the chosen
+   [partition](../jobs_and_resources/partitions_and_limits.md). In the following, the partition
+   `alpha` is used. Please note the parameters `--out-layer1`, `--batchsize`, `--epochs` when
+   calling the Python script. Additionally, note the `RESULT` string with the output for OmniOpt.
+
+    ??? hint "Hint for installing Python modules"
+
+        Note that for this example the module `torchvision` is not available on the partition `alpha`
+        and it is installed by creating a [virtual environment](python_virtual_environments.md). It is
+        recommended to install such a virtual environment into a
+        [workspace](../data_lifecycle/workspaces.md).
+
+        ```console
+        marie@login$ module load modenv/hiera  GCC/10.2.0  CUDA/11.1.1 OpenMPI/4.0.5 PyTorch/1.9.0
+        marie@login$ mkdir </path/to/workspace/python-environments>    #create folder
+        marie@login$ virtualenv --system-site-packages </path/to/workspace/python-environments/torchvision_env>
+        marie@login$ source </path/to/workspace/python-environments/torchvision_env>/bin/activate #activate virtual environment
+        marie@login$ pip install torchvision #install torchvision module
+        ```
+
+    ```console
+    # Job submission on alpha nodes with 1 GPU on 1 node with 800 MB per CPU
+    marie@login$ srun -p alpha --gres=gpu:1 -n 1 -c 7 --pty --mem-per-cpu=800 bash
+    marie@alpha$ module load modenv/hiera  GCC/10.2.0  CUDA/11.1.1 OpenMPI/4.0.5 PyTorch/1.9.0
+    # Activate virtual environment
+    marie@alpha$ source </path/to/workspace/python-environments/torchvision_env>/bin/activate
+    The following have been reloaded with a version change:
+      1) modenv/scs5 => modenv/hiera
+
+    Module GCC/10.2.0, CUDA/11.1.1, OpenMPI/4.0.5, PyTorch/1.9.0 and 54 dependencies loaded.
+    marie@alpha$ python </path/to/your/script/mnistFashion.py> --out-layer1=200 --batchsize=10 --epochs=3
+    [...]
+    Epoch 3
+    -------------------------------
+    loss: 1.422406  [    0/60000]
+    loss: 0.852647  [10000/60000]
+    loss: 1.139685  [20000/60000]
+    loss: 0.572221  [30000/60000]
+    loss: 1.516888  [40000/60000]
+    loss: 0.445737  [50000/60000]
+    Test Error:
+     Accuracy: 69.5%, Avg loss: 0.878329
+
+    RESULT: 0.878329
+
+    Done!
+    ```
+
+Using the modified script within OmniOpt requires configuring and loading the software
+environment. The recommended way is to wrap the necessary calls in a shell script.
+
+??? example "Example for wrapping with shell script"
+
+    ```bash
+    #!/bin/bash -l
+    # ^ Shebang-Line, so that it is known that this is a bash file
+    # -l means 'load this as login shell', so that /etc/profile gets loaded and you can use 'module load' or 'ml' as usual
+
+    # If you don't use this script via `./run.sh' or just `srun run.sh', but like `srun bash run.sh', please add the '-l' there too.
+    # Like this:
+    # srun bash -l run.sh
+
+    # Load modules your program needs, always specify versions!
+    module load modenv/hiera GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 PyTorch/1.7.1
+    source </path/to/workspace/python-environments/torchvision_env>/bin/activate #activate virtual environment
+
+    # Load your script. $@ is all the parameters that are given to this shell file.
+    python </path/to/your/script/mnistFashion.py> $@
+    ```
+
+When the wrapped shell script is running properly, the preparations are finished and the next step
+is configuring OmniOpt.
+
+### Configure and Run OmniOpt
+
+Configuring OmniOpt is done via the GUI at
+[https://imageseg.scads.ai/omnioptgui/](https://imageseg.scads.ai/omnioptgui/).
+This GUI guides you through the configuration process and, as a result, a configuration file is
+created automatically according to the GUI input. Once you are more familiar with OmniOpt, this
+configuration file can be modified directly without using the GUI.
+
+A screenshot of the GUI, including a proper configuration for the MNIST fashion example, is shown
+below. The GUI, with the values displayed below already entered, can be reached
+[here](https://imageseg.scads.ai/omnioptgui/?maxevalserror=5&mem_per_worker=1000&number_of_parameters=3&param_0_values=10%2C50%2C100&param_1_values=8%2C16%2C32&param_2_values=10%2C15%2C30&param_0_name=out-layer1&param_1_name=batchsize&param_2_name=batchsize&account=&projectname=mnist_fashion_optimization_set_1&partition=alpha&searchtype=tpe.suggest&param_0_type=hp.choice&param_1_type=hp.choice&param_2_type=hp.choice&max_evals=1000&objective_program=bash%20%3C%2Fpath%2Fto%2Fwrapper-script%2Frun-mnist-fashion.sh%3E%20--out-layer1%3D%28%24x_0%29%20--batchsize%3D%28%24x_1%29%20--epochs%3D%28%24x_2%29&workdir=%3C%2Fscratch%2Fws%2Fomniopt-workdir%2F%3E).
+
+Please modify the paths for `objective program` and `workdir` according to your needs.
+
+![GUI for configuring OmniOpt](misc/hyperparameter_optimization-OmniOpt-GUI.png)
+{: align="center"}
+
+For a first trial with OmniOpt, it is often sufficient to concentrate on the following
+configuration parameters:
+
+1. **Optimization run name:** A name for the OmniOpt run and its corresponding configuration.
+1. **Partition:** Choose the partition on the ZIH system that fits the program's needs.
+1. **Enable GPU:** Decide whether a program could benefit from GPU usage or not.
+1. **Workdir:** The directory where OmniOpt saves its necessary files and all results. Each
+   configuration creates a single directory whose name is derived from the optimization run name.
+   Make sure that this working directory is writable from the compute nodes. It is recommended to
+   use a [workspace](../data_lifecycle/workspaces.md).
+1. **Objective program:** Provide all information for program execution. Typically, this will
+   contain the command for executing a wrapper script (see the example after this list).
+1. **Parameters:** The hyperparameters to be optimized with the names OmniOpt should use. For the
+   example here, the variable names are identical to the input parameters of the Python script.
+   However, these names can be chosen differently, since the connection to OmniOpt is realized via
+   the variables (`$x_0`), (`$x_1`), etc. from the GUI section "Objective program". Please note that
+   it is not necessary to name the parameters explicitly in your script but only within the OmniOpt
+   configuration.
+
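+For the MNIST fashion example, the field `Objective program` (as encoded in the pre-filled GUI
+link above) could look like the following sketch; the path to the wrapper script is a placeholder
+and has to be adapted:
+
+```bash
+bash </path/to/wrapper-script/run-mnist-fashion.sh> --out-layer1=($x_0) --batchsize=($x_1) --epochs=($x_2)
+```
+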
+After all parameters are entered into the GUI, the call for OmniOpt is generated automatically and
+displayed on the right. This command contains all necessary instructions (including requesting
+resources with Slurm). **Thus, this command can be executed directly on a login node on the ZIH
+system.**
+
+![Final command for OmniOpt generated by the GUI](misc/hyperparameter_optimization-OmniOpt-final-command.png)
+{: align="center"}
+
+After executing this command, OmniOpt runs all necessary steps in the background and no further
+action is required.
+
+??? hint "Hints on the working directory"
+
+    1. Starting OmniOpt without providing a working directory will store the OmniOpt files in the current directory.
+    1. Within the given working directory, a new folder (named `omniopt` by default) is created.
+    1. Within one OmniOpt working directory, there can be multiple optimization projects.
+    1. It is possible to have as many working directories as you want (with multiple optimization runs).
+    1. It is recommended to use a [workspace](../data_lifecycle/workspaces.md) as working directory, but not the home directory.
+
+### Check and Evaluate OmniOpt Results
+
+To check the current status of OmniOpt or to look into the results, use the evaluation tool of
+OmniOpt. Switch to the OmniOpt folder and run `evaluate-run.sh`.
+
+```console
+marie@login$ bash </scratch/ws/omniopt-workdir/>evaluate-run.sh
+```
+
+After initializing and checking for updates in the background, OmniOpt asks you to select the
+optimization run of interest. After selecting the optimization run, a menu with the items shown
+below appears. If OmniOpt still has running jobs, additional menu items referring to these jobs
+appear (image below, to the right).
+
+Evaluation options (all jobs finished)                           |  Evaluation options (still running jobs)
+:--------------------------------------------------------------:|:-------------------------:
+![Evaluation menu (all jobs finished)](misc/OmniOpt-evaluate-menu.png)  |  ![Evaluation menu (still running jobs)](misc/OmniOpt-still-running-jobs.png)
+
+For now, we assume that OmniOpt has finished already.
+In order to look into the results, there are the following basic approaches.
+
+1. **Graphical approach:**
+    There are basically two graphical approaches: two-dimensional scatter plots and parallel plots.
+
+    Below, a parallel plot from the MNIST fashion example is shown.
+    ![Parallel plot for the MNIST fashion example](misc/OmniOpt-parallel-plot.png){: align="center"}
+
+    ??? hint "Hints on parallel plots"
+
+        Parallel plots are especially suitable for dealing with multiple dimensions. The parallel
+        plot created by OmniOpt is an interactive `html` file that is stored in the OmniOpt working
+        directory under `projects/<name_of_optimization_run>/parallel-plot`. The interactivity
+        of this plot is intended to make optimal combinations of the hyperparameters visible more
+        easily. Get more information about this interactivity by clicking the "Help" button at the
+        top of the graphic (see red arrow on the image above).
+
+    After creating a 2D scatter plot or a parallel plot, OmniOpt will try to display the
+    corresponding file (`html`, `png`) directly on the ZIH system. Therefore, it is necessary to
+    log in via SSH with the option `-X` (X11 forwarding), e.g., `ssh -X taurus.hrsk.tu-dresden.de`.
+    Nevertheless, because of the latency of X11 forwarding, it is recommended to download the
+    created files and explore them on your local machine (especially for the parallel plot); see
+    the example after this list. The created files are saved at
+    `projects/<name_of_optimization_run>/{2d-scatterplots,parallel-plot}`.
+
+1. **Getting the raw data:**
+    As a second approach, the raw data of the optimization process can be exported as a CSV file.
+    The created output files are stored in the folder `projects/<name_of_optimization_run>/csv`.
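+
+Both the plots and the CSV files can be copied to your local machine for further inspection, e.g.
+via `scp` run on your local machine. This is only a sketch; adapt the working directory and the
+name of the optimization run to your setup:
+
+```console
+marie@local$ scp -r taurus.hrsk.tu-dresden.de:</scratch/ws/omniopt-workdir/>projects/<name_of_optimization_run> .
+```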
diff --git a/doc.zih.tu-dresden.de/docs/software/keras.md b/doc.zih.tu-dresden.de/docs/software/keras.md
deleted file mode 100644
index 356e5b17e0ed1a3224ef815629e456391192b5ba..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/keras.md
+++ /dev/null
@@ -1,237 +0,0 @@
-# Keras
-
-This is an introduction on how to run a
-Keras machine learning application on the new machine learning partition
-of Taurus.
-
-Keras is a high-level neural network API,
-written in Python and capable of running on top of
-[TensorFlow](https://github.com/tensorflow/tensorflow).
-In this page, [Keras](https://www.tensorflow.org/guide/keras) will be
-considered as a TensorFlow's high-level API for building and training
-deep learning models. Keras includes support for TensorFlow-specific
-functionality, such as [eager execution](https://www.tensorflow.org/guide/keras#eager_execution)
-, [tf.data](https://www.tensorflow.org/api_docs/python/tf/data) pipelines
-and [estimators](https://www.tensorflow.org/guide/estimator).
-
-On the machine learning nodes (machine learning partition), you can use
-the tools from [IBM Power AI](./power_ai.md). PowerAI is an enterprise
-software distribution that combines popular open-source deep learning
-frameworks, efficient AI development tools (Tensorflow, Caffe, etc).
-
-In machine learning partition (modenv/ml) Keras is available as part of
-the Tensorflow library at Taurus and also as a separate module named
-"Keras". For using Keras in machine learning partition you have two
-options:
-
-- use Keras as part of the TensorFlow module;
-- use Keras separately and use Tensorflow as an interface between
-    Keras and GPUs.
-
-**Prerequisites**: To work with Keras you, first of all, need
-[access](../access/ssh_login.md) for the Taurus system, loaded
-Tensorflow module on ml partition, activated Python virtual environment.
-Basic knowledge about Python, SLURM system also required.
-
-**Aim** of this page is to introduce users on how to start working with
-Keras and TensorFlow on the [HPC-DA](../jobs_and_resources/hpcda.md)
-system - part of the TU Dresden HPC system.
-
-There are three main options on how to work with Keras and Tensorflow on
-the HPC-DA: 1. Modules; 2. JupyterNotebook; 3. Containers. One of the
-main ways is using the **TODO LINK MISSING** (Modules
-system)(RuntimeEnvironment#Module_Environments) and Python virtual
-environment. Please see the
-[Python page](./python.md) for the HPC-DA
-system.
-
-The information about the Jupyter notebook and the **JupyterHub** could
-be found [here](../access/jupyterhub.md). The use of
-Containers is described [here](tensorflow_container_on_hpcda.md).
-
-Keras contains numerous implementations of commonly used neural-network
-building blocks such as layers,
-[objectives](https://en.wikipedia.org/wiki/Objective_function),
-[activation functions](https://en.wikipedia.org/wiki/Activation_function)
-[optimizers](https://en.wikipedia.org/wiki/Mathematical_optimization),
-and a host of tools
-to make working with image and text data easier. Keras, for example, has
-a library for preprocessing the image data.
-
-The core data structure of Keras is a
-**model**, a way to organize layers. The Keras functional API is the way
-to go for defining as simple (sequential) as complex models, such as
-multi-output models, directed acyclic graphs, or models with shared
-layers.
-
-## Getting started with Keras
-
-This example shows how to install and start working with TensorFlow and
-Keras (using the module system). To get started, import [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras)
-as part of your TensorFlow program setup.
-tf.keras is TensorFlow's implementation of the [Keras API
-specification](https://keras.io/). This is a modified example that we
-used for the [Tensorflow page](./tensorflow.md).
-
-```bash
-srun -p ml --gres=gpu:1 -n 1 --pty --mem-per-cpu=8000 bash
-
-module load modenv/ml                           #example output: The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
-
-mkdir python-virtual-environments
-cd python-virtual-environments
-module load TensorFlow                          #example output: Module TensorFlow/1.10.0-PythonAnaconda-3.6 and 1 dependency loaded.
-which python
-python3 -m venv --system-site-packages env      #create virtual environment "env" which inheriting with global site packages
-source env/bin/activate                         #example output: (env) bash-4.2$
-module load TensorFlow
-python
-import tensorflow as tf
-from tensorflow.keras import layers
-
-print(tf.VERSION)                               #example output: 1.10.0
-print(tf.keras.__version__)                     #example output: 2.1.6-tf
-```
-
-As was said the core data structure of Keras is a **model**, a way to
-organize layers. In Keras, you assemble *layers* to build *models*. A
-model is (usually) a graph of layers. For our example we use the most
-common type of model is a stack of layers. The below [example](https://www.tensorflow.org/guide/keras#model_subclassing)
-of using the advanced model with model
-subclassing and custom layers illustrate using TF-Keras API.
-
-```python
-import tensorflow as tf
-from tensorflow.keras import layers
-import numpy as np
-
-# Numpy arrays to train and evaluate a model
-data = np.random.random((50000, 32))
-labels = np.random.random((50000, 10))
-
-# Create a custom layer by subclassing
-class MyLayer(layers.Layer):
-
-  def __init__(self, output_dim, **kwargs):
-    self.output_dim = output_dim
-    super(MyLayer, self).__init__(**kwargs)
-
-# Create the weights of the layer
-  def build(self, input_shape):
-    shape = tf.TensorShape((input_shape[1], self.output_dim))
-# Create a trainable weight variable for this layer
-    self.kernel = self.add_weight(name='kernel',
-                                  shape=shape,
-                                  initializer='uniform',
-                                  trainable=True)
-    super(MyLayer, self).build(input_shape)
-# Define the forward pass
-  def call(self, inputs):
-    return tf.matmul(inputs, self.kernel)
-
-# Specify how to compute the output shape of the layer given the input shape.
-  def compute_output_shape(self, input_shape):
-    shape = tf.TensorShape(input_shape).as_list()
-    shape[-1] = self.output_dim
-    return tf.TensorShape(shape)
-
-# Serializing the layer
-  def get_config(self):
-    base_config = super(MyLayer, self).get_config()
-    base_config['output_dim'] = self.output_dim
-    return base_config
-
-  @classmethod
-  def from_config(cls, config):
-    return cls(**config)
-# Create a model using your custom layer
-model = tf.keras.Sequential([
-    MyLayer(10),
-    layers.Activation('softmax')])
-
-# The compile step specifies the training configuration
-model.compile(optimizer=tf.compat.v1.train.RMSPropOptimizer(0.001),
-              loss='categorical_crossentropy',
-              metrics=['accuracy'])
-
-# Trains for 10 epochs(steps).
-model.fit(data, labels, batch_size=32, epochs=10)
-```
-
-## Running the sbatch script on ML modules (modenv/ml)
-
-Generally, for machine learning purposes ml partition is used but for
-some special issues, SCS5 partition can be useful. The following sbatch
-script will automatically execute the above Python script on ml
-partition. If you have a question about the sbatch script see the
-article about [SLURM](./../jobs_and_resources/binding_and_distribution_of_tasks.md).
-Keep in mind that you need to put the executable file (Keras_example) with
-python code to the same folder as bash script or specify the path.
-
-```bash
-#!/bin/bash
-#SBATCH --mem=4GB                         # specify the needed memory
-#SBATCH -p ml                             # specify ml partition
-#SBATCH --gres=gpu:1                      # use 1 GPU per node (i.e. use one GPU per task)
-#SBATCH --nodes=1                         # request 1 node
-#SBATCH --time=00:05:00                   # runs for 5 minutes
-#SBATCH -c 16                             # how many cores per task allocated
-#SBATCH -o HLR_Keras_example.out          # save output message under HLR_${SLURMJOBID}.out
-#SBATCH -e HLR_Keras_example.err          # save error messages under HLR_${SLURMJOBID}.err
-
-module load modenv/ml
-module load TensorFlow
-
-python Keras_example.py
-
-## when finished writing, submit with:  sbatch <script_name>
-```
-
-Output results and errors file you can see in the same folder in the
-corresponding files after the end of the job. Part of the example
-output:
-
-```
-......
-Epoch 9/10
-50000/50000 [==============================] - 2s 37us/sample - loss: 11.5159 - acc: 0.1000
-Epoch 10/10
-50000/50000 [==============================] - 2s 37us/sample - loss: 11.5159 - acc: 0.1020
-```
-
-## Tensorflow 2
-
-[TensorFlow 2.0](https://blog.tensorflow.org/2019/09/tensorflow-20-is-now-available.html)
-is a significant milestone for the
-TensorFlow and the community. There are multiple important changes for
-users.
-
-Tere are a number of TensorFlow 2 modules for both ml and scs5
-partitions in Taurus (2.0 (anaconda), 2.0 (python), 2.1 (python))
-(11.04.20). Please check **TODO MISSING DOC**(the software modules list)(./SoftwareModulesList.md
-for the information about available
-modules.
-
-<span style="color:red">**NOTE**</span>: Tensorflow 2 of the
-current version is loading by default as a Tensorflow module.
-
-TensorFlow 2.0 includes many API changes, such as reordering arguments,
-renaming symbols, and changing default values for parameters. Thus in
-some cases, it makes code written for the TensorFlow 1 not compatible
-with TensorFlow 2. However, If you are using the high-level APIs
-**(tf.keras)** there may be little or no action you need to take to make
-your code fully TensorFlow 2.0 [compatible](https://www.tensorflow.org/guide/migrate).
-It is still possible to run 1.X code,
-unmodified ([except for contrib](https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md)
-), in TensorFlow 2.0:
-
-```python
-import tensorflow.compat.v1 as tf
-tf.disable_v2_behavior()                  #instead of "import tensorflow as tf"
-```
-
-To make the transition to TF 2.0 as seamless as possible, the TensorFlow
-team has created the [tf_upgrade_v2](https://www.tensorflow.org/guide/upgrade)
-utility to help transition legacy code to the new API.
-
-## F.A.Q:
diff --git a/doc.zih.tu-dresden.de/docs/software/licenses.md b/doc.zih.tu-dresden.de/docs/software/licenses.md
index af7a4e376f22a0711df8eaff944bd7830367cacd..3173cf98a1b9987c87a74e5175fc7746236613d9 100644
--- a/doc.zih.tu-dresden.de/docs/software/licenses.md
+++ b/doc.zih.tu-dresden.de/docs/software/licenses.md
@@ -1,6 +1,6 @@
 # Use of External Licenses
 
-It is possible (please [contact the support team](../support.md) first) for users to install
+It is possible (please [contact the support team](../support/support.md) first) for users to install
 their own software and use their own license servers, e.g.  FlexLM. The outbound IP addresses from
 ZIH systems are:
 
diff --git a/doc.zih.tu-dresden.de/docs/software/machine_learning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
index e80e6c346dfbeff977fdf74fc251507cc171bbcb..f2e5f24aa9f4f8e5f8fb516310b842584d30a614 100644
--- a/doc.zih.tu-dresden.de/docs/software/machine_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -1,59 +1,169 @@
 # Machine Learning
 
-On the machine learning nodes, you can use the tools from [IBM Power
-AI](power_ai.md).
+This is an introduction to running machine learning applications on ZIH systems.
+For machine learning purposes, we recommend using the partitions `alpha` and/or `ml`.
 
-## Interactive Session Examples
+## Partition `ml`
 
-### Tensorflow-Test
+The compute nodes of the partition ML are based on the
+[Power9 architecture](https://www.ibm.com/it-infrastructure/power/power9) from IBM. The system was
+created for AI challenges, analytics, and working with data-intensive workloads and accelerated
+databases.
 
-    tauruslogin6 :~> srun -p ml --gres=gpu:1 -n 1 --pty --mem-per-cpu=10000 bash
-    srun: job 4374195 queued and waiting for resources
-    srun: job 4374195 has been allocated resources
-    taurusml22 :~> ANACONDA2_INSTALL_PATH='/opt/anaconda2'
-    taurusml22 :~> ANACONDA3_INSTALL_PATH='/opt/anaconda3'
-    taurusml22 :~> export PATH=$ANACONDA3_INSTALL_PATH/bin:$PATH
-    taurusml22 :~> source /opt/DL/tensorflow/bin/tensorflow-activate
-    taurusml22 :~> tensorflow-test
-    Basic test of tensorflow - A Hello World!!!...
+The main feature of the nodes is the ability to work with the
+[NVIDIA Tesla V100](https://www.nvidia.com/en-gb/data-center/tesla-v100/) GPU with **NVLink**
+support that allows a total bandwidth of up to 300 GB/s. Each node on the
+partition ML has 6x Tesla V100 GPUs. You can find a detailed specification of the partition in our
+[Power9 documentation](../jobs_and_resources/power9.md).
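+
+A quick way to check which GPUs are actually visible within a job on this partition is a short
+interactive session (a sketch; adjust the `srun` options to your needs):
+
+```console
+marie@login$ srun -p ml --gres=gpu:1 -n 1 --pty --mem-per-cpu=8000 bash
+marie@ml$ nvidia-smi
+```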
 
-    #or:
-    taurusml22 :~> module load TensorFlow/1.10.0-PythonAnaconda-3.6
+!!! note
 
-Or to use the whole node: `--gres=gpu:6 --exclusive --pty`
+    The partition ML is based on the Power9 architecture, which means that the software built
+    for x86_64 will not work on this partition. Also, users need to use the modules which are
+    specifically built for this architecture (from `modenv/ml`).
 
-### In Singularity container:
+### Modules
 
-    rotscher@tauruslogin6:~&gt; srun -p ml --gres=gpu:6 --pty bash
-    [rotscher@taurusml22 ~]$ singularity shell --nv /scratch/singularity/powerai-1.5.3-all-ubuntu16.04-py3.img
-    Singularity powerai-1.5.3-all-ubuntu16.04-py3.img:~&gt; export PATH=/opt/anaconda3/bin:$PATH
-    Singularity powerai-1.5.3-all-ubuntu16.04-py3.img:~&gt; . /opt/DL/tensorflow/bin/tensorflow-activate
-    Singularity powerai-1.5.3-all-ubuntu16.04-py3.img:~&gt; tensorflow-test
+On the partition ML, load the module environment:
 
-## Additional libraries
+```console
+marie@ml$ module load modenv/ml
+The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
+```
+
+### Power AI
+
+There are tools provided by IBM that work on the partition ML and are related to AI tasks.
+For more information, see our [Power AI documentation](power_ai.md).
+
+## Partition `alpha`
+
+Another partition for machine learning tasks is Alpha. It is mainly dedicated to
+[ScaDS.AI](https://scads.ai/) topics. Each node on Alpha has 2x AMD EPYC CPUs, 8x NVIDIA A100-SXM4
+GPUs, 1 TB RAM and 3.5 TB local space (`/tmp`) on an NVMe device. You can find more details of the
+partition in our [Alpha Centauri](../jobs_and_resources/alpha_centauri.md) documentation.
+
+### Modules
+
+On the partition `alpha`, load the module environment:
+
+```console
+marie@alpha$ module load modenv/hiera
+The following have been reloaded with a version change:  1) modenv/ml => modenv/hiera
+```
+
+!!! note
+
+    On the partition Alpha, the most recent modules are built in `hiera`. Alternative modules
+    might be built in `scs5`.
+
+## Machine Learning via Console
+
+### Python and Virtual Environments
+
+Python users should use a [virtual environment](python_virtual_environments.md) when conducting
+machine learning tasks via console.
+
+For more details on machine learning or data science with Python see
+[data analytics with Python](data_analytics_with_python.md).
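+
+A minimal sketch for creating and activating such a virtual environment inside a
+[workspace](../data_lifecycle/workspaces.md) looks as follows; the paths are placeholders and have
+to be adapted:
+
+```console
+marie@login$ mkdir </path/to/workspace/python-environments>
+marie@login$ virtualenv --system-site-packages </path/to/workspace/python-environments/env>
+marie@login$ source </path/to/workspace/python-environments/env>/bin/activate
+```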
+
+### R
+
+R also supports machine learning via console. It does not require a virtual environment due to
+its different package management.
+
+For more details on machine learning or data science with R see
+[data analytics with R](data_analytics_with_r.md#r-console).
+
+## Machine Learning with Jupyter
+
+The [Jupyter Notebook](https://jupyter.org/) is an open-source web application that allows you to
+create documents containing live code, equations, visualizations, and narrative text.
+[JupyterHub](../access/jupyterhub.md) allows you to work with machine learning frameworks (e.g.
+TensorFlow or PyTorch) on ZIH systems and to run your Jupyter notebooks on HPC nodes.
+
+After accessing JupyterHub, you can start a new session and configure it. For machine learning
+purposes, select either the partition **Alpha** or **ML** and the resources your application
+requires.
+
+In your session you can use [Python](data_analytics_with_python.md#jupyter-notebooks),
+[R](data_analytics_with_r.md#r-in-jupyterhub) or [RStudio](data_analytics_with_rstudio.md) for your
+machine learning and data science topics.
+
+## Machine Learning with Containers
+
+Some machine learning tasks require using containers. In the HPC domain, the
+[Singularity](https://singularity.hpcng.org/) container system is a widely used tool. Docker
+containers can also be used by Singularity. You can find further information on working with
+containers on ZIH systems in our [containers documentation](containers.md).
+
+There are two sources for containers for Power9 architecture with TensorFlow and PyTorch on the
+board:
+
+* [TensorFlow-ppc64le](https://hub.docker.com/r/ibmcom/tensorflow-ppc64le):
+  Community-supported `ppc64le` docker container for TensorFlow.
+* [PowerAI container](https://hub.docker.com/r/ibmcom/powerai/):
+  Official Docker container with TensorFlow, PyTorch and many other packages.
+
+!!! note
+
+    You could find other versions of software in the container on the "tag" tab on the docker web
+    page of the container.
+
+In the following example, we build a Singularity container with TensorFlow from the DockerHub and
+start it:
+
+```console
+marie@ml$ singularity build my-ML-container.sif docker://ibmcom/tensorflow-ppc64le    #create a container from the DockerHub with the latest TensorFlow version
+[...]
+marie@ml$ singularity run --nv my-ML-container.sif    #run the container my-ML-container.sif with NVIDIA GPU support. You can also work with your container via: singularity shell, singularity exec
+[...]
+```
+
+## Additional Libraries for Machine Learning
 
 The following NVIDIA libraries are available on all nodes:
 
-|       |                                       |
-|-------|---------------------------------------|
-| NCCL  | /usr/local/cuda/targets/ppc64le-linux |
-| cuDNN | /usr/local/cuda/targets/ppc64le-linux |
+| Name  |  Path                                   |
+|-------|-----------------------------------------|
+| NCCL  | `/usr/local/cuda/targets/ppc64le-linux` |
+| cuDNN | `/usr/local/cuda/targets/ppc64le-linux` |
+
+!!! note
 
-Note: For optimal NCCL performance it is recommended to set the
-**NCCL_MIN_NRINGS** environment variable during execution. You can try
-different values but 4 should be a pretty good starting point.
+    For optimal NCCL performance it is recommended to set the
+    **NCCL_MIN_NRINGS** environment variable during execution. You can try
+    different values but 4 should be a pretty good starting point.
 
-    export NCCL_MIN_NRINGS=4
+```console
+marie@compute$ export NCCL_MIN_NRINGS=4
+```
 
-\<span style="color: #222222; font-size: 1.385em;">HPC\</span>
+### HPC-Related Software
 
 The following HPC related software is installed on all nodes:
 
-|                  |                        |
-|------------------|------------------------|
-| IBM Spectrum MPI | /opt/ibm/spectrum_mpi/ |
-| PGI compiler     | /opt/pgi/              |
-| IBM XLC Compiler | /opt/ibm/xlC/          |
-| IBM XLF Compiler | /opt/ibm/xlf/          |
-| IBM ESSL         | /opt/ibmmath/essl/     |
-| IBM PESSL        | /opt/ibmmath/pessl/    |
+| Name             |  Path                    |
+|------------------|--------------------------|
+| IBM Spectrum MPI | `/opt/ibm/spectrum_mpi/` |
+| PGI compiler     | `/opt/pgi/`              |
+| IBM XLC Compiler | `/opt/ibm/xlC/`          |
+| IBM XLF Compiler | `/opt/ibm/xlf/`          |
+| IBM ESSL         | `/opt/ibmmath/essl/`     |
+| IBM PESSL        | `/opt/ibmmath/pessl/`    |
+
+## Datasets for Machine Learning
+
+There are many different datasets designed for research purposes. If you would like to download some
+of them, keep in mind that many machine learning libraries have direct access to public datasets
+without downloading them, e.g. [TensorFlow Datasets](https://www.tensorflow.org/datasets). If you
+still need to download some datasets, use the [datamover](../data_transfer/datamover.md) machine.
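+
+For instance, an archive could be fetched via the `dt` wrapper commands of the datamover (a
+sketch; the URL only serves as an example):
+
+```console
+marie@login$ dtwget https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
+```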
+
+### The ImageNet Dataset
+
+The ImageNet project is a large visual database designed for use in visual object recognition
+software research. In order to save space in the filesystem and to avoid having multiple duplicates
+of it lying around, we have put a copy of the ImageNet database (ILSVRC2012 and ILSVRC2017) under
+`/scratch/imagenet`, which you can use without having to download it again. In the future, the
+ImageNet dataset will be available in the
+[Warm Archive](../data_lifecycle/workspaces.md#mid-term-storage). ILSVRC2017 also includes a dataset
+for object recognition from videos. Please respect the corresponding
+[Terms of Use](https://image-net.org/download.php).
diff --git a/doc.zih.tu-dresden.de/docs/software/mathematics.md b/doc.zih.tu-dresden.de/docs/software/mathematics.md
index 9edba02881bb0cf154ed3828b6a7d84b77fdb257..9629e76b77cd8779a993c6c1f3bc5b0fe68d1140 100644
--- a/doc.zih.tu-dresden.de/docs/software/mathematics.md
+++ b/doc.zih.tu-dresden.de/docs/software/mathematics.md
@@ -1,11 +1,9 @@
 # Mathematics Applications
 
-!!! cite
+!!! cite "Galileo Galilei"
 
     Nature is written in mathematical language.
 
-    (Galileo Galilei)
-
 <!--*Please do not run expensive interactive sessions on the login nodes.  Instead, use* `srun --pty-->
 <!--...` *to let the batch system place it on a compute node.*-->
 
@@ -16,8 +14,8 @@ interface capabilities within a document-like user interface paradigm.
 
 ### Fonts
 
-To remotely use the graphical frontend, you have to add the Mathematica fonts to the local
-fontmanager.
+To remotely use the graphical front-end, you have to add the Mathematica fonts to the local
+font manager.
 
 #### Linux Workstation
 
@@ -34,7 +32,7 @@ You have to add additional Mathematica fonts at your local PC
 [download fonts archive](misc/Mathematica-Fonts.zip).
 
 If you use **Xming** as X-server at your PC (refer to
-[remote access from Windows](../access/ssh_mit_putty.md), follow these steps:
+[remote access from Windows](../access/ssh_login.md), follow these steps:
 
 1. Create a new folder `Mathematica` in the directory `fonts` of the installation directory of Xming
    (mostly: `C:\\Programme\\Xming\\fonts\\`)
@@ -149,15 +147,15 @@ srun --pty matlab -nodisplay -r basename_of_your_matlab_script #NOTE: you must o
     While running your calculations as a script this way is possible, it is generally frowned upon,
     because you are occupying Matlab licenses for the entire duration of your calculation when doing so.
     Since the available licenses are limited, it is highly recommended you first compile your script via
-    the Matlab Compiler (mcc) before running it for a longer period of time on our systems.  That way,
+    the Matlab Compiler (`mcc`) before running it for a longer period of time on our systems.  That way,
     you only need to check-out a license during compile time (which is relatively short) and can run as
     many instances of your calculation as you'd like, since it does not need a license during runtime
     when compiled to a binary.
 
 You can find detailed documentation on the Matlab compiler at
-[Mathworks' help pages](https://de.mathworks.com/help/compiler/).
+[MathWorks' help pages](https://de.mathworks.com/help/compiler/).
 
-### Using the MATLAB Compiler (mcc)
+### Using the MATLAB Compiler
 
 Compile your `.m` script into a binary:
 
@@ -184,12 +182,12 @@ zih$ srun ./run_compiled_executable.sh $EBROOTMATLAB
 -   If you want to run your code in parallel, please request as many
     cores as you need!
 -   start a batch job with the number N of processes
--   example for N= 4: \<pre> srun -c 4 --pty --x11=first bash\</pre>
+-   example for N= 4: `srun -c 4 --pty --x11=first bash`
 -   run Matlab with the GUI or the CLI or with a script
--   inside use \<pre>matlabpool open 4\</pre> to start parallel
+-   inside use `matlabpool open 4` to start parallel
     processing
 
--   example for 1000\*1000 matrixmutliplication
+-   example for 1000*1000 matrix multiplication
 
 !!! example
 
@@ -201,13 +199,13 @@ zih$ srun ./run_compiled_executable.sh $EBROOTMATLAB
 -   to close parallel task:
 `matlabpool close`
 
-#### With Parfor
+#### With parfor
 
 - start a batch job with the number N of processes (e.g. N=12)
 - inside use `matlabpool open N` or
   `matlabpool(N)` to start parallel processing. It will use
   the 'local' configuration by default.
-- Use 'parfor' for a parallel loop, where the **independent** loop
+- Use `parfor` for a parallel loop, where the **independent** loop
   iterations are processed by N threads
 
 !!! example
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-evaluate-menu.png b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-evaluate-menu.png
new file mode 100644
index 0000000000000000000000000000000000000000..6d425818925017b52e455ddfb92b00904a0f302d
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-evaluate-menu.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-graph-result.png b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-graph-result.png
new file mode 100644
index 0000000000000000000000000000000000000000..8dbbec668465134bbd35a78d63052b7c7d253d0e
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-graph-result.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-parallel-plot.png b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-parallel-plot.png
new file mode 100644
index 0000000000000000000000000000000000000000..3702d69383fe4cb248456102f97e8a7fc8127ca0
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-parallel-plot.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-still-running-jobs.png b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-still-running-jobs.png
new file mode 100644
index 0000000000000000000000000000000000000000..d4cd05138805d13e6eedd61b3ad8b0c5c9416afe
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/OmniOpt-still-running-jobs.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/Pytorch_jupyter_module.png b/doc.zih.tu-dresden.de/docs/software/misc/Pytorch_jupyter_module.png
new file mode 100644
index 0000000000000000000000000000000000000000..5f3e324da2114dc24382f57dfeb14c10554d60f5
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/Pytorch_jupyter_module.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/data_analytics_with_r_RStudio_launcher.png b/doc.zih.tu-dresden.de/docs/software/misc/data_analytics_with_r_RStudio_launcher.png
deleted file mode 100644
index fd50be1824655ef7e39c2adf74287fa14a716148..0000000000000000000000000000000000000000
Binary files a/doc.zih.tu-dresden.de/docs/software/misc/data_analytics_with_r_RStudio_launcher.png and /dev/null differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/data_analytics_with_rstudio_launcher.jpg b/doc.zih.tu-dresden.de/docs/software/misc/data_analytics_with_rstudio_launcher.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8f12eb7e8afc8c1c12c1d772ccb391791ec3b550
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/data_analytics_with_rstudio_launcher.jpg differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/example-spark.sbatch b/doc.zih.tu-dresden.de/docs/software/misc/example-spark.sbatch
index 5a418a9c5e98f70b027618a4da1158010619556b..2fcf3aa39b8e66b004fa0fed621475e3200f9d76 100644
--- a/doc.zih.tu-dresden.de/docs/software/misc/example-spark.sbatch
+++ b/doc.zih.tu-dresden.de/docs/software/misc/example-spark.sbatch
@@ -3,10 +3,10 @@
 #SBATCH --partition=haswell
 #SBATCH --nodes=1
 #SBATCH --exclusive
-#SBATCH --mem=60G
+#SBATCH --mem=50G
 #SBATCH -J "example-spark"
 
-ml Spark
+ml Spark/3.0.1-Hadoop-2.7-Java-1.8-Python-3.7.4-GCCcore-8.3.0
 
 function myExitHandler () {
 	stop-all.sh
@@ -20,7 +20,7 @@ trap myExitHandler EXIT
 
 start-all.sh
 
-spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.4.jar 1000
+spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.1.jar 1000
 
 stop-all.sh
 
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/hyperparameter_optimization-OmniOpt-GUI.png b/doc.zih.tu-dresden.de/docs/software/misc/hyperparameter_optimization-OmniOpt-GUI.png
new file mode 100644
index 0000000000000000000000000000000000000000..c292e7cefb46224585894acc8623e1bfa9878052
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/hyperparameter_optimization-OmniOpt-GUI.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/hyperparameter_optimization-OmniOpt-final-command.png b/doc.zih.tu-dresden.de/docs/software/misc/hyperparameter_optimization-OmniOpt-final-command.png
new file mode 100644
index 0000000000000000000000000000000000000000..b0b714462939f9acbd2e25e0d0eb39b431dba5de
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/hyperparameter_optimization-OmniOpt-final-command.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/misc/tensorflow_jupyter_module.png b/doc.zih.tu-dresden.de/docs/software/misc/tensorflow_jupyter_module.png
new file mode 100644
index 0000000000000000000000000000000000000000..1327ee6304faf4b293c385981a750f362063ecbf
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/software/misc/tensorflow_jupyter_module.png differ
diff --git a/doc.zih.tu-dresden.de/docs/software/overview.md b/doc.zih.tu-dresden.de/docs/software/overview.md
index 835d22204fcda298899f49d4b2a95b092b7e3da1..f8f4bf32b66c73234ad6db3cb728662e0d33dd7e 100644
--- a/doc.zih.tu-dresden.de/docs/software/overview.md
+++ b/doc.zih.tu-dresden.de/docs/software/overview.md
@@ -29,11 +29,11 @@ list]**todo link**.
 
 <!--After logging in, you are on one of the login nodes. They are not meant for work, but only for the-->
 <!--login process and short tests. Allocating resources will be done by batch system-->
-<!--[SLURM](../jobs_and_resources/slurm.md).-->
+<!--[Slurm](../jobs_and_resources/slurm.md).-->
 
 ## Modules
 
-Usage of software on HPC systems, e.g., frameworks, compilers, loader and libraries, is
+Usage of software on ZIH systems, e.g., frameworks, compilers, loaders and libraries, is
 almost always managed by a **modules system**. Thus, it is crucial to be familiar with the
 [modules concept and its commands](modules.md).  A module is a user interface that provides
 utilities for the dynamic modification of a user's environment without manual modifications.
@@ -47,8 +47,8 @@ The [Jupyter Notebook](https://jupyter.org/) is an open-source web application t
 documents containing live code, equations, visualizations, and narrative text. There is a
 [JupyterHub](../access/jupyterhub.md) service on ZIH systems, where you can simply run your Jupyter
 notebook on compute nodes using [modules](#modules), preloaded or custom virtual environments.
-Moreover, you can run a [manually created remote jupyter server](deep_learning.md) for more specific
-cases.
+Moreover, you can run a [manually created remote Jupyter server](../archive/install_jupyter.md)
+for more specific cases.
 
 ## Containers
 
diff --git a/doc.zih.tu-dresden.de/docs/software/papi.md b/doc.zih.tu-dresden.de/docs/software/papi.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d96cc58f4453692ad7b57abe3e56abda1539290
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/papi.md
@@ -0,0 +1,140 @@
+# PAPI Library
+
+## Introduction
+
+The **P**erformance **A**pplication **P**rogramming **I**nterface (PAPI) provides tool designers and
+application engineers with a consistent interface and methodology for the use of low-level
+performance counter hardware found across the entire compute system (i.e. CPUs, GPUs, on/off-chip
+memory, interconnects, I/O system, energy/power, etc.). PAPI enables users to see, in near real
+time, the relations between software performance and hardware events across the entire computer
+system.
+
+Only the basic usage is outlined in this compendium. For a comprehensive PAPI user manual please
+refer to the [PAPI wiki website](https://bitbucket.org/icl/papi/wiki/Home).
+
+## PAPI Counter Interfaces
+
+To collect performance events, PAPI provides two APIs, the *high-level* and *low-level* API.
+
+### High-Level API
+
+The high-level API provides the ability to record performance events inside instrumented regions of
+serial, multi-processing (MPI, SHMEM) and thread (OpenMP, Pthreads) parallel applications. It is
+designed for simplicity, not flexibility. For more details, refer to the
+[high-level API documentation](https://bitbucket.org/icl/papi/wiki/PAPI-HL.md).
+
+The following code example shows the use of the high-level API by marking a code section.
+
+??? example "C"
+
+    ```C
+    #include "papi.h"
+
+    int main()
+    {
+        int retval;
+
+        retval = PAPI_hl_region_begin("computation");
+        if ( retval != PAPI_OK )
+            handle_error(1);
+
+        /* Do some computation here */
+
+        retval = PAPI_hl_region_end("computation");
+        if ( retval != PAPI_OK )
+            handle_error(1);
+    }
+    ```
+
+??? example "Fortran"
+
+    ```fortran
+    #include "fpapi.h"
+
+    program main
+    integer retval
+
+    call PAPIf_hl_region_begin("computation", retval)
+    if ( retval .NE. PAPI_OK ) then
+       write (*,*) "PAPIf_hl_region_begin failed!"
+    end if
+
+    !do some computation here
+
+    call PAPIf_hl_region_end("computation", retval)
+    if ( retval .NE. PAPI_OK ) then
+       write (*,*) "PAPIf_hl_region_end failed!"
+    end if
+
+    end program main
+    ```
+
+Events to be recorded are determined via the environment variable `PAPI_EVENTS` that lists
+comma-separated events for any component (see example below). The output is generated in the
+current directory by default. However, it is recommended to specify an output directory for larger
+measurements, especially for MPI applications, via the environment variable `PAPI_OUTPUT_DIRECTORY`.
+
+!!! example "Setting performance events and output directory"
+
+    ```bash
+    export PAPI_EVENTS="PAPI_TOT_INS,PAPI_TOT_CYC"
+    export PAPI_OUTPUT_DIRECTORY="/scratch/measurement"
+    ```
+
+This will generate a directory called `papi_hl_output` in `/scratch/measurement` that contains one or
+more output files in JSON format.
+
+### Low-Level API
+
+The low-level API manages hardware events in user-defined groups
+called Event Sets. It is meant for experienced application programmers and tool developers wanting
+fine-grained measurement and control of the PAPI interface. It provides access to both PAPI preset
+and native events, and supports all installed components. For more details, refer to the
+[low-level API documentation](https://bitbucket.org/icl/papi/wiki/PAPI-LL.md).
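+
+The following sketch illustrates the basic pattern of the low-level API: initialize the library,
+create an event set, add events to it, and start/stop counting around a code region. It is meant
+as a minimal sketch only; it reuses the two preset events from the high-level example above, and
+their availability on the desired architecture should be checked first (see below).
+
+??? example "Low-level API sketch (C)"
+
+    ```C
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include "papi.h"
+
+    int main()
+    {
+        int eventset = PAPI_NULL;
+        long long values[2];
+
+        /* Initialize the PAPI library */
+        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
+            exit(1);
+
+        /* Create an event set and add two preset events */
+        if (PAPI_create_eventset(&eventset) != PAPI_OK)
+            exit(1);
+        if (PAPI_add_event(eventset, PAPI_TOT_INS) != PAPI_OK)
+            exit(1);
+        if (PAPI_add_event(eventset, PAPI_TOT_CYC) != PAPI_OK)
+            exit(1);
+
+        /* Start counting */
+        if (PAPI_start(eventset) != PAPI_OK)
+            exit(1);
+
+        /* Do some computation here */
+
+        /* Stop counting and read the counter values */
+        if (PAPI_stop(eventset, values) != PAPI_OK)
+            exit(1);
+
+        printf("Total instructions: %lld\n", values[0]);
+        printf("Total cycles: %lld\n", values[1]);
+
+        return 0;
+    }
+    ```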
+
+## Usage on ZIH Systems
+
+Before you start a PAPI measurement, check which events are available on the desired architecture.
+For this purpose, PAPI offers the tools `papi_avail` and `papi_native_avail`. If you want to measure
+multiple events, please check which events can be measured concurrently using the tool
+`papi_event_chooser`. For more details on the PAPI tools, refer to the
+[PAPI utilities documentation](https://bitbucket.org/icl/papi/wiki/PAPI-Overview.md#markdown-header-papi-utilities).
+
+!!! hint
+
+    The PAPI tools must be run on the compute node, using an interactive shell or job.
+
+!!! example "Example: Determine the events on the partition `romeo` from a login node"
+
+    ```console
+    marie@login$ module load PAPI
+    marie@login$ salloc -A <project> --partition=romeo
+    [...]
+    marie@login$ srun papi_avail
+    marie@login$ srun papi_native_avail
+    [...]
+    # Exit with Ctrl+D
+    ```
+
+Instrument your application with either the high-level or low-level API. Load the PAPI module and
+compile your application against the PAPI library.
+
+!!! example
+
+    ```console
+    marie@login$ module load PAPI
+    marie@login$ gcc app.c -o app -lpapi
+    marie@login$ salloc -A <project> --partition=romeo
+    marie@login$ srun ./app
+    [...]
+    # Exit with Ctrl+D
+    ```
+
+!!! hint
+
+    The PAPI modules on ZIH systems are installed with the default `perf_event` component only. If
+    you want to measure, e.g., GPU events, you have to install your own PAPI. Please refer to the
+    instructions on how to
+    [download and install PAPI](https://bitbucket.org/icl/papi/wiki/Downloading-and-Installing-PAPI.md).
+    To install PAPI with additional components, you have to specify them during the configure step;
+    see the [overview of components](https://bitbucket.org/icl/papi/wiki/PAPI-Overview.md#markdown-header-components)
+    for details.
diff --git a/doc.zih.tu-dresden.de/docs/software/pika.md b/doc.zih.tu-dresden.de/docs/software/pika.md
index 6cfa085df5433aff220f1195f1b14d35887e0784..36aab905dbf33602c64333e2a695070ffc0ad9db 100644
--- a/doc.zih.tu-dresden.de/docs/software/pika.md
+++ b/doc.zih.tu-dresden.de/docs/software/pika.md
@@ -1,70 +1,76 @@
-# Performance Analysis of HPC Applications with Pika
+# Performance Analysis of HPC Applications with PIKA
 
-Pika is a hardware performance monitoring stack to identify inefficient HPC jobs. Taurus users have
-the possibility to visualize and analyze the efficiency of their jobs via the [Pika web
-interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/z../jobs_and_resources).
+PIKA is a hardware performance monitoring stack to identify inefficient HPC jobs. Users of ZIH
+systems have the possibility to visualize and analyze the efficiency of their jobs via the
+[PIKA web interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/z../jobs_and_resources).
 
-**Hint:** To understand this small guide, it is recommended to open the
-[web
-interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/z../jobs_and_resources)
-in a separate window. Furthermore, at least one real HPC job should have been submitted on Taurus.
+!!! hint
+
+    To understand this small guide, it is recommended to open the
+    [web interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/z../jobs_and_resources)
+    in a separate window. Furthermore, at least one real HPC job should have been submitted.
 
 ## Overview
 
-Pika consists of several components and tools.  It uses the collection daemon collectd, InfluxDB to
-store time-series data and MariaDB to store job metadata.  Furthermore, it provides a powerful [web
-interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/z../jobs_and_resources)
+PIKA consists of several components and tools. It uses the collection daemon collectd, InfluxDB to
+store time-series data and MariaDB to store job metadata. Furthermore, it provides a powerful
+[web interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/z../jobs_and_resources)
 for the visualization and analysis of job performance data.
 
 ## Table View and Job Search
 
-The analysis of HPC jobs in Pika is designed as a top-down approach. Starting from the table view,
+The analysis of HPC jobs in PIKA is designed as a top-down approach. Starting from the table view,
 users can either analyze running or completed jobs. They can navigate from groups of jobs with the
 same name to the metadata of an individual job and finally investigate the job’s runtime metrics in
 a timeline view.
 
-To find jobs with specific properties, the table can be sorted by any column, e.g. by consumed CPU
+To find jobs with specific properties, the table can be sorted by any column, e.g., by consumed CPU
 hours to find jobs where an optimization has a large impact on the system utilization. Additionally,
 there is a filter mask to find jobs that match several properties. When a job has been selected, the
 timeline view opens.
 
 ## Timeline Visualization
 
-Pika provides timeline charts to visualize the resource utilization of a job over time.  After a job
-is completed, timeline charts can help to identify periods of inefficient resource usage.  However,
-they are also suitable for the live assessment of performance during the job’s runtime.  In case of
+PIKA provides timeline charts to visualize the resource utilization of a job over time. After a job
+is completed, timeline charts can help to identify periods of inefficient resource usage. However,
+they are also suitable for the live assessment of performance during the job’s runtime. In case of
 unexpected performance behavior, users can cancel the job, thus avoiding long execution with subpar
 performance.
 
-Pika provides the following runtime metrics:
+PIKA provides the following runtime metrics:
 
 |Metric| Hardware Unit|
 |---|---|
 |CPU Usage|CPU core|
-|IPC|CPU core|
+|IPC (instructions per cycle)|CPU core|
 |FLOPS (normalized to single precision) |CPU core|
 |Main Memory Bandwidth|CPU socket|
 |CPU Power|CPU socket|
 |Main Memory Utilization|node|
-|IO Bandwidth (local, Lustre) |node|
-|IO Metadata (local, Lustre) |node|
+|I/O Bandwidth (local, Lustre) |node|
+|I/O Metadata (local, Lustre) |node|
 |GPU Usage|GPU device|
 |GPU Memory Utilization|GPU device|
 |GPU Power Consumption|GPU device|
 |GPU Temperature|GPU device|
 
 Each monitored metric is represented by a timeline, whereby metrics with the same unit and data
-source are displayed in a common chart, e.g. different Lustre metadata operations.  Each metric is
+source are displayed in a common chart, e.g., different Lustre metadata operations. Each metric is
 measured with a certain granularity concerning the hardware, e.g. per hardware thread, per CPU
 socket or per node.
 
-**Be aware that CPU socket or node metrics can share the resources of other jobs running on the same
-CPU socket or node. This can result e.g. in cache perturbation and thus a sub-optimal performance.
-To get valid performance data for those metrics, it is recommended to submit an exclusive job!**
+!!! hint
+
+    Be aware that metrics on CPU socket or node level are shared with other jobs running on the
+    same CPU socket or node. This can result, e.g., in cache perturbation and thus sub-optimal
+    performance. To get valid performance data for those metrics, it is recommended to submit an
+    exclusive job!
+
+!!! note
 
-**Note:** To reduce the amount of recorded data, Pika summarizes per hardware thread metrics to the
-corresponding physical core. In terms of simultaneous multithreading (SMT), Pika only provides
-performance data per physical core.
+    To reduce the amount of recorded data, PIKA summarizes per-hardware-thread metrics to the
+    corresponding physical core. In terms of simultaneous multithreading (SMT), PIKA only provides
+    performance data per physical core.
 
 The following table explains different timeline visualization modes.
 By default, each timeline shows the average value over all hardware units (HUs) per measured interval.
@@ -95,9 +101,9 @@ from the time series data for each job.  To limit the jobs displayed, a time per
 specified.
 
 To analyze the footprints of a larger number of jobs, a visualization with histograms and scatter
-plots can be used. Pika uses histograms to illustrate the number of jobs that fit into a category or
+plots can be used. PIKA uses histograms to illustrate the number of jobs that fit into a category or
 bin. For job states and job tags there is a fixed number of categories or values. For other
-footprint metrics Pika uses a binning with a user-configurable bin size, since the value range
+footprint metrics PIKA uses a binning with a user-configurable bin size, since the value range
 usually contains an unlimited number of values.  A scatter plot enables the combined view of two
 footprint metrics (except for job states and job tags), which is particularly useful for
 investigating their correlation.
@@ -105,7 +111,7 @@ investigating their correlation.
 ## Hints
 
 If users wish to perform their own measurement of performance counters using performance tools other
-than Pika, it is recommended to disable Pika monitoring. This can be done using the following slurm
+than PIKA, it is recommended to disable PIKA monitoring. This can be done using the following slurm
 flags in the job script:
 
 ```Bash
@@ -113,11 +119,11 @@ flags in the job script:
 #SBATCH --comment=no_monitoring
 ```
 
-**Note:** Disabling Pika monitoring is possible only for exclusive jobs!
+!!! note
+
+    Disabling PIKA monitoring is possible only for exclusive jobs!
 
 ## Known Issues
 
-The Pika metric FLOPS is not supported by the Intel Haswell cpu architecture.
-However, Pika provides this metric to show the computational intensity.
+The PIKA metric FLOPS is not supported by the Intel Haswell CPU architecture.
+However, PIKA provides this metric to show the computational intensity.
 **Do not rely on FLOPS on Haswell!** We use the event `AVX_INSTS_CALC` which counts the `insertf128`
 instruction.
diff --git a/doc.zih.tu-dresden.de/docs/software/power_ai.md b/doc.zih.tu-dresden.de/docs/software/power_ai.md
index dc0fa59b3fc53e180bd620dde71df5597c33298f..b4beda5cec2b8b2e1ede4729df7434b6e8c8e7d5 100644
--- a/doc.zih.tu-dresden.de/docs/software/power_ai.md
+++ b/doc.zih.tu-dresden.de/docs/software/power_ai.md
@@ -2,81 +2,56 @@
 
 There are different documentation sources for users to learn more about
 the PowerAI Framework for Machine Learning. In the following the links
-are valid for PowerAI version 1.5.4
+are valid for PowerAI version 1.5.4.
 
-## General Overview:
+!!! warning
+    The information provided here is taken from IBM and is applicable to the partition `ml` only!
 
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/en/SS5SF7_1.5.3/welcome/welcome.htm>"
-    target="\_blank" title="Landing Page">Landing Page\</a> (note that
-    you can select different PowerAI versions with the drop down menu
-    "Change Product or version")
--   \<a
-    href="<https://developer.ibm.com/linuxonpower/deep-learning-powerai/>"
-    target="\_blank" title="PowerAI Developer Portal">PowerAI Developer
-    Portal \</a>(Some Use Cases and examples)
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/en/SS5SF7_1.5.4/navigation/pai_software_pkgs.html>"
-    target="\_blank" title="Included Software Packages">Included
-    Software Packages\</a> (note that you can select different PowerAI
-    versions with the drop down menu "Change Product or version")
-
-## Specific User Howtos. Getting started with...:
+## General Overview
 
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted.htm>"
-    target="\_blank" title="Getting Started with PowerAI">PowerAI\</a>
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_caffe.html>"
-    target="\_blank" title="Caffe">Caffe\</a>
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_tensorflow.html?view=kc>"
-    target="\_blank" title="Tensorflow">TensorFlow\</a>
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_tensorflow_prob.html?view=kc>"
-    target="\_blank" title="Tensorflow Probability">TensorFlow
-    Probability\</a>\<br />This release of PowerAI includes TensorFlow
-    Probability. TensorFlow Probability is a library for probabilistic
-    reasoning and statistical analysis in TensorFlow.
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_tensorboard.html?view=kc>"
-    target="\_blank" title="Tensorboard">TensorBoard\</a>
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_snapml.html>"
-    target="\_blank">Snap ML\</a>\<br />This release of PowerAI includes
-    Snap Machine Learning (Snap ML). Snap ML is a library for training
-    generalized linear models. It is being developed at IBM with the
-    vision to remove training time as a bottleneck for machine learning
-    applications. Snap ML supports many classical machine learning
-    models and scales gracefully to data sets with billions of examples
-    or features. It also offers distributed training, GPU acceleration,
-    and supports sparse data structures.
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_pytorch.html>"
-    target="\_blank">PyTorch\</a>\<br />This release of PowerAI includes
-    the community development preview of PyTorch 1.0 (rc1). PowerAI's
-    PyTorch includes support for IBM's Distributed Deep Learning (DDL)
-    and Large Model Support (LMS).
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_caffe2ONNX.html>"
-    target="\_blank">Caffe2 and ONNX\</a>\<br />This release of PowerAI
-    includes a Technology Preview of Caffe2 and ONNX. Caffe2 is a
-    companion to PyTorch. PyTorch is great for experimentation and rapid
-    development, while Caffe2 is aimed at production environments. ONNX
-    (Open Neural Network Exchange) provides support for moving models
-    between those frameworks.
--   \<a
-    href="<https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_ddl.html?view=kc>"
-    target="\_blank" title="Distributed Deep Learning">Distributed Deep
-    Learning\</a> (DDL). \<br />Works up to 4 TaurusML worker nodes.
-    (Larger models with more nodes are possible with PowerAI Enterprise)
+-   [PowerAI Introduction](https://www.ibm.com/support/knowledgecenter/en/SS5SF7_1.5.3/welcome/welcome.htm)
+    (note that you can select different PowerAI versions with the drop down menu
+    "Change Product or version")
+-   [PowerAI Developer Portal](https://developer.ibm.com/linuxonpower/deep-learning-powerai/)
+    (Some Use Cases and examples)
+-   [Included Software Packages](https://www.ibm.com/support/knowledgecenter/en/SS5SF7_1.5.4/navigation/pai_software_pkgs.html)
+    (note that you can select different PowerAI versions with the drop down menu "Change Product
+    or version")
+
+## Specific User Guides
+
+- [Getting Started with PowerAI](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted.htm)
+- [Caffe](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_caffe.html)
+- [TensorFlow](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_tensorflow.html?view=kc)
+- [TensorFlow Probability](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_tensorflow_prob.html?view=kc)
+  This release of PowerAI includes TensorFlow Probability. TensorFlow Probability is a library
+  for probabilistic reasoning and statistical analysis in TensorFlow.
+- [TensorBoard](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_tensorboard.html?view=kc)
+- [Snap ML](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_snapml.html)
+  This release of PowerAI includes Snap Machine Learning (Snap ML). Snap ML is a library for
+  training generalized linear models. It is being developed at IBM with the
+  vision to remove training time as a bottleneck for machine learning
+  applications. Snap ML supports many classical machine learning
+  models and scales gracefully to data sets with billions of examples
+  or features. It also offers distributed training, GPU acceleration,
+  and supports sparse data structures.
+- [PyTorch](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_pytorch.html)
+  This release of PowerAI includes
+  the community development preview of PyTorch 1.0 (rc1). PowerAI's
+  PyTorch includes support for IBM's Distributed Deep Learning (DDL)
+  and Large Model Support (LMS).
+- [Caffe2 and ONNX](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_caffe2ONNX.html)
+  This release of PowerAI includes a Technology Preview of Caffe2 and ONNX. Caffe2 is a
+  companion to PyTorch. PyTorch is great for experimentation and rapid
+  development, while Caffe2 is aimed at production environments. ONNX
+  (Open Neural Network Exchange) provides support for moving models
+  between those frameworks.
+- [Distributed Deep Learning](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_ddl.html?view=kc)
+  Distributed Deep Learning (DDL) works on up to 4 nodes on the partition `ml`.
 
 ## PowerAI Container
 
 We have converted the official Docker container to Singularity. Here is
 a documentation about the Docker base container, including a table with
 the individual software versions of the packages installed within the
-container:
-
--   \<a href="<https://hub.docker.com/r/ibmcom/powerai/>"
-    target="\_blank">PowerAI Docker Container Docu\</a>
+container: [PowerAI Docker Container](https://hub.docker.com/r/ibmcom/powerai/).
diff --git a/doc.zih.tu-dresden.de/docs/software/python.md b/doc.zih.tu-dresden.de/docs/software/python.md
deleted file mode 100644
index 281d1fd99f175805d36fd5ba9d78776f92ea8b50..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/python.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# Python for Data Analytics
-
-Python is a high-level interpreted language widely used in research and
-science. Using HPC allows you to work with python quicker and more
-effective. Taurus allows working with a lot of available packages and
-libraries which give more useful functionalities and allow use all
-features of Python and to avoid minuses.
-
-**Prerequisites:** To work with PyTorch you obviously need [access](../access/ssh_login.md) for the
-Taurus system and basic knowledge about Python, Numpy and SLURM system.
-
-**Aim** of this page is to introduce users on how to start working with Python on the
-[HPC-DA](../jobs_and_resources/power9.md) system -  part of the TU Dresden HPC system.
-
-There are three main options on how to work with Keras and Tensorflow on the HPC-DA: 1. Modules; 2.
-[JupyterNotebook](../access/jupyterhub.md); 3.[Containers](containers.md). The main way is using
-the [Modules system](modules.md) and Python virtual environment.
-
-Note: You could work with simple examples in your home directory but according to
-[HPCStorageConcept2019](../data_lifecycle/hpc_storage_concept2019.md) please use **workspaces**
-for your study and work projects.
-
-## Virtual environment
-
-There are two methods of how to work with virtual environments on
-Taurus:
-
-1. **Vitualenv** is a standard Python tool to create isolated Python environments.
-   It is the preferred interface for
-   managing installations and virtual environments on Taurus and part of the Python modules.
-
-2. **Conda** is an alternative method for managing installations and
-virtual environments on Taurus. Conda is an open-source package
-management system and environment management system from Anaconda. The
-conda manager is included in all versions of Anaconda and Miniconda.
-
-**Note:** Keep in mind that you **cannot** use virtualenv for working
-with the virtual environments previously created with conda tool and
-vice versa! Prefer virtualenv whenever possible.
-
-This example shows how to start working
-with **Virtualenv** and Python virtual environment (using the module system)
-
-```Bash
-srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash   #Job submission in ml nodes with 1 gpu on 1 node.
-
-mkdir python-environments        # Optional: Create folder. Please use Workspaces!
-
-module load modenv/ml            # Changing the environment. Example output: The following have been reloaded with a version change: 1 modenv/scs5 => modenv/ml
-ml av Python                     #Check the available modules with Python
-module load Python               #Load default Python. Example output: Module Python/3.7 4-GCCcore-8.3.0 with 7 dependencies loaded
-which python                                                   #Check which python are you using
-virtualenv --system-site-packages python-environments/envtest  #Create virtual environment
-source python-environments/envtest/bin/activate                #Activate virtual environment. Example output: (envtest) bash-4.2$
-python                                                         #Start python
-
-from time import gmtime, strftime
-print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))                 #Example output: 2019-11-18 13:54:16
-deactivate                                                     #Leave the virtual environment
-```
-
-The [virtualenv](https://virtualenv.pypa.io/en/latest/) Python module (Python 3) provides support
-for creating virtual environments with their own sitedirectories, optionally isolated from system
-site directories. Each virtual environment has its own Python binary (which matches the version of
-the binary that was used to create this environment) and can have its own independent set of
-installed Python packages in its site directories. This allows you to manage separate package
-installations for different projects. It essentially allows us to create a virtual isolated Python
-installation and install packages into that virtual installation. When you switch projects, you can
-simply create a new virtual environment and not have to worry about breaking the packages installed
-in other environments.
-
-In your virtual environment, you can use packages from the (Complete List of
-Modules)(SoftwareModulesList) or if you didn't find what you need you can install required packages
-with the command: `pip install`. With the command `pip freeze`, you can see a list of all installed
-packages and their versions.
-
-This example shows how to start working with **Conda** and virtual
-environment (with using module system)
-
-```Bash
-srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash  # Job submission in ml nodes with 1 gpu on 1 node.
-
-module load modenv/ml
-mkdir conda-virtual-environments            #create a folder
-cd conda-virtual-environments               #go to folder
-which python                                #check which python are you using
-module load PythonAnaconda/3.6              #load Anaconda module
-which python                                #check which python are you using now
-
-conda create -n conda-testenv python=3.6        #create virtual environment with the name conda-testenv and Python version 3.6
-conda activate conda-testenv                    #activate conda-testenv virtual environment
-
-conda deactivate                                #Leave the virtual environment
-```
-
-You can control where a conda environment
-lives by providing a path to a target directory when creating the
-environment. For example, the following command will create a new
-environment in a workspace located in `scratch`
-
-```Bash
-conda create --prefix /scratch/ws/<name_of_your_workspace>/conda-virtual-environment/<name_of_your_environment>
-```
-
-Please pay attention,
-using srun directly on the shell will lead to blocking and launch an
-interactive job. Apart from short test runs, it is **recommended to
-launch your jobs into the background by using Slurm**. For that, you can conveniently put
-the parameters directly into the job file which you can submit using
-`sbatch [options] <job file>.`
-
-## Jupyter Notebooks
-
-Jupyter notebooks are a great way for interactive computing in your web
-browser. Jupyter allows working with data cleaning and transformation,
-numerical simulation, statistical modelling, data visualization and of
-course with machine learning.
-
-There are two general options on how to work Jupyter notebooks using
-HPC.
-
-On Taurus, there is [JupyterHub](../access/jupyterhub.md) where you can simply run your Jupyter
-notebook on HPC nodes. Also, you can run a remote jupyter server within a sbatch GPU job and with
-the modules and packages you need. The manual server setup you can find [here](deep_learning.md).
-
-With Jupyterhub you can work with general
-data analytics tools. This is the recommended way to start working with
-the Taurus. However, some special instruments could not be available on
-the Jupyterhub.
-
-**Keep in mind that the remote Jupyter server can offer more freedom with settings and approaches.**
-
-## MPI for Python
-
-Message Passing Interface (MPI) is a standardized and portable
-message-passing standard designed to function on a wide variety of
-parallel computing architectures. The Message Passing Interface (MPI) is
-a library specification that allows HPC to pass information between its
-various nodes and clusters. MPI designed to provide access to advanced
-parallel hardware for end-users, library writers and tool developers.
-
-### Why use MPI?
-
-MPI provides a powerful, efficient and portable way to express parallel
-programs.
-Among many parallel computational models, message-passing has proven to be an effective one.
-
-### Parallel Python with mpi4py
-
-Mpi4py(MPI for Python) package provides bindings of the MPI standard for
-the python programming language, allowing any Python program to exploit
-multiple processors.
-
-#### Why use mpi4py?
-
-Mpi4py based on MPI-2 C++ bindings. It supports almost all MPI calls.
-This implementation is popular on Linux clusters and in the SciPy
-community. Operations are primarily methods of communicator objects. It
-supports communication of pickleable Python objects. Mpi4py provides
-optimized communication of NumPy arrays.
-
-Mpi4py is included as an extension of the SciPy-bundle modules on
-taurus.
-
-Please check the SoftwareModulesList for the modules availability. The availability of the mpi4py
-in the module you can check by
-the `module whatis <name_of_the module>` command. The `module whatis`
-command displays a short information and included extensions of the
-module.
-
-Moreover, it is possible to install mpi4py in your local conda
-environment:
-
-```Bash
-srun -p ml --time=04:00:00 -n 1 --pty --mem-per-cpu=8000 bash                            #allocate recources
-module load modenv/ml
-module load PythonAnaconda/3.6                                                           #load module to use conda
-conda create --prefix=<location_for_your_environment> python=3.6 anaconda                #create conda virtual environment
-
-conda activate <location_for_your_environment>                                          #activate your virtual environment
-
-conda install -c conda-forge mpi4py                                                      #install mpi4py
-
-python                                                                                   #start python
-
-from mpi4py import MPI                                                                   #verify your mpi4py
-comm = MPI.COMM_WORLD
-print("%d of %d" % (comm.Get_rank(), comm.Get_size()))
-```
-
-### Horovod
-
-[Horovod](https://github.com/horovod/horovod) is the open source distributed training
-framework for TensorFlow, Keras, PyTorch. It is supposed to make it easy
-to develop distributed deep learning projects and speed them up with
-TensorFlow.
-
-#### Why use Horovod?
-
-Horovod allows you to easily take a single-GPU TensorFlow and Pytorch
-program and successfully train it on many GPUs! In
-some cases, the MPI model is much more straightforward and requires far
-less code changes than the distributed code from TensorFlow for
-instance, with parameter servers. Horovod uses MPI and NCCL which gives
-in some cases better results than pure TensorFlow and PyTorch.
-
-#### Horovod as a module
-
-Horovod is available as a module with **TensorFlow** or **PyTorch**for **all** module environments.
-Please check the [software module list](modules.md) for the current version of the software.
-Horovod can be loaded like other software on the Taurus:
-
-```Bash
-ml av Horovod            #Check available modules with Python
-module load Horovod      #Loading of the module
-```
-
-#### Horovod installation
-
-However, if it is necessary to use Horovod with **PyTorch** or use
-another version of Horovod it is possible to install it manually. To
-install Horovod you need to create a virtual environment and load the
-dependencies (e.g. MPI). Installing PyTorch can take a few hours and is
-not recommended
-
-**Note:** You could work with simple examples in your home directory but **please use workspaces
-for your study and work projects** (see the Storage concept).
-
-Setup:
-
-```Bash
-srun -N 1 --ntasks-per-node=6 -p ml --time=08:00:00 --pty bash                    #allocate a Slurm job allocation, which is a set of resources (nodes)
-module load modenv/ml                                                             #Load dependencies by using modules
-module load OpenMPI/3.1.4-gcccuda-2018b
-module load Python/3.6.6-fosscuda-2018b
-module load cuDNN/7.1.4.18-fosscuda-2018b
-module load CMake/3.11.4-GCCcore-7.3.0
-virtualenv --system-site-packages <location_for_your_environment>                 #create virtual environment
-source <location_for_your_environment>/bin/activate                               #activate virtual environment
-```
-
-Or when you need to use conda:
-
-```Bash
-srun -N 1 --ntasks-per-node=6 -p ml --time=08:00:00 --pty bash                            #allocate a Slurm job allocation, which is a set of resources (nodes)
-module load modenv/ml                                                                     #Load dependencies by using modules
-module load OpenMPI/3.1.4-gcccuda-2018b
-module load PythonAnaconda/3.6
-module load cuDNN/7.1.4.18-fosscuda-2018b
-module load CMake/3.11.4-GCCcore-7.3.0
-
-conda create --prefix=<location_for_your_environment> python=3.6 anaconda                 #create virtual environment
-
-conda activate  <location_for_your_environment>                                           #activate virtual environment
-```
-
-Install Pytorch (not recommended)
-
-```Bash
-cd /tmp
-git clone https://github.com/pytorch/pytorch                                  #clone Pytorch from the source
-cd pytorch                                                                    #go to folder
-git checkout v1.7.1                                                           #Checkout version (example: 1.7.1)
-git submodule update --init                                                   #Update dependencies
-python setup.py install                                                       #install it with python
-```
-
-##### Install Horovod for Pytorch with python and pip
-
-In the example presented installation for the Pytorch without
-TensorFlow. Adapt as required and refer to the horovod documentation for
-details.
-
-```Bash
-HOROVOD_GPU_ALLREDUCE=MPI HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
-```
-
-##### Verify that Horovod works
-
-```Bash
-python                                           #start python
-import torch                                     #import pytorch
-import horovod.torch as hvd                      #import horovod
-hvd.init()                                       #initialize horovod
-hvd.size()
-hvd.rank()
-print('Hello from:', hvd.rank())
-```
-
-##### Horovod with NCCL
-
-If you want to use NCCL instead of MPI you can specify that in the
-install command after loading the NCCL module:
-
-```Bash
-module load NCCL/2.3.7-fosscuda-2018b
-HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
-```
diff --git a/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md b/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
new file mode 100644
index 0000000000000000000000000000000000000000..e19daeeb6731aa32eb993f2495e6ec443bebe2dd
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
@@ -0,0 +1,126 @@
+# Python Virtual Environments
+
+Virtual environments allow users to install additional Python packages and create an isolated
+run-time environment. We recommend using `virtualenv` for this purpose. In your virtual environment,
+you can use packages from the [modules list](modules.md), or install additionally required packages
+with `pip install`. With `pip freeze`, you can list all installed packages together with their
+versions.
+
+There are two methods of how to work with virtual environments on ZIH systems:
+
+1. **virtualenv** is a standard Python tool to create isolated Python environments.
+   It is the preferred interface for managing installations and virtual environments on ZIH
+   systems and is part of the Python modules.
+
+2. **conda** is an alternative method for managing installations and virtual environments on
+   ZIH systems. conda is an open-source package management system and environment management
+   system from Anaconda. The conda manager is included in all versions of Anaconda and Miniconda.
+
+!!! warning
+
+    Keep in mind that you **cannot** use virtualenv for working
+    with the virtual environments previously created with conda tool and
+    vice versa! Prefer virtualenv whenever possible.
+
+## Python Virtual Environment
+
+This example shows how to start working with **virtualenv** and a Python virtual environment (using
+the module system).
+
+!!! hint
+
+    We recommend using [workspaces](../data_lifecycle/workspaces.md) for your virtual
+    environments.
+
+At first, we check available Python modules and load the preferred version:
+
+```console
+marie@compute$ module avail Python    #Check the available modules with Python
+[...]
+marie@compute$ module load Python    #Load default Python
+Module Python/3.7.2-GCCcore-8.2.0 with 10 dependencies loaded.
+marie@compute$ which python    #Check which python are you using
+/sw/installed/Python/3.7.2-GCCcore-8.2.0/bin/python
+```
+
+Then create the virtual environment and activate it.
+
+```console
+marie@compute$ ws_allocate -F scratch python_virtual_environment 1
+Info: creating workspace.
+/scratch/ws/1/python_virtual_environment
+[...]
+marie@compute$ virtualenv --system-site-packages /scratch/ws/1/python_virtual_environment/env  #Create virtual environment
+[...]
+marie@compute$ source /scratch/ws/1/python_virtual_environment/env/bin/activate    #Activate virtual environment. Example output: (env) bash-4.2$
+```
+
+Now you can work in this isolated environment, without interfering with other tasks running on the
+system. Note that the prompt prefix `(env)` at the beginning of each line indicates that you are in
+the virtual environment. You can deactivate the environment as follows:
+
+```console
+(env) marie@compute$ deactivate    #Leave the virtual environment
+```
+
+## Conda Virtual Environment
+
+This example shows how to start working with **conda** and a virtual environment (using the module
+system). First, we request an interactive job and create a workspace for the conda virtual
+environment:
+
+```console
+marie@compute$ ws_allocate -F scratch conda_virtual_environment 1
+Info: creating workspace.
+/scratch/ws/1/conda_virtual_environment
+[...]
+```
+
+Then, we load Anaconda, create an environment in our directory and activate the environment:
+
+```console
+marie@compute$ module load Anaconda3    #load Anaconda module
+marie@compute$ conda create --prefix /scratch/ws/1/conda_virtual_environment/conda-env python=3.6    #create virtual environment with Python version 3.6
+marie@compute$ conda activate /scratch/ws/1/conda_virtual_environment/conda-env    #activate conda-env virtual environment
+```
+
+Now you can work in this isolated environment, without interfering with other tasks running on the
+system. Note that the prompt prefix `(conda-env)` at the beginning of each line indicates that you
+are in the virtual environment. You can deactivate the conda environment as follows:
+
+```console
+(conda-env) marie@compute$ conda deactivate    #Leave the virtual environment
+```
+
+TODO: Link to this page from other DA/ML topics. insert link in alpha centauri
+
+??? example
+
+    This is an example on the partition `alpha`. The example creates a virtual environment and
+    installs the package `torchvision` with pip.
+    ```console
+    marie@login$ srun --partition=alpha-interactive -N 1 --gres=gpu:1 --time=01:00:00 --pty bash
+    marie@alpha$ mkdir python-environments                               # please use workspaces
+    marie@alpha$ module load modenv/hiera GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 PyTorch
+    Module GCC/10.2.0, CUDA/11.1.1, OpenMPI/4.0.5, PyTorch/1.9.0 and 54 dependencies loaded.
+    marie@alpha$ which python
+    /sw/installed/Python/3.8.6-GCCcore-10.2.0/bin/python
+    marie@alpha$ pip list
+    [...]
+    marie@alpha$ virtualenv --system-site-packages python-environments/my-torch-env
+    created virtual environment CPython3.8.6.final.0-64 in 42960ms
+    creator CPython3Posix(dest=~/python-environments/my-torch-env, clear=False, global=True)
+    seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=~/.local/share/virtualenv)
+        added seed packages: pip==21.1.3, setuptools==57.2.0, wheel==0.36.2
+    activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
+    marie@alpha$ source python-environments/my-torch-env/bin/activate
+    (my-torch-env) marie@alpha$ pip install torchvision
+    [...]
+    Installing collected packages: torchvision
+    Successfully installed torchvision-0.10.0
+    [...]
+    (my-torch-env) marie@alpha$ python -c "import torchvision; print(torchvision.__version__)"
+    0.10.0+cu102
+    (my-torch-env) marie@alpha$ deactivate
+    ```
diff --git a/doc.zih.tu-dresden.de/docs/software/pytorch.md b/doc.zih.tu-dresden.de/docs/software/pytorch.md
index 043320376fe184f7477b19b37f0f39625d8424a9..3c2e88a6c9fc209c246ede0e50410771be541c3f 100644
--- a/doc.zih.tu-dresden.de/docs/software/pytorch.md
+++ b/doc.zih.tu-dresden.de/docs/software/pytorch.md
@@ -1,260 +1,98 @@
-# Pytorch for Data Analytics
+# PyTorch
 
-[PyTorch](https://pytorch.org/) is an open-source machine learning framework.
+[PyTorch](https://pytorch.org/){:target="_blank"} is an open-source machine learning framework.
 It is an optimized tensor library for deep learning using GPUs and CPUs.
-PyTorch is a machine learning tool developed by Facebooks AI division
-to process large-scale object detection, segmentation, classification, etc.
-PyTorch provides a core datastructure, the tensor, a multi-dimensional array that shares many
+PyTorch is a machine learning tool developed by Facebook's AI division to process large-scale
+object detection, segmentation, classification, etc.
+PyTorch provides a core data structure, the tensor, a multi-dimensional array that shares many
 similarities with Numpy arrays.
-PyTorch also consumed Caffe2 for its backend and added support of ONNX.
 
-**Prerequisites:** To work with PyTorch you obviously need [access](../access/ssh_login.md) for the
-Taurus system and basic knowledge about Python, Numpy and SLURM system.
+Please check the software modules list via
 
-**Aim** of this page is to introduce users on how to start working with PyTorch on the
-[HPC-DA](../jobs_and_resources/power9.md) system -  part of the TU Dresden HPC system.
+```console
+marie@login$ module spider pytorch
+```
 
-There are numerous different possibilities of how to work with PyTorch on Taurus.
-Here we will consider two main methods.
+to find out which PyTorch modules are available on your partition.
 
-1\. The first option is using Jupyter notebook with HPC-DA nodes. The easiest way is by using
-[Jupyterhub](../access/jupyterhub.md).  It is a recommended way for beginners in PyTorch and users
-who are just starting their work with Taurus.
+We recommend using the partitions `alpha` and/or `ml` when working with machine learning workflows
+and the PyTorch library.
+You can find detailed hardware specifications in our
+[hardware documentation](../jobs_and_resources/hardware_overview.md).
 
-2\. The second way is using the Modules system and Python or conda virtual environment.
-See [the Python page](python.md) for the HPC-DA system.
+## PyTorch Console
 
-Note: The information on working with the PyTorch using Containers could be found
-[here](containers.md).
+On the partition `alpha`, load the module environment:
 
-## Get started with PyTorch
+```console
+marie@login$ srun -p alpha --gres=gpu:1 -n 1 -c 7 --pty --mem-per-cpu=800 bash #Job submission on partition alpha with 1 GPU on 1 node and 800 MB per CPU
+marie@alpha$ module load modenv/hiera  GCC/10.2.0  CUDA/11.1.1 OpenMPI/4.0.5 PyTorch/1.9.0
+The following have been reloaded with a version change:
+  1) modenv/scs5 => modenv/hiera
 
-### Virtual environment
+Module GCC/10.2.0, CUDA/11.1.1, OpenMPI/4.0.5, PyTorch/1.9.0 and 54 dependencies loaded.
+```
 
-For working with PyTorch and python packages using virtual environments (kernels) is necessary.
+??? hint "Torchvision on partition `alpha`"
+    On the partition `alpha`, the module `torchvision` is not yet available within the module
+    system (as of 19.08.2021).
+    Torchvision can be made available by using a virtual environment:
 
-Creating and using your kernel (environment) has the benefit that you can install your preferred
-python packages and use them in your notebooks.
+    ```console
+    marie@alpha$ virtualenv --system-site-packages python-environments/torchvision_env
+    marie@alpha$ source python-environments/torchvision_env/bin/activate
+    marie@alpha$ pip install torchvision --no-deps
+    ```
 
-A virtual environment is a cooperatively isolated runtime environment that allows Python users and
-applications to install and upgrade Python distribution packages without interfering with
-the behaviour of other Python applications running on the same system. So the
-[Virtual environment](https://docs.python.org/3/glossary.html#term-virtual-environment)
-is a self-contained directory tree that contains a Python installation for a particular version of
-Python, plus several additional packages. At its core, the main purpose of
-Python virtual environments is to create an isolated environment for Python projects.
-Python virtual environment is the main method to work with Deep Learning software as PyTorch on the
-HPC-DA system.
+    Using the `--no-deps` option for `pip install` is necessary here as otherwise the PyTorch
+    version might be replaced and you will run into trouble with the CUDA drivers.
 
-### Conda and Virtualenv
+On the partition `ml`:
 
-There are two methods of how to work with virtual environments on
-Taurus:
+```console
+marie@login$ srun -p ml --gres=gpu:1 -n 1 -c 7 --pty --mem-per-cpu=800 bash    #Job submission on partition ml with 1 GPU on 1 node and 800 MB per CPU
+```
 
-1.**Vitualenv (venv)** is a standard Python tool to create isolated Python environments.
-In general, It is the preferred interface for managing installations and virtual environments
-on Taurus.
-It has been integrated into the standard library under the
-[venv module](https://docs.python.org/3/library/venv.html).
-We recommend using **venv** to work with Python packages and Tensorflow on Taurus.
+After calling
 
-2\. The **conda** command is the interface for managing installations and virtual environments on
-Taurus.
-The **conda** is a tool for managing and deploying applications, environments and packages.
-Conda is an open-source package management system and environment management system from Anaconda.
-The conda manager is included in all versions of Anaconda and Miniconda.
-**Important note!** Due to the use of Anaconda to create PyTorch modules for the ml partition,
-it is recommended to use the conda environment for working with the PyTorch to avoid conflicts over
-the sources of your packages (pip or conda).
+```console
+marie@login$ module spider pytorch
+```
 
-**Note:** Keep in mind that you **cannot** use conda for working with the virtual environments
-previously created with Vitualenv tool and vice versa
+we know that we can load PyTorch (including torchvision) with
 
-This example shows how to install and start working with PyTorch (with
-using module system)
+```console
+marie@ml$ module load modenv/ml torchvision/0.7.0-fosscuda-2019b-Python-3.7.4-PyTorch-1.6.0
+Module torchvision/0.7.0-fosscuda-2019b-Python-3.7.4-PyTorch-1.6.0 and 55 dependencies loaded.
+```
 
-    srun -p ml -N 1 -n 1 -c 2 --gres=gpu:1 --time=01:00:00 --pty --mem-per-cpu=5772 bash #Job submission in ml nodes with 1 gpu on 1 node with 2 CPU and with 5772 mb for each cpu.
-    module load modenv/ml                        #Changing the environment. Example output: The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
-    mkdir python-virtual-environments            #Create folder
-    cd python-virtual-environments               #Go to folder
-    module load PythonAnaconda/3.6                      #Load Anaconda with Python. Example output: Module Module PythonAnaconda/3.6 loaded.
-    which python                                                 #Check which python are you using
-    python3 -m venv --system-site-packages envtest               #Create virtual environment
-    source envtest/bin/activate                                  #Activate virtual environment. Example output: (envtest) bash-4.2$
-    module load PyTorch                                          #Load PyTorch module. Example output: Module PyTorch/1.1.0-PythonAnaconda-3.6 loaded.
-    python                                                       #Start python
-    import torch
-    torch.version.__version__                                    #Example output: 1.1.0
+Now, we check that we can access PyTorch:
 
-Keep in mind that using **srun** directly on the shell will lead to blocking and launch an
-interactive job. Apart from short test runs,
-it is **recommended to launch your jobs into the background by using batch jobs**.
-For that, you can conveniently put the parameters directly into the job file
-which you can submit using *sbatch [options] <job_file_name>*.
+```console
+marie@{ml,alpha}$ python -c "import torch; print(torch.__version__)"
+```
 
-## Running the model and examples
+The following example shows how to create a Python virtual environment and import PyTorch.
 
-Below are examples of Jupyter notebooks with PyTorch models which you can run on ml nodes of HPC-DA.
+```console
+marie@ml$ mkdir python-environments    #create folder
+marie@ml$ which python    #check which python are you using
+/sw/installed/Python/3.7.4-GCCcore-8.3.0/bin/python
+marie@ml$ virtualenv --system-site-packages python-environments/env    #create virtual environment "env", which inherits the global site packages
+[...]
+marie@ml$ source python-environments/env/bin/activate    #activate virtual environment "env". Example output: (env) bash-4.2$
+marie@ml$ python -c "import torch; print(torch.__version__)"
+```
 
-There are two ways how to work with the Jupyter notebook on HPC-DA system. You can use a
-[remote Jupyter server](deep_learning.md) or [JupyterHub](../access/jupyterhub.md).
-Jupyterhub is a simple and recommended way to use PyTorch.
-We are using Jupyterhub for our examples.
+## PyTorch in JupyterHub
 
-Prepared examples of PyTorch models give you an understanding of how to work with
-Jupyterhub and PyTorch models. It can be useful and instructive to start
-your acquaintance with PyTorch and HPC-DA system from these simple examples.
+In addition to using interactive and batch jobs, it is possible to work with PyTorch using
+JupyterHub. The production and test environments of JupyterHub contain Python kernels that come
+with PyTorch support.
 
-JupyterHub is available here: [taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter)
+![PyTorch module in JupyterHub](misc/Pytorch_jupyter_module.png)
+{: align="center"}
 
-After login, you can start a new session by clicking on the button.
+## Distributed PyTorch
 
-**Note:** Detailed guide (with pictures and instructions) how to run the Jupyterhub
-you could find on [the page](../access/jupyterhub.md).
-
-Please choose the "IBM Power (ppc64le)". You need to download an example
-(prepared as jupyter notebook file) that already contains all you need for the start of the work.
-Please put the file into your previously created virtual environment in your working directory or
-use the kernel for your notebook [see Jupyterhub page](../access/jupyterhub.md).
-
-Note: You could work with simple examples in your home directory but according to
-[HPCStorageConcept2019](../data_lifecycle/hpc_storage_concept2019.md) please use **workspaces**
-for your study and work projects.
-For this reason, you have to use advanced options of Jupyterhub and put "/" in "Workspace scope" field.
-
-To download the first example (from the list below) into your previously created
-virtual environment you could use the following command:
-
-    ws_list                                      #list of your workspaces
-    cd &lt;name_of_your_workspace&gt;                  #go to workspace
-
-    wget https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/PyTorch/example_MNIST_Pytorch.zip
-    unzip example_MNIST_Pytorch.zip
-
-Also, you could use kernels for all notebooks, not only for them which
-placed in your virtual environment. See the [jupyterhub](../access/jupyterhub.md) page.
-
-Examples:
-
-1\. Simple MNIST model. The MNIST database is a large database of handwritten digits that is
-commonly used for training various image processing systems. PyTorch allows us to import and
-download the MNIST dataset directly from the Torchvision - package consists of datasets,
-model architectures and transformations.
-The model contains a neural network with sequential architecture and typical modules
-for this kind of models. Recommended parameters for running this model are 1 GPU and 7 cores (28 thread)
-
-(example_MNIST_Pytorch.zip)
-
-### Running the model
-
-Open [JupyterHub](../access/jupyterhub.md) and follow instructions above.
-
-In Jupyterhub documents are organized with tabs and a very versatile split-screen feature.
-On the left side of the screen, you can open your file. Use 'File-Open from Path'
-to go to your workspace (e.g. `scratch/ws/<username-name_of_your_ws>`).
-You could run each cell separately step by step and analyze the result of each step.
-Default command for running one cell Shift+Enter'. Also, you could run all cells with the command '
-run all cells' in the 'Run' Tab.
-
-## Components and advantages of the PyTorch
-
-### Pre-trained networks
-
-The PyTorch gives you an opportunity to use pre-trained models and networks for your purposes
-(as a TensorFlow for instance) especially for computer vision and image recognition. As you know
-computer vision is one of the fields that have been most impacted by the advent of deep learning.
-
-We will use a network trained on ImageNet, taken from the TorchVision project,
-which contains a few of the best performing neural network architectures for computer vision,
-such as AlexNet, one of the early breakthrough networks for image recognition, and ResNet,
-which won the ImageNet classification, detection, and localization competitions, in 2015.
-[TorchVision](https://pytorch.org/vision/stable/index.html) also has easy access to datasets like
-ImageNet and other utilities for getting up
-to speed with computer vision applications in PyTorch.
-The pre-defined models can be found in torchvision.models.
-
-**Important note**: For the ml nodes only the Torchvision 0.2.2. is available (10.11.20).
-The last updates from IBM include only Torchvision 0.4.1 CPU version.
-Be careful some features from modern versions of Torchvision are not available in the 0.2.2
-(e.g. some kinds of `transforms`). Always check the version with: `print(torchvision.__version__)`
-
-Examples:
-
-1. Image recognition example. This PyTorch script is using Resnet to single image classification.
-Recommended parameters for running this model are 1 GPU and 7 cores (28 thread).
-
-(example_Pytorch_image_recognition.zip)
-
-Remember that for using [JupyterHub service](../access/jupyterhub.md)
-for PyTorch you need to create and activate
-a virtual environment (kernel) with loaded essential modules (see "envtest" environment form the virtual
-environment example.
-
-Run the example in the same way as the previous example (MNIST model).
-
-### Using Multiple GPUs with PyTorch
-
-Effective use of GPUs is essential, and it implies using parallelism in
-your code and model. Data Parallelism and model parallelism are effective instruments
-to improve the performance of your code in case of GPU using.
-
-The data parallelism is a widely-used technique. It replicates the same model to all GPUs,
-where each GPU consumes a different partition of the input data. You could see this method [here](https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html).
-
-The example below shows how to solve that problem by using model
-parallel, which, in contrast to data parallelism, splits a single model
-onto different GPUs, rather than replicating the entire model on each
-GPU. The high-level idea of model parallel is to place different sub-networks of a model onto different
-devices. As the only part of a model operates on any individual device, a set of devices can
-collectively serve a larger model.
-
-It is recommended to use [DistributedDataParallel]
-(https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html),
-instead of this class, to do multi-GPU training, even if there is only a single node.
-See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel.
-Check the [page](https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead) and
-[Distributed Data Parallel](https://pytorch.org/docs/stable/notes/ddp.html#ddp).
-
-Examples:
-
-1\. The parallel model. The main aim of this model to show the way how
-to effectively implement your neural network on several GPUs. It
-includes a comparison of different kinds of models and tips to improve
-the performance of your model. **Necessary** parameters for running this
-model are **2 GPU** and 14 cores (56 thread).
-
-(example_PyTorch_parallel.zip)
-
-Remember that for using [JupyterHub service](../access/jupyterhub.md)
-for PyTorch you need to create and activate
-a virtual environment (kernel) with loaded essential modules.
-
-Run the example in the same way as the previous examples.
-
-#### Distributed data-parallel
-
-[DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel)
-(DDP) implements data parallelism at the module level which can run across multiple machines.
-Applications using DDP should spawn multiple processes and create a single DDP instance per process.
-DDP uses collective communications in the [torch.distributed]
-(https://pytorch.org/tutorials/intermediate/dist_tuto.html)
-package to synchronize gradients and buffers.
-
-The tutorial could be found [here](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
-
-To use distributed data parallelisation on Taurus please use following
-parameters: `--ntasks-per-node` -parameter to the number of GPUs you use
-per node. Also, it could be useful to increase `memomy/cpu` parameters
-if you run larger models. Memory can be set up to:
-
---mem=250000 and --cpus-per-task=7 for the **ml** partition.
-
---mem=60000 and --cpus-per-task=6 for the **gpu2** partition.
-
-Keep in mind that only one memory parameter (`--mem-per-cpu` = <MB> or `--mem`=<MB>) can be specified
-
-## F.A.Q
-
--   (example_MNIST_Pytorch.zip)
--   (example_Pytorch_image_recognition.zip)
--   (example_PyTorch_parallel.zip)
+For details on how to run PyTorch with multiple GPUs and/or multiple nodes, see
+[distributed training](distributed_training.md).
diff --git a/doc.zih.tu-dresden.de/docs/software/singularity_example_definitions.md b/doc.zih.tu-dresden.de/docs/software/singularity_example_definitions.md
deleted file mode 100644
index 28fe94a9d510e577148d7d0c2f526136e813d4ba..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/singularity_example_definitions.md
+++ /dev/null
@@ -1,110 +0,0 @@
-# Singularity Example Definitions
-
-## Basic example
-
-A usual workflow to create Singularity Definition consists of the
-following steps:
-
--   Start from base image
--   Install dependencies
-    -   Package manager
-    -   Other sources
--   Build & Install own binaries
--   Provide entrypoints & metadata
-
-An example doing all this:
-
-```Bash
-Bootstrap: docker
-From: alpine
-
-%post
-  . /.singularity.d/env/10-docker*.sh
-
-  apk add g++ gcc make wget cmake
-
-  wget https://github.com/fmtlib/fmt/archive/5.3.0.tar.gz
-  tar -xf 5.3.0.tar.gz
-  mkdir build && cd build
-  cmake ../fmt-5.3.0 -DFMT_TEST=OFF
-  make -j$(nproc) install
-  cd ..
-  rm -r fmt-5.3.0*
-
-  cat hello.cpp
-#include &lt;fmt/format.h&gt;
-
-int main(int argc, char** argv){
-  if(argc == 1) fmt::print("No arguments passed!\n");
-  else fmt::print("Hello {}!\n", argv[1]);
-}
-EOF
-
-  g++ hello.cpp -o hello -lfmt
-  mv hello /usr/bin/hello
-
-%runscript
-  hello "$@"
-
-%labels
-  Author Alexander Grund
-  Version 1.0.0
-
-%help
-  Display a greeting using the fmt library
-
-  Usage:
-    ./hello 
-```
-
-## CUDA + CuDNN + OpenMPI
-
-- Chosen CUDA version depends on installed driver of host
-- OpenMPI needs PMI for SLURM integration
-- OpenMPI needs CUDA for GPU copy-support
-- OpenMPI needs ibverbs libs for Infiniband
-- openmpi-mca-params.conf required to avoid warnings on fork (OK on
-  taurus)
-- Environment variables SLURM_VERSION, OPENMPI_VERSION can be set to
-  choose different version when building the container
-
-```
-Bootstrap: docker
-From: nvidia/cuda-ppc64le:10.1-cudnn7-devel-ubuntu18.04
-
-%labels
-    Author ZIH
-    Requires CUDA driver 418.39+.
-
-%post
-    . /.singularity.d/env/10-docker*.sh
-
-    apt-get update
-    apt-get install -y cuda-compat-10.1
-    apt-get install -y libibverbs-dev ibverbs-utils
-    # Install basic development tools
-    apt-get install -y gcc g++ make wget python
-    apt-get autoremove; apt-get clean
-
-    cd /tmp
-
-    : ${SLURM_VERSION:=17-02-11-1}
-    wget https://github.com/SchedMD/slurm/archive/slurm-${SLURM_VERSION}.tar.gz
-    tar -xf slurm-${SLURM_VERSION}.tar.gz
-        cd slurm-slurm-${SLURM_VERSION}
-        ./configure --prefix=/usr/ --sysconfdir=/etc/slurm --localstatedir=/var --disable-debug
-        make -C contribs/pmi2 -j$(nproc) install
-    cd ..
-    rm -rf slurm-*
-
-    : ${OPENMPI_VERSION:=3.1.4}
-    wget https://download.open-mpi.org/release/open-mpi/v${OPENMPI_VERSION%.*}/openmpi-${OPENMPI_VERSION}.tar.gz
-    tar -xf openmpi-${OPENMPI_VERSION}.tar.gz
-    cd openmpi-${OPENMPI_VERSION}/
-    ./configure --prefix=/usr/ --with-pmi --with-verbs --with-cuda
-    make -j$(nproc) install
-    echo "mpi_warn_on_fork = 0" >> /usr/etc/openmpi-mca-params.conf
-    echo "btl_openib_warn_default_gid_prefix = 0" >> /usr/etc/openmpi-mca-params.conf
-    cd ..
-    rm -rf openmpi-*
-```
diff --git a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
index 5e4388fcf95ed06370d7d633544ee685113df1a7..b8304b57de0f1ae5da98341c92f6d9067b838ecd 100644
--- a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
+++ b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
@@ -1,6 +1,117 @@
-# Singularity Recipe Hints
+# Singularity Recipes and Hints
 
-## GUI (X11) applications
+## Example Definitions
+
+### Basic Example
+
+A usual workflow to create a Singularity definition consists of the following steps:
+
+* Start from base image
+* Install dependencies
+    * Package manager
+    * Other sources
+* Build and install own binaries
+* Provide entry points and metadata
+
+An example doing all this:
+
+```bash
+Bootstrap: docker
+From: alpine
+
+%post
+  . /.singularity.d/env/10-docker*.sh
+
+  apk add g++ gcc make wget cmake
+
+  wget https://github.com/fmtlib/fmt/archive/5.3.0.tar.gz
+  tar -xf 5.3.0.tar.gz
+  mkdir build && cd build
+  cmake ../fmt-5.3.0 -DFMT_TEST=OFF
+  make -j$(nproc) install
+  cd ..
+  rm -r fmt-5.3.0*
+
+  cat >hello.cpp <<EOF
+#include <fmt/format.h>
+
+int main(int argc, char** argv){
+  if(argc == 1) fmt::print("No arguments passed!\n");
+  else fmt::print("Hello {}!\n", argv[1]);
+}
+EOF
+
+  g++ hello.cpp -o hello -lfmt
+  mv hello /usr/bin/hello
+
+%runscript
+  hello "$@"
+
+%labels
+  Author Alexander Grund
+  Version 1.0.0
+
+%help
+  Display a greeting using the fmt library
+
+  Usage:
+    ./hello
+```
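+
+Assuming the definition above is saved as `hello.def` (the file name is only an example), it could
+be built and used roughly as follows. Building requires root privileges, e.g., on your local machine
+or inside a [virtual machine](virtual_machines.md):
+
+```console
+marie@local$ sudo singularity build hello.sif hello.def
+marie@local$ singularity run hello.sif World
+Hello World!
+```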
+
+### CUDA + CuDNN + OpenMPI
+
+* Chosen CUDA version depends on installed driver of host
+* OpenMPI needs PMI for Slurm integration
+* OpenMPI needs CUDA for GPU copy-support
+* OpenMPI needs `ibverbs` library for Infiniband
+* `openmpi-mca-params.conf` required to avoid warnings on fork (OK on ZIH systems)
+* Environment variables `SLURM_VERSION` and `OPENMPI_VERSION` can be set to choose a different
+  version when building the container
+
+```bash
+Bootstrap: docker
+From: nvidia/cuda-ppc64le:10.1-cudnn7-devel-ubuntu18.04
+
+%labels
+    Author ZIH
+    Requires CUDA driver 418.39+.
+
+%post
+    . /.singularity.d/env/10-docker*.sh
+
+    apt-get update
+    apt-get install -y cuda-compat-10.1
+    apt-get install -y libibverbs-dev ibverbs-utils
+    # Install basic development tools
+    apt-get install -y gcc g++ make wget python
+    apt-get autoremove; apt-get clean
+
+    cd /tmp
+
+    : ${SLURM_VERSION:=17-02-11-1}
+    wget https://github.com/SchedMD/slurm/archive/slurm-${SLURM_VERSION}.tar.gz
+    tar -xf slurm-${SLURM_VERSION}.tar.gz
+        cd slurm-slurm-${SLURM_VERSION}
+        ./configure --prefix=/usr/ --sysconfdir=/etc/slurm --localstatedir=/var --disable-debug
+        make -C contribs/pmi2 -j$(nproc) install
+    cd ..
+    rm -rf slurm-*
+
+    : ${OPENMPI_VERSION:=3.1.4}
+    wget https://download.open-mpi.org/release/open-mpi/v${OPENMPI_VERSION%.*}/openmpi-${OPENMPI_VERSION}.tar.gz
+    tar -xf openmpi-${OPENMPI_VERSION}.tar.gz
+    cd openmpi-${OPENMPI_VERSION}/
+    ./configure --prefix=/usr/ --with-pmi --with-verbs --with-cuda
+    make -j$(nproc) install
+    echo "mpi_warn_on_fork = 0" >> /usr/etc/openmpi-mca-params.conf
+    echo "btl_openib_warn_default_gid_prefix = 0" >> /usr/etc/openmpi-mca-params.conf
+    cd ..
+    rm -rf openmpi-*
+```
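+
+Assuming you have built the definition above into an image `cuda_mpi.sif` (name chosen here only for
+illustration), for example inside a [virtual machine](virtual_machines.md) since the base image
+targets the ppc64le architecture, a quick check of the installed tool chain on an `ml` compute node
+could look like this:
+
+```console
+marie@ml$ singularity exec --nv cuda_mpi.sif mpirun --version
+marie@ml$ singularity exec --nv cuda_mpi.sif nvcc --version
+```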
+
+## Hints
+
+### GUI (X11) Applications
 
 Running GUI applications inside a singularity container is possible out of the box. Check the
 following definition:
@@ -15,25 +126,25 @@ yum install -y xeyes
 
 This image may be run with
 
-```Bash
+```console
 singularity exec xeyes.sif xeyes
 ```
 
-This works because all the magic is done by singularity already like setting $DISPLAY to the outside
-display and mounting $HOME so $HOME/.Xauthority (X11 authentication cookie) is found. When you are
-using \`--contain\` or \`--no-home\` you have to set that cookie yourself or mount/copy it inside
-the container. Similar for \`--cleanenv\` you have to set $DISPLAY e.g. via
+This works because Singularity already does all the magic, like setting `$DISPLAY` to the outside
+display and mounting `$HOME` so that `$HOME/.Xauthority` (the X11 authentication cookie) is found. When
+you are using `--contain` or `--no-home`, you have to set that cookie yourself or mount/copy it inside
+the container. Similarly, for `--cleanenv` you have to set `$DISPLAY`, e.g., via
 
-```Bash
+```console
 export SINGULARITY_DISPLAY=$DISPLAY
 ```
 
-When you run a container as root (via \`sudo\`) you may need to allow root for your local display
+When you run a container as root (via `sudo`), you may need to allow root for your local display
 port: `xhost +local:root`
 
-### Hardware acceleration
+### Hardware Acceleration
 
-If you want hardware acceleration you **may** need [VirtualGL](https://virtualgl.org). An example
+If you want hardware acceleration, you **may** need [VirtualGL](https://virtualgl.org). An example
 definition file is as follows:
 
 ```Bash
@@ -55,25 +166,28 @@ rm VirtualGL-*.rpm
 yum install -y mesa-dri-drivers # for e.g. intel integrated GPU drivers. Replace by your driver
 ```
 
-You can now run the application with vglrun:
+You can now run the application with `vglrun`:
 
-```Bash
+```console
 singularity exec vgl.sif vglrun glxgears
 ```
 
-**Attention:**Using VirtualGL may not be required at all and could even decrease the performance. To
-check install e.g. glxgears as above and your graphics driver (or use the VirtualGL image from
-above) and disable vsync:
+!!! warning
 
-```
+    Using VirtualGL may not be required at all and could even decrease the performance.
+
+To check this, install, e.g., `glxgears` as above together with your graphics driver (or use the
+VirtualGL image from above) and disable `vsync`:
+
+```console
 vblank_mode=0 singularity exec vgl.sif glxgears
 ```
 
-Compare the FPS output with the glxgears prefixed by vglrun (see above) to see which produces more
+Compare the FPS output with the `glxgears` prefixed by `vglrun` (see above) to see which produces more
 FPS (or runs at all).
 
-**NVIDIA GPUs** need the `--nv` parameter for the singularity command:
+**NVIDIA GPUs** need the `--nv` parameter for the Singularity command:
 
-``Bash
+```console
 singularity exec --nv vgl.sif glxgears
 ```
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorboard.md b/doc.zih.tu-dresden.de/docs/software/tensorboard.md
new file mode 100644
index 0000000000000000000000000000000000000000..d2c838d3961d8f48794e544ce1ca7846d24e7325
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/software/tensorboard.md
@@ -0,0 +1,84 @@
+# TensorBoard
+
+TensorBoard is a visualization toolkit for TensorFlow and offers a variety of functionalities such
+as presentation of loss and accuracy, visualization of the model graph or profiling of the
+application.
+
+## Using JupyterHub
+
+The easiest way to use TensorBoard is via [JupyterHub](../access/jupyterhub.md). The default
+TensorBoard log directory is set to `/tmp/<username>/tf-logs` on the compute node where the Jupyter
+session is running. To show your own log directory, soft-link it to this default folder: open a
+"New Launcher" menu (`Ctrl+Shift+L`) and select a "Terminal" session. This starts a new terminal on
+the respective compute node. There, create the directory `/tmp/$USER/tf-logs` and link it with your
+log directory via `ln -s <your-tensorboard-target-directory> <local-tf-logs-directory>`:
+
+```bash
+mkdir -p /tmp/$USER/tf-logs
+ln -s <your-tensorboard-target-directory> /tmp/$USER/tf-logs
+```
+
+Update the TensorBoard tab with `F5` if needed.
+
+## Using TensorBoard from Module Environment
+
+On ZIH systems, TensorBoard is also available as an extension of the TensorFlow module. To check
+whether a specific TensorFlow module provides TensorBoard, use the following command:
+
+```console hl_lines="9"
+marie@compute$ module spider TensorFlow/2.3.1
+[...]
+        Included extensions
+        ===================
+        absl-py-0.10.0, astor-0.8.0, astunparse-1.6.3, cachetools-4.1.1, gast-0.3.3,
+        google-auth-1.21.3, google-auth-oauthlib-0.4.1, google-pasta-0.2.0,
+        grpcio-1.32.0, Keras-Preprocessing-1.1.2, Markdown-3.2.2, oauthlib-3.1.0, opt-
+        einsum-3.3.0, pyasn1-modules-0.2.8, requests-oauthlib-1.3.0, rsa-4.6,
+        tensorboard-2.3.0, tensorboard-plugin-wit-1.7.0, TensorFlow-2.3.1, tensorflow-
+        estimator-2.3.0, termcolor-1.1.0, Werkzeug-1.0.1, wrapt-1.12.1
+```
+
+If TensorBoard appears in the `Included extensions` section of the output, it is available.
+
+To use TensorBoard, you have to connect via SSH to the ZIH system as usual, schedule an interactive
+job and load a TensorFlow module:
+
+```console
+marie@compute$ module load TensorFlow/2.3.1
+Module TensorFlow/2.3.1-fosscuda-2019b-Python-3.7.4 and 47 dependencies loaded.
+```
+
+Then, create a workspace for the event data that should be visualized in TensorBoard. If you
+already have an event data directory, you can skip this step.
+
+```console
+marie@compute$ ws_allocate -F scratch tensorboard_logdata 1
+Info: creating workspace.
+/scratch/ws/1/marie-tensorboard_logdata
+[...]
+```
+
+Now, you can run your TensorFlow application. Note that you might have to adapt your code to make it
+accessible for TensorBoard. Please find further information on the official
+[TensorBoard website](https://www.tensorflow.org/tensorboard/get_started).
+Then, you can start TensorBoard and pass the directory of the event data:
+
+```console
+marie@compute$ tensorboard --logdir /scratch/ws/1/marie-tensorboard_logdata --bind_all
+[...]
+TensorBoard 2.3.0 at http://taurusi8034.taurus.hrsk.tu-dresden.de:6006/
+[...]
+```
+
+TensorBoard then returns a server address on Taurus, e.g., `taurusi8034.taurus.hrsk.tu-dresden.de:6006`.
+
+To access TensorBoard, you now have to set up port forwarding via SSH to your local
+machine:
+
+```console
+marie@local$ ssh -N -f -L 6006:taurusi8034.taurus.hrsk.tu-dresden.de:6006 <zih-login>@taurus.hrsk.tu-dresden.de
+```
+
+Now, you can see the TensorBoard in your browser at `http://localhost:6006/`.
+
+Note that you can also use TensorBoard in an [sbatch file](../jobs_and_resources/slurm.md).
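+
+A minimal sketch of such a job file is given below; the time limit, module version, and log
+directory are only examples that you have to adapt to your setup:
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=2
+#SBATCH --time=01:00:00
+#SBATCH --job-name=tensorboard
+
+module load TensorFlow/2.3.1
+
+# serve the event data from the workspace; access it via SSH port forwarding as shown above
+tensorboard --logdir /scratch/ws/1/marie-tensorboard_logdata --bind_all
+```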
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorflow.md b/doc.zih.tu-dresden.de/docs/software/tensorflow.md
index 346eb9a1da4e0728c2751773d656ac70d00a60c4..09a8352a32648178f3634a4099eee52ad6c0ccd0 100644
--- a/doc.zih.tu-dresden.de/docs/software/tensorflow.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensorflow.md
@@ -1,264 +1,156 @@
 # TensorFlow
 
-## Introduction
-
-This is an introduction of how to start working with TensorFlow and run
-machine learning applications on the [HPC-DA](../jobs_and_resources/hpcda.md) system of Taurus.
-
-\<span style="font-size: 1em;">On the machine learning nodes (machine
-learning partition), you can use the tools from [IBM PowerAI](power_ai.md) or the other
-modules. PowerAI is an enterprise software distribution that combines popular open-source
-deep learning frameworks, efficient AI development tools (Tensorflow, Caffe, etc). For
-this page and examples was used [PowerAI version 1.5.4](https://www.ibm.com/support/knowledgecenter/en/SS5SF7_1.5.4/navigation/pai_software_pkgs.html)
-
-[TensorFlow](https://www.tensorflow.org/guide/) is a free end-to-end open-source
-software library for dataflow and differentiable programming across many
-tasks. It is a symbolic math library, used primarily for machine
-learning applications. It has a comprehensive, flexible ecosystem of tools, libraries and
-community resources. It is available on taurus along with other common machine
-learning packages like Pillow, SciPY, Numpy.
-
-**Prerequisites:** To work with Tensorflow on Taurus, you obviously need
-[access](../access/ssh_login.md) for the Taurus system and basic knowledge about Python, SLURM system.
-
-**Aim** of this page is to introduce users on how to start working with
-TensorFlow on the \<a href="HPCDA" target="\_self">HPC-DA\</a> system -
-part of the TU Dresden HPC system.
-
-There are three main options on how to work with Tensorflow on the
-HPC-DA: **1.** **Modules,** **2.** **JupyterNotebook, 3. Containers**. The best option is
-to use [module system](../software/runtime_environment.md#Module_Environments) and
-Python virtual environment. Please see the next chapters and the [Python page](python.md) for the
-HPC-DA system.
-
-The information about the Jupyter notebook and the **JupyterHub** could
-be found [here](../access/jupyterhub.md). The use of
-Containers is described [here](tensorflow_container_on_hpcda.md).
-
-On Taurus, there exist different module environments, each containing a set
-of software modules. The default is *modenv/scs5* which is already loaded,
-however for the HPC-DA system using the "ml" partition you need to use *modenv/ml*.
-To find out which partition are you using use: `ml list`.
-You can change the module environment with the command:
-
-    module load modenv/ml
-
-The machine learning partition is based on the PowerPC Architecture (ppc64le)
-(Power9 processors), which means that the software built for x86_64 will not
-work on this partition, so you most likely can't use your already locally
-installed packages on Taurus. Also, users need to use the modules which are
-specially made for the ml partition (from modenv/ml) and not for the rest
-of Taurus (e.g. from modenv/scs5).
-
-Each node on the ml partition has 6x Tesla V-100 GPUs, with 176 parallel threads
-on 44 cores per node (Simultaneous multithreading (SMT) enabled) and 256GB RAM.
-The specification could be found [here](../jobs_and_resources/power9.md).
-
-%RED%Note:<span class="twiki-macro ENDCOLOR"></span> Users should not
-reserve more than 28 threads per each GPU device so that other users on
-the same node still have enough CPUs for their computations left.
-
-## Get started with Tensorflow
-
-This example shows how to install and start working with TensorFlow
-(with using modules system) and the python virtual environment. Please,
-check the next chapter for the details about the virtual environment.
-
-    srun -p ml --gres=gpu:1 -n 1 -c 7 --pty --mem-per-cpu=8000 bash   #Job submission in ml nodes with 1 gpu on 1 node with 8000 mb.
-
-    module load modenv/ml                    #example output: The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
-
-    mkdir python-environments                #create folder
-    module load TensorFlow                   #load TensorFlow module. Example output: Module TensorFlow/1.10.0-PythonAnaconda-3.6 and 1 dependency loaded.
-    which python                             #check which python are you using
-    virtualenvv --system-site-packages python-environments/env   #create virtual environment "env" which inheriting with global site packages
-    source python-environments/env/bin/activate                  #Activate virtual environment "env". Example output: (env) bash-4.2$
-    python                                                       #start python
-    import tensorflow as tf
-    print(tf.VERSION)                                            #example output: 1.10.0
-
-Keep in mind that using **srun** directly on the shell will be blocking
-and launch an interactive job. Apart from short test runs, it is
-recommended to launch your jobs into the background by using batch
-jobs:\<span> **sbatch \[options\] \<job file>** \</span>. The example
-will be presented later on the page.
-
-As a Tensorflow example, we will use a \<a
-href="<https://www.tensorflow.org/tutorials>" target="\_blank">simple
-mnist model\</a>. Even though this example is in Python, the information
-here will still apply to other tools.
-
-The ml partition has very efficacious GPUs to offer. Do not assume that
-more power means automatically faster computational speed. The GPU is
-only one part of a typical machine learning application. Do not forget
-that first the input data needs to be loaded and in most cases even
-rescaled or augmented. If you do not specify that you want to use more
-than the default one worker (=one CPU thread), then it is very likely
-that your GPU computes faster, than it receives the input data. It is,
-therefore, possible, that you will not be any faster, than on other GPU
-partitions. \<span style="font-size: 1em;">You can solve this by using
-multithreading when loading your input data. The \</span>\<a
-href="<https://keras.io/models/sequential/#fit_generator>"
-target="\_blank">fit_generator\</a>\<span style="font-size: 1em;">
-method supports multiprocessing, just set \`use_multiprocessing\` to
-\`True\`, \</span>\<a href="Slurm#Job_Submission"
-target="\_blank">request more Threads\</a>\<span style="font-size:
-1em;"> from SLURM and set the \`Workers\` amount accordingly.\</span>
-
-The example below with a \<a
-href="<https://www.tensorflow.org/tutorials>" target="\_blank">simple
-mnist model\</a> of the python script illustrates using TF-Keras API
-from TensorFlow. \<a href="<https://www.tensorflow.org/guide/keras>"
-target="\_top">Keras\</a> is TensorFlows high-level API.
-
-**You can read in detail how to work with Keras on Taurus \<a
-href="Keras" target="\_blank">here\</a>.**
-
-    import tensorflow as tf
-    # Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:
-    mnist = tf.keras.datasets.mnist
-
-    (x_train, y_train),(x_test, y_test) = mnist.load_data()
-    x_train, x_test = x_train / 255.0, x_test / 255.0
-
-    # Build the tf.keras model by stacking layers. Select an optimizer and loss function used for training
-    model = tf.keras.models.Sequential([
-      tf.keras.layers.Flatten(input_shape=(28, 28)),
-      tf.keras.layers.Dense(512, activation=tf.nn.relu),
-      tf.keras.layers.Dropout(0.2),
-      tf.keras.layers.Dense(10, activation=tf.nn.softmax)
-    ])
-    model.compile(optimizer='adam',
-                  loss='sparse_categorical_crossentropy',
-                  metrics=['accuracy'])
-
-    # Train and evaluate model
-    model.fit(x_train, y_train, epochs=5)
-    model.evaluate(x_test, y_test)
-
-The example can train an image classifier with \~98% accuracy based on
-this dataset.
-
-## Python virtual environment
-
-A virtual environment is a cooperatively isolated runtime environment
-that allows Python users and applications to install and update Python
-distribution packages without interfering with the behaviour of other
-Python applications running on the same system. At its core, the main
-purpose of Python virtual environments is to create an isolated
-environment for Python projects.
-
-**Vitualenv**is a standard Python tool to create isolated Python
-environments and part of the Python installation/module. We recommend
-using virtualenv to work with Tensorflow and Pytorch on Taurus.\<br
-/>However, if you have reasons (previously created environments etc) you
-can also use conda which is the second way to use a virtual environment
-on the Taurus. \<a
-href="<https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html>"
-target="\_blank">Conda\</a> is an open-source package management system
-and environment management system. Note that using conda means that
-working with other modules from taurus will be harder or impossible.
-Hence it is highly recommended to use virtualenv.
-
-## Running the sbatch script on ML modules (modenv/ml) and SCS5 modules (modenv/scs5)
-
-Generally, for machine learning purposes the ml partition is used but
-for some special issues, the other partitions can be useful also. The
-following sbatch script can execute the above Python script both on ml
-partition or gpu2 partition.\<br /> When not using the
-TensorFlow-Anaconda modules you may need some additional modules that
-are not included (e.g. when using the TensorFlow module from modenv/scs5
-on gpu2).\<br />If you have a question about the sbatch script see the
-article about \<a href="Slurm" target="\_blank">SLURM\</a>. Keep in mind
-that you need to put the executable file (machine_learning_example.py)
-with python code to the same folder as the bash script file
-\<script_name>.sh (see below) or specify the path.
-
-    #!/bin/bash
-    #SBATCH --mem=8GB                         # specify the needed memory
-    #SBATCH -p ml                             # specify ml partition or gpu2 partition
-    #SBATCH --gres=gpu:1                      # use 1 GPU per node (i.e. use one GPU per task)
-    #SBATCH --nodes=1                         # request 1 node
-    #SBATCH --time=00:10:00                   # runs for 10 minutes
-    #SBATCH -c 7                              # how many cores per task allocated
-    #SBATCH -o HLR_<name_your_script>.out     # save output message under HLR_${SLURMJOBID}.out
-    #SBATCH -e HLR_<name_your_script>.err     # save error messages under HLR_${SLURMJOBID}.err
-
-    if [ "$SLURM_JOB_PARTITION" == "ml" ]; then
-        module load modenv/ml
-        module load TensorFlow/2.0.0-PythonAnaconda-3.7
-    else
-        module load modenv/scs5
-        module load TensorFlow/2.0.0-fosscuda-2019b-Python-3.7.4
-        module load Pillow/6.2.1-GCCcore-8.3.0               # Optional
-        module load h5py/2.10.0-fosscuda-2019b-Python-3.7.4  # Optional
-    fi
-
-    python machine_learning_example.py
-
-    ## when finished writing, submit with:  sbatch <script_name>
-
-Output results and errors file can be seen in the same folder in the
-corresponding files after the end of the job. Part of the example
-output:
-
-     1600/10000 [===>..........................] - ETA: 0s
-     3168/10000 [========>.....................] - ETA: 0s
-     4736/10000 [=============>................] - ETA: 0s
-     6304/10000 [=================>............] - ETA: 0s
-     7872/10000 [======================>.......] - ETA: 0s
-     9440/10000 [===========================>..] - ETA: 0s
-    10000/10000 [==============================] - 0s 38us/step
-
-## TensorFlow 2
-
-[TensorFlow
-2.0](https://blog.tensorflow.org/2019/09/tensorflow-20-is-now-available.html)
-is a significant milestone for TensorFlow and the community. There are
-multiple important changes for users. TensorFlow 2.0 removes redundant
-APIs, makes APIs more consistent (Unified RNNs, Unified Optimizers), and
-better integrates with the Python runtime with Eager execution. Also,
-TensorFlow 2.0 offers many performance improvements on GPUs.
-
-There are a number of TensorFlow 2 modules for both ml and scs5 modenvs
-on Taurus. Please check\<a href="SoftwareModulesList" target="\_blank">
-the software modules list\</a> for the information about available
-modules or use
-
-    module spider TensorFlow
-
-%RED%Note:<span class="twiki-macro ENDCOLOR"></span> Tensorflow 2 will
-be loaded by default when loading the Tensorflow module without
-specifying the version.
-
-\<span style="font-size: 1em;">TensorFlow 2.0 includes many API changes,
-such as reordering arguments, renaming symbols, and changing default
-values for parameters. Thus in some cases, it makes code written for the
-TensorFlow 1 not compatible with TensorFlow 2. However, If you are using
-the high-level APIs (tf.keras) there may be little or no action you need
-to take to make your code fully TensorFlow 2.0 \<a
-href="<https://www.tensorflow.org/guide/migrate>"
-target="\_blank">compatible\</a>. It is still possible to run 1.X code,
-unmodified ( [except for
-contrib](https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md)),
-in TensorFlow 2.0:\</span>
-
-    import tensorflow.compat.v1 as tf
-    tf.disable_v2_behavior()                                  #instead of "import tensorflow as tf"
-
-To make the transition to TF 2.0 as seamless as possible, the TensorFlow
-team has created the
-[`tf_upgrade_v2`](https://www.tensorflow.org/guide/upgrade) utility to
-help transition legacy code to the new API.
-
-## FAQ:
-
-Q: Which module environment should I use? modenv/ml, modenv/scs5,
-modenv/hiera
-
-A: On the ml partition use modenv/ml, on rome and gpu3 use modenv/hiera,
-else stay with the default of modenv/scs5.
-
-Q: How to change the module environment and know more about modules?
-
-A: [Modules](../software/runtime_environment.md#Modules)
+[TensorFlow](https://www.tensorflow.org) is a free end-to-end open-source software library for data
+flow and differentiable programming across many tasks. It is a symbolic math library, used primarily
+for machine learning applications. It has a comprehensive, flexible ecosystem of tools, libraries
+and community resources.
+
+Please check the software modules list via
+
+```console
+marie@compute$ module spider TensorFlow
+[...]
+```
+
+to find out which TensorFlow modules are available on your partition.
+
+On ZIH systems, TensorFlow 2 is the default module version. For compatibility hints between
+TensorFlow 2 and TensorFlow 1, see the corresponding [section below](#compatibility-tf2-and-tf1).
+
+We recommend using the partitions **Alpha** and/or **ML** when working with machine learning workflows
+and the TensorFlow library. You can find detailed hardware specifications in our
+[Hardware](../jobs_and_resources/hardware_overview.md) documentation.
+
+## TensorFlow Console
+
+On the partition Alpha, load the module environment:
+
+```console
+marie@alpha$ module load modenv/scs5
+```
+
+Alternatively, you can use the `modenv/hiera` module environment, where the newest versions are
+available:
+
+```console
+marie@alpha$ module load modenv/hiera  GCC/10.2.0  CUDA/11.1.1  OpenMPI/4.0.5
+
+The following have been reloaded with a version change:
+  1) modenv/scs5 => modenv/hiera
+
+Module GCC/10.2.0, CUDA/11.1.1, OpenMPI/4.0.5 and 15 dependencies loaded.
+marie@alpha$ module avail TensorFlow
+
+-------------- /sw/modules/hiera/all/MPI/GCC-CUDA/10.2.0-11.1.1/OpenMPI/4.0.5 -------------------
+   Horovod/0.21.1-TensorFlow-2.4.1    TensorFlow/2.4.1
+
+[...]
+```
+
+On the partition ML, load the module environment:
+
+```console
+marie@ml$ module load modenv/ml
+The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
+```
+
+This example shows how to install and start working with TensorFlow using the modules system.
+
+```console
+marie@ml$ module load TensorFlow
+Module TensorFlow/2.3.1-fosscuda-2019b-Python-3.7.4 and 47 dependencies loaded.
+```
+
+Now we can use TensorFlow. Nevertheless, when working with Python in an interactive job, we recommend
+using a virtual environment. In the following example, we create a Python virtual environment and
+import TensorFlow:
+
+!!! example
+
+    ```console
+    marie@ml$ ws_allocate -F scratch python_virtual_environment 1
+    Info: creating workspace.
+    /scratch/ws/1/python_virtual_environment
+    [...]
+    marie@ml$ which python    #check which python are you using
+    /sw/installed/Python/3.7.2-GCCcore-8.2.0
+    marie@ml$ virtualenv --system-site-packages /scratch/ws/1/python_virtual_environment/env
+    [...]
+    marie@ml$ source /scratch/ws/1/python_virtual_environment/env/bin/activate
+    marie@ml$ python -c "import tensorflow as tf; print(tf.__version__)"
+    [...]
+    2.3.1
+    ```
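+
+As a quick check that TensorFlow can see the GPUs of your allocation, you could, for example, run
+the following minimal sketch (not specific to a particular module version):
+
+```python
+import tensorflow as tf
+
+# print the TensorFlow version and the GPUs visible to this process
+print(tf.__version__)
+print(tf.config.list_physical_devices("GPU"))
+```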
+
+## TensorFlow in JupyterHub
+
+In addition to interactive and batch jobs, it is possible to work with TensorFlow using
+JupyterHub. The production and test environments of JupyterHub contain Python and R kernels that
+both come with TensorFlow support. However, you can specify the TensorFlow version when spawning
+the notebook by pre-loading a specific TensorFlow module:
+
+![TensorFlow module in JupyterHub](misc/tensorflow_jupyter_module.png)
+{: align="center"}
+
+!!! hint
+
+    You can also define your own Jupyter kernel for more specific tasks. Please read about Jupyter
+    kernels and virtual environments in our
+    [JupyterHub](../access/jupyterhub.md#creating-and-using-your-own-environment) documentation.
+
+## TensorFlow in Containers
+
+Another option for using TensorFlow is containers. In the HPC domain, the
+[Singularity](https://singularity.hpcng.org/) container system is a widely used tool. In the
+following example, we run the `tensorflow-test` check inside a Singularity container:
+
+```console
+marie@ml$ singularity shell --nv /scratch/singularity/powerai-1.5.3-all-ubuntu16.04-py3.img
+Singularity>$ export PATH=/opt/anaconda3/bin:$PATH
+Singularity>$ source activate /opt/anaconda3    #activate conda environment
+(base) Singularity>$ . /opt/DL/tensorflow/bin/tensorflow-activate
+(base) Singularity>$ tensorflow-test
+Basic test of tensorflow - A Hello World!!!...
+[...]
+```
+
+## TensorFlow with Python or R
+
+For further information on TensorFlow in combination with Python see
+[data analytics with Python](data_analytics_with_python.md), for R see
+[data analytics with R](data_analytics_with_r.md).
+
+## Distributed TensorFlow
+
+For details on how to run TensorFlow with multiple GPUs and/or multiple nodes, see
+[distributed training](distributed_training.md).
+
+## Compatibility TF2 and TF1
+
+TensorFlow 2.0 includes many API changes, such as reordering arguments, renaming symbols, and
+changing default values for parameters. Thus, in some cases, code written for TensorFlow 1.X is
+not compatible with TensorFlow 2.X. However, if you are using the high-level APIs (`tf.keras`),
+there may be little or no action you need to take to make your code fully
+[TensorFlow 2.0](https://www.tensorflow.org/guide/migrate) compatible. It is still possible to
+run 1.X code, unmodified (except for `contrib`), in TensorFlow 2.0:
+
+```python
+import tensorflow.compat.v1 as tf
+tf.disable_v2_behavior()    #instead of "import tensorflow as tf"
+```
+
+To make the transition to TensorFlow 2.0 as seamless as possible, the TensorFlow team has created
+the [`tf_upgrade_v2`](https://www.tensorflow.org/guide/upgrade) utility to help transition legacy
+code to the new API.
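+
+For example, to convert a whole project tree (the directory and file names here are only
+placeholders), the script can be invoked as follows:
+
+```console
+marie@compute$ tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt
+```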
+
+## Keras
+
+[Keras](https://keras.io) is a high-level neural network API, written in Python and capable
+of running on top of TensorFlow. Please check the software modules list via
+
+```console
+marie@compute$ module spider Keras
+[...]
+```
+
+to find out which Keras modules are available on your partition. TensorFlow should be automatically
+loaded as a dependency. After loading the module, you can use Keras as usual.
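+
+A minimal sketch using the Keras API bundled with TensorFlow (`tf.keras`; the standalone `keras`
+package provides an equivalent interface) might look like this:
+
+```python
+import tensorflow as tf
+
+# a tiny fully-connected model, only to verify that the Keras API is usable
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
+    tf.keras.layers.Dense(1),
+])
+model.compile(optimizer="adam", loss="mse")
+model.summary()
+```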
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorflow_container_on_hpcda.md b/doc.zih.tu-dresden.de/docs/software/tensorflow_container_on_hpcda.md
deleted file mode 100644
index 7b77f7da32f720efa0145971b1d3b9b9612a3e92..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/tensorflow_container_on_hpcda.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Container on HPC-DA (TensorFlow, PyTorch)
-
-<span class="twiki-macro RED"></span> **Note: This page is under
-construction** <span class="twiki-macro ENDCOLOR"></span>
-
-\<span style="font-size: 1em;">A container is a standard unit of
-software that packages up code and all its dependencies so the
-application runs quickly and reliably from one computing environment to
-another.\</span>
-
-**Prerequisites:** To work with Tensorflow, you need \<a href="Login"
-target="\_blank">access\</a> for the Taurus system and basic knowledge
-about containers, Linux systems.
-
-**Aim** of this page is to introduce users on how to use Machine
-Learning Frameworks such as TensorFlow or PyTorch on the \<a
-href="HPCDA" target="\_self">HPC-DA\</a> system - part of the TU Dresden
-HPC system.
-
-Using a container is one of the options to use Machine learning
-workflows on Taurus. Using containers gives you more flexibility working
-with modules and software but at the same time required more effort.
-
-\<span style="font-size: 1em;">On Taurus \</span>\<a
-href="<https://sylabs.io/>" target="\_blank">Singularity\</a>\<span
-style="font-size: 1em;"> used as a standard container solution.
-Singularity enables users to have full control of their environment.
-Singularity containers can be used to package entire scientific
-workflows, software and libraries, and even data. This means that
-\</span>**you dont have to ask an HPC support to install anything for
-you - you can put it in a Singularity container and run!**\<span
-style="font-size: 1em;">As opposed to Docker (the most famous container
-solution), Singularity is much more suited to being used in an HPC
-environment and more efficient in many cases. Docker containers also can
-easily be used in Singularity.\</span>
-
-Future information is relevant for the HPC-DA system (ML partition)
-based on Power9 architecture.
-
-In some cases using Singularity requires a Linux machine with root
-privileges, the same architecture and a compatible kernel. For many
-reasons, users on Taurus cannot be granted root permissions. A solution
-is a Virtual Machine (VM) on the ml partition which allows users to gain
-root permissions in an isolated environment. There are two main options
-on how to work with VM on Taurus:
-
-1\. [VM tools](vm_tools.md). Automative algorithms for using virtual
-machines;
-
-2\. [Manual method](virtual_machines.md). It required more operations but gives you
-more flexibility and reliability.
-
-Short algorithm to run the virtual machine manually:
-
-    srun -p ml -N 1 -c 4 --hint=nomultithread --cloud=kvm --pty /bin/bash<br />cat ~/.cloud_$SLURM_JOB_ID                                                          #Example output: ssh root@192.168.0.1<br />ssh root@192.168.0.1                                                                #Copy and paste output from the previous command     <br />./mount_host_data.sh 
-
-with VMtools:
-
-VMtools contains two main programs:
-**\<span>buildSingularityImage\</span>** and
-**\<span>startInVM.\</span>**
-
-Main options on how to create a container on ML nodes:
-
-1\. Create a container from the definition
-
-1.1 Create a Singularity definition from the Dockerfile.
-
-\<span style="font-size: 1em;">2. Importing container from the \</span>
-[DockerHub](https://hub.docker.com/search?q=ppc64le&type=image&page=1)\<span
-style="font-size: 1em;"> or \</span>
-[SingularityHub](https://singularity-hub.org/)
-
-Two main sources for the Tensorflow containers for the Power9
-architecture:
-
-<https://hub.docker.com/r/ibmcom/tensorflow-ppc64le>
-
-<https://hub.docker.com/r/ibmcom/powerai>
-
-Pytorch:
-
-<https://hub.docker.com/r/ibmcom/powerai>
-
--- Main.AndreiPolitov - 2020-01-03
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md b/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md
deleted file mode 100644
index a8dee14a25a9e7c82ed1977ad3e573defd4e791a..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md
+++ /dev/null
@@ -1,252 +0,0 @@
-# Tensorflow on Jupyter Notebook
-
-%RED%Note: This page is under construction<span
-class="twiki-macro ENDCOLOR"></span>
-
-Disclaimer: This page dedicates a specific question. For more general
-questions please check the JupyterHub webpage.
-
-The Jupyter Notebook is an open-source web application that allows you
-to create documents that contain live code, equations, visualizations,
-and narrative text. \<span style="font-size: 1em;">Jupyter notebook
-allows working with TensorFlow on Taurus with GUI (graphic user
-interface) and the opportunity to see intermediate results step by step
-of your work. This can be useful for users who dont have huge experience
-with HPC or Linux. \</span>
-
-**Prerequisites:** To work with Tensorflow and jupyter notebook you need
-\<a href="Login" target="\_blank">access\</a> for the Taurus system and
-basic knowledge about Python, SLURM system and the Jupyter notebook.
-
-\<span style="font-size: 1em;"> **This page aims** to introduce users on
-how to start working with TensorFlow on the [HPCDA](../jobs_and_resources/hpcda.md) system - part
-of the TU Dresden HPC system with a graphical interface.\</span>
-
-## Get started with Jupyter notebook
-
-Jupyter notebooks are a great way for interactive computing in your web
-browser. Jupyter allows working with data cleaning and transformation,
-numerical simulation, statistical modelling, data visualization and of
-course with machine learning.
-
-\<span style="font-size: 1em;">There are two general options on how to
-work Jupyter notebooks using HPC. \</span>
-
--   \<span style="font-size: 1em;">There is \</span>**\<a
-    href="JupyterHub" target="\_self">jupyterhub\</a>** on Taurus, where
-    you can simply run your Jupyter notebook on HPC nodes. JupyterHub is
-    available [here](https://taurus.hrsk.tu-dresden.de/jupyter)
--   For more specific cases you can run a manually created **remote
-    jupyter server.** \<span style="font-size: 1em;"> You can find the
-    manual server setup [here](deep_learning.md).
-
-\<span style="font-size: 13px;">Keep in mind that with Jupyterhub you
-can't work with some special instruments. However general data analytics
-tools are available. Still and all, the simplest option for beginners is
-using JupyterHub.\</span>
-
-## Virtual environment
-
-\<span style="font-size: 1em;">For working with TensorFlow and python
-packages using virtual environments (kernels) is necessary.\</span>
-
-Interactive code interpreters that are used by Jupyter Notebooks are
-called kernels.\<br />Creating and using your kernel (environment) has
-the benefit that you can install your preferred python packages and use
-them in your notebooks.
-
-A virtual environment is a cooperatively isolated runtime environment
-that allows Python users and applications to install and upgrade Python
-distribution packages without interfering with the behaviour of other
-Python applications running on the same system. So the [Virtual
-environment](https://docs.python.org/3/glossary.html#term-virtual-environment)
-is a self-contained directory tree that contains a Python installation
-for a particular version of Python, plus several additional packages. At
-its core, the main purpose of Python virtual environments is to create
-an isolated environment for Python projects. Python virtual environment is
-the main method to work with Deep Learning software as TensorFlow on the
-[HPCDA](../jobs_and_resources/hpcda.md) system.
-
-### Conda and Virtualenv
-
-There are two methods of how to work with virtual environments on
-Taurus. **Vitualenv (venv)** is a
-standard Python tool to create isolated Python environments. We
-recommend using venv to work with Tensorflow and Pytorch on Taurus. It
-has been integrated into the standard library under
-the [venv](https://docs.python.org/3/library/venv.html).
-However, if you have reasons (previously created environments etc) you
-could easily use conda. The conda is the second way to use a virtual
-environment on the Taurus.
-[Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)
-is an open-source package management system and environment management system
-from the Anaconda.
-
-**Note:** Keep in mind that you **can not** use conda for working with
-the virtual environments previously created with Vitualenv tool and vice
-versa!
-
-This example shows how to start working with environments and prepare
-environment (kernel) for working with Jupyter server
-
-    srun -p ml --gres=gpu:1 -n 1 --pty --mem-per-cpu=8000 bash   #Job submission in ml nodes with 1 gpu on 1 node with 8000 mb.
-
-    module load modenv/ml                    #example output: The following have been reloaded with a version change:  1) modenv/scs5 => modenv/ml
-
-    mkdir python-virtual-environments        #create folder for your environments
-    cd python-virtual-environments           #go to folder
-    module load TensorFlow                   #load TensorFlow module. Example output: Module TensorFlow/1.10.0-PythonAnaconda-3.6 and 1 dependency loaded.
-    which python                             #check which python are you using
-    python3 -m venv --system-site-packages env               #create virtual environment "env" which inheriting with global site packages
-    source env/bin/activate                                  #Activate virtual environment "env". Example output: (env) bash-4.2$
-    module load TensorFlow                                   #load TensorFlow module in the virtual environment
-
-The inscription (env) at the beginning of each line represents that now
-you are in the virtual environment.
-
-Now you can check the working capacity of the current environment.
-
-    python                                                       #start python
-    import tensorflow as tf
-    print(tf.VERSION)                                            #example output: 1.14.0
-
-### Install Ipykernel
-
-Ipykernel is an interactive Python shell and a Jupyter kernel to work
-with Python code in Jupyter notebooks. The IPython kernel is the Python
-execution backend for Jupyter. The Jupyter Notebook
-automatically ensures that the IPython kernel is available.
-
-```
-    (env) bash-4.2$ pip install ipykernel                        #example output: Collecting ipykernel
-    ...
-                                                                 #example output: Successfully installed ... ipykernel-5.1.0 ipython-7.5.0 ...
-
-    (env) bash-4.2$ python -m ipykernel install --user --name env --display-name="env"
-
-                                              #example output: Installed kernelspec my-kernel in .../.local/share/jupyter/kernels/env
-    [install now additional packages for your notebooks]
-```
-
-Deactivate the virtual environment
-
-    (env) bash-4.2$ deactivate
-
-So now you have a virtual environment with included TensorFlow module.
-You can use this workflow for your purposes particularly for the simple
-running of your jupyter notebook with Tensorflow code.
-
-## Examples and running the model
-
-Below are brief explanations examples of Jupyter notebooks with
-Tensorflow models which you can run on ml nodes of HPC-DA. Prepared
-examples of TensorFlow models give you an understanding of how to work
-with jupyterhub and tensorflow models. It can be useful and instructive
-to start your acquaintance with Tensorflow and HPC-DA system from these
-simple examples.
-
-You can use a [remote Jupyter server](../access/jupyterhub.md). For simplicity, we
-will recommend using Jupyterhub for our examples.
-
-JupyterHub is available [here](https://taurus.hrsk.tu-dresden.de/jupyter)
-
-Please check updates and details [JupyterHub](../access/jupyterhub.md). However,
-the general pipeline can be briefly explained as follows.
-
-After logging, you can start a new session and configure it. There are
-simple and advanced forms to set up your session. On the simple form,
-you have to choose the "IBM Power (ppc64le)" architecture. You can
-select the required number of CPUs and GPUs. For the acquaintance with
-the system through the examples below the recommended amount of CPUs and
-1 GPU will be enough. With the advanced form, you can use the
-configuration with 1 GPU and 7 CPUs. To access all your workspaces
-use " / " in the workspace scope.
-
-You need to download the file with a jupyter notebook that already
-contains all you need for the start of the work. Please put the file
-into your previously created virtual environment in your working
-directory or use the kernel for your notebook.
-
-Note: You could work with simple examples in your home directory but according to
-[new storage concept](../data_lifecycle/hpc_storage_concept2019.md) please use
-[workspaces](../data_lifecycle/workspaces.md) for your study and work projects**.
-For this reason, you have to use advanced options and put "/" in "Workspace scope" field.
-
-To download the first example (from the list below) into your previously
-created virtual environment you could use the following command:
-
-```
-    ws_list
-    cd <name_of_your_workspace>                  #go to workspace
-
-    wget https://doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/TensorFlowOnJupyterNotebook/Mnistmodel.zip
-    unzip Example_TensorFlow_Automobileset.zip
-```
-
-Also, you could use kernels for all notebooks, not only for them which placed
-in your virtual environment. See the [jupyterhub](../access/jupyterhub.md) page.
-
-### Examples:
-
-1\. Simple MNIST model. The MNIST database is a large database of
-handwritten digits that is commonly used for \<a
-href="<https://en.wikipedia.org/wiki/Training_set>" title="Training
-set">t\</a>raining various image processing systems. This model
-illustrates using TF-Keras API. \<a
-href="<https://www.tensorflow.org/guide/keras>"
-target="\_top">Keras\</a> is TensorFlow's high-level API. Tensorflow and
-Keras allow us to import and download the MNIST dataset directly from
-their API. Recommended parameters for running this model is 1 GPU and 7
-cores (28 thread)
-
-[doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/TensorFlowOnJupyterNotebook/Mnistmodel.zip]**todo**(Mnistmodel.zip)
-
-### Running the model
-
-\<span style="font-size: 1em;">Documents are organized with tabs and a
-very versatile split-screen feature. On the left side of the screen, you
-can open your file. Use 'File-Open from Path' to go to your workspace
-(e.g. /scratch/ws/\<username-name_of_your_ws>). You could run each cell
-separately step by step and analyze the result of each step. Default
-command for running one cell Shift+Enter'. Also, you could run all cells
-with the command 'run all cells' how presented on the picture
-below\</span>
-
-**todo** \<img alt="Screenshot_from_2019-09-03_15-20-16.png" height="250"
-src="Screenshot_from_2019-09-03_15-20-16.png"
-title="Screenshot_from_2019-09-03_15-20-16.png" width="436" />
-
-#### Additional advanced models
-
-1\. A simple regression model uses [Automobile
-dataset](https://archive.ics.uci.edu/ml/datasets/Automobile). In a
-regression problem, we aim to predict the output of a continuous value,
-in this case, we try to predict fuel efficiency. This is the simple
-model created to present how to work with a jupyter notebook for the
-TensorFlow models. Recommended parameters for running this model is 1
-GPU and 7 cores (28 thread)
-
-[doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/TensorFlowOnJupyterNotebook/Example_TensorFlow_Automobileset.zip]**todo**(Example_TensorFlow_Automobileset.zip)
-
-2\. The regression model uses the
-[dataset](https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data)
-with meteorological data from the Beijing airport and the US embassy.
-The data set contains almost 50 thousand on instances and therefore
-needs more computational effort. Recommended parameters for running this
-model is 1 GPU and 7 cores (28 threads)
-
-[doc.zih.tu-dresden.de/hpc-wiki/pub/Compendium/TensorFlowOnJupyterNotebook/Example_TensorFlow_Meteo_airport.zip]**todo**(Example_TensorFlow_Meteo_airport.zip)
-
-**Note**: All examples created only for study purposes. The main aim is
-to introduce users of the HPC-DA system of TU-Dresden with TensorFlow
-and Jupyter notebook. Examples do not pretend to completeness or
-science's significance. Feel free to improve the models and use them for
-your study.
-
--   [Mnistmodel.zip]**todo**(Mnistmodel.zip): Mnistmodel.zip
--   [Example_TensorFlow_Automobileset.zip]**todo**(Example_TensorFlow_Automobileset.zip):
-    Example_TensorFlow_Automobileset.zip
--   [Example_TensorFlow_Meteo_airport.zip]**todo**(Example_TensorFlow_Meteo_airport.zip):
-    Example_TensorFlow_Meteo_airport.zip
--   [Example_TensorFlow_3D_road_network.zip]**todo**(Example_TensorFlow_3D_road_network.zip):
-    Example_TensorFlow_3D_road_network.zip
diff --git a/doc.zih.tu-dresden.de/docs/software/virtual_machines.md b/doc.zih.tu-dresden.de/docs/software/virtual_machines.md
index 5104c7b35587aaeaca86d64419ffd8965d2fa27b..c6c660d3c5ac052f3362ad950f6ad395e4420bdf 100644
--- a/doc.zih.tu-dresden.de/docs/software/virtual_machines.md
+++ b/doc.zih.tu-dresden.de/docs/software/virtual_machines.md
@@ -1,88 +1,89 @@
-# Virtual machine on Taurus
+# Virtual Machines
 
-The following instructions are primarily aimed at users who want to build their
-[Singularity](containers.md) containers on Taurus.
+The following instructions are primarily aimed at users who want to build their own
+[Singularity](containers.md) containers on ZIH systems.
 
 The Singularity container setup requires a Linux machine with root privileges, the same architecture
 and a compatible kernel. If some of these requirements can not be fulfilled, then there is
-also the option of using the provided virtual machines on Taurus.
+also the option of using the provided virtual machines (VM) on ZIH systems.
 
-Currently, starting VMs is only possible on ML and HPDLF nodes.  The VMs on the ML nodes are used to
-build singularity containers for the Power9 architecture and the HPDLF nodes to build singularity
-containers for the x86 architecture.
+Currently, starting VMs is only possible on the partitions `ml` and `hpdlf`. The VMs on the
+partition `ml` are used to build Singularity containers for the Power9 architecture, whereas the
+VMs on the partition `hpdlf` are used to build Singularity containers for the x86 architecture.
 
-## Create a virtual machine
+## Create a Virtual Machine
 
-The `--cloud=kvm` SLURM parameter specifies that a virtual machine should be started.
+The `--cloud=kvm` Slurm parameter specifies that a virtual machine should be started.
 
-### On Power9 architecture
+### On Power9 Architecture
 
-```Bash
-rotscher@tauruslogin3:~&gt; srun -p ml -N 1 -c 4 --hint=nomultithread --cloud=kvm --pty /bin/bash
+```console
+marie@login$ srun -p ml -N 1 -c 4 --hint=nomultithread --cloud=kvm --pty /bin/bash
 srun: job 6969616 queued and waiting for resources
 srun: job 6969616 has been allocated resources
 bash-4.2$
 ```
 
-### On x86 architecture
+### On x86 Architecture
 
-```Bash
-rotscher@tauruslogin3:~&gt; srun -p hpdlf -N 1 -c 4 --hint=nomultithread --cloud=kvm --pty /bin/bash
+```console
+marie@login$ srun -p hpdlf -N 1 -c 4 --hint=nomultithread --cloud=kvm --pty /bin/bash
 srun: job 2969732 queued and waiting for resources
 srun: job 2969732 has been allocated resources
 bash-4.2$
 ```
 
-## Access virtual machine
+## Access a Virtual Machine
 
-Since the security issue on Taurus, we restricted the file system permissions.  Now you have to wait
-until the file /tmp/${SLURM_JOB_USER}\_${SLURM_JOB_ID}/activate is created, then you can try to ssh
-into the virtual machine (VM), but it could be that the VM needs some more seconds to boot and start
-the SSH daemon. So you may need to try the `ssh` command multiple times till it succeeds.
+Due to a security issue on ZIH systems, we restricted the filesystem permissions. Now, you have to
+wait until the file `/tmp/${SLURM_JOB_USER}_${SLURM_JOB_ID}/activate` is created. Then, you can try
+to connect to the virtual machine via `ssh`. However, the virtual machine might need some more
+seconds to boot and start the SSH daemon, so you may need to retry the `ssh` command multiple times
+until it succeeds.
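+
+For instance, a minimal sketch that waits from within your job shell until the activation file
+exists (assuming the Slurm environment variables `SLURM_JOB_USER` and `SLURM_JOB_ID` are set there)
+and then sources it could look like this:
+
+```bash
+# Wait until the VM setup has created the activation file, then source it
+until [ -f "/tmp/${SLURM_JOB_USER}_${SLURM_JOB_ID}/activate" ]; do
+    sleep 1
+done
+source "/tmp/${SLURM_JOB_USER}_${SLURM_JOB_ID}/activate"
+```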
 
-```Bash
-bash-4.2$ cat /tmp/rotscher_2759627/activate 
+```console
+bash-4.2$ cat /tmp/marie_2759627/activate
 #!/bin/bash
 
-if ! grep -q -- "Key for the VM on the ml partition" "/home/rotscher/.ssh/authorized_keys" &gt;& /dev/null; then
-  cat "/tmp/rotscher_2759627/kvm.pub" &gt;&gt; "/home/rotscher/.ssh/authorized_keys"
+if ! grep -q -- "Key for the VM on the partition ml" "/home/marie/.ssh/authorized_keys" >& /dev/null; then
+  cat "/tmp/marie_2759627/kvm.pub" >> "/home/marie/.ssh/authorized_keys"
 else
-  sed -i "s|.*Key for the VM on the ml partition.*|ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3siZfQ6vQ6PtXPG0RPZwtJXYYFY73TwGYgM6mhKoWHvg+ZzclbBWVU0OoU42B3Ddofld7TFE8sqkHM6M+9jh8u+pYH4rPZte0irw5/27yM73M93q1FyQLQ8Rbi2hurYl5gihCEqomda7NQVQUjdUNVc6fDAvF72giaoOxNYfvqAkw8lFyStpqTHSpcOIL7pm6f76Jx+DJg98sXAXkuf9QK8MurezYVj1qFMho570tY+83ukA04qQSMEY5QeZ+MJDhF0gh8NXjX/6+YQrdh8TklPgOCmcIOI8lwnPTUUieK109ndLsUFB5H0vKL27dA2LZ3ZK+XRCENdUbpdoG2Czz Key for the VM on the ml partition|" "/home/rotscher/.ssh/authorized_keys"
+  sed -i "s|.*Key for the VM on the partition ml.*|ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3siZfQ6vQ6PtXPG0RPZwtJXYYFY73TwGYgM6mhKoWHvg+ZzclbBWVU0OoU42B3Ddofld7TFE8sqkHM6M+9jh8u+pYH4rPZte0irw5/27yM73M93q1FyQLQ8Rbi2hurYl5gihCEqomda7NQVQUjdUNVc6fDAvF72giaoOxNYfvqAkw8lFyStpqTHSpcOIL7pm6f76Jx+DJg98sXAXkuf9QK8MurezYVj1qFMho570tY+83ukA04qQSMEY5QeZ+MJDhF0gh8NXjX/6+YQrdh8TklPgOCmcIOI8lwnPTUUieK109ndLsUFB5H0vKL27dA2LZ3ZK+XRCENdUbpdoG2Czz Key for the VM on the partition ml|" "/home/marie/.ssh/authorized_keys"
 fi
 
-ssh -i /tmp/rotscher_2759627/kvm root@192.168.0.6
-bash-4.2$ source /tmp/rotscher_2759627/activate 
+ssh -i /tmp/marie_2759627/kvm root@192.168.0.6
+bash-4.2$ source /tmp/marie_2759627/activate
 Last login: Fri Jul 24 13:53:48 2020 from gateway
-[root@rotscher_2759627 ~]#
+[root@marie_2759627 ~]#
 ```
 
-## Example usage
+## Example Usage
 
 ## Automation
 
-We provide [Tools](vm_tools.md) to automate these steps. You may just type `startInVM --arch=power9`
-on a tauruslogin node and you will be inside the VM with everything mounted.
+We provide [tools](virtual_machines_tools.md) to automate these steps. You may just type `startInVM
+--arch=power9` on a login node and you will be inside the VM with everything mounted.
 
 ## Known Issues
 
 ### Temporary Memory
 
-The available space inside the VM can be queried with `df -h`. Currently the whole VM has 8G and
-with the installed operating system, 6.6GB of available space.
+The available space inside the VM can be queried with `df -h`. Currently, the whole VM has 8 GB, of
+which 6.6 GB are available after the installation of the operating system.
 
 Sometimes the Singularity build might fail because the disk runs out of space. In this case it
 might be enough to delete leftover temporary files from Singularity:
 
-```Bash
+```console
 rm -rf /tmp/sbuild-*
 ```
 
 If that does not help, e.g., because one build alone needs more than the available disk space, then
 it will be necessary to use the tmp folder on scratch. In order to ensure that the files in the
-temporary folder will be owned by root, it is necessary to set up an image inside /scratch/tmp
-instead of using it directly. E.g., to create a 25GB of temporary memory image:
+temporary folder will be owned by root, it is necessary to set up an image inside `/scratch/tmp`
+instead of using it directly. E.g., to create a 25 GB image for temporary storage:
 
-```Bash
+```console
 tmpDir="$( mktemp -d --tmpdir=/host_data/tmp )" && tmpImg="$tmpDir/singularity-build-temp-dir"
 export LANG_BACKUP=$LANG
 unset LANG
@@ -90,13 +91,17 @@ truncate -s 25G "$tmpImg.ext4" && echo yes | mkfs.ext4 "$tmpImg.ext4"
 export LANG=$LANG_BACKUP
 ```
 
-The image can now be mounted and with the **SINGULARITY_TMPDIR** environment variable can be
+The image can now be mounted and specified as the temporary directory for Singularity builds via
+the `SINGULARITY_TMPDIR` environment variable. Unfortunately, because of an open Singularity
+[bug](https://github.com/sylabs/singularity/issues/32), it should be avoided to mount
-the image using **/dev/loop0**.
+the image using `/dev/loop0`.
 
-```Bash
-mkdir -p "$tmpImg" && i=1 && while test -e "/dev/loop$i"; do (( ++i )); done && mknod -m 0660 "/dev/loop$i" b 7 "$i"<br />mount -o loop="/dev/loop$i" "$tmpImg"{.ext4,}<br /><br />export SINGULARITY_TMPDIR="$tmpImg"<br /><br />singularity build my-container.{sif,def}
+```console
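+# Create the mount point, pick a free loop device, create its device node, and mount the image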
+mkdir -p "$tmpImg" && i=1 && while test -e "/dev/loop$i"; do (( ++i )); done && mknod -m 0660 "/dev/loop$i" b 7 "$i"
+mount -o loop="/dev/loop$i" "$tmpImg"{.ext4,}
+
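+# Use the mounted image as temporary directory for the Singularity build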
+export SINGULARITY_TMPDIR="$tmpImg"
+singularity build my-container.{sif,def}
 ```
 
 The architecture of the base image is automatically chosen when you use an image from DockerHub.
@@ -106,4 +111,4 @@ Bootstraps **shub** and **library** should be avoided.
 ### Transport Endpoint is not Connected
 
 This happens when the SSHFS mount gets unmounted because it is not very stable. It is sufficient to
-run `\~/mount_host_data.sh` again or just the sshfs command inside that script.
+run `~/mount_host_data.sh` again or just the SSHFS command inside that script.
diff --git a/doc.zih.tu-dresden.de/docs/software/vm_tools.md b/doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md
similarity index 50%
rename from doc.zih.tu-dresden.de/docs/software/vm_tools.md
rename to doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md
index 5a4d58a7e2ac7a1532d5029312e3ff3b479d7939..0b03ddf927aeed68d8726797ed04db373d24b9b3 100644
--- a/doc.zih.tu-dresden.de/docs/software/vm_tools.md
+++ b/doc.zih.tu-dresden.de/docs/software/virtual_machines_tools.md
@@ -1,71 +1,70 @@
-# Singularity on Power9 / ml partition
+# Singularity on Partition `ml`
 
-Building Singularity containers from a recipe on Taurus is normally not possible due to the
-requirement of root (administrator) rights, see [Containers](containers.md). For obvious reasons
-users on Taurus cannot be granted root permissions.
+!!! note "Root privileges"
 
-The solution is to build your container on your local Linux machine by executing something like
+    Building Singularity containers from a recipe on ZIH system is normally not possible due to the
+    requirement of root (administrator) rights, see [Containers](containers.md). For obvious reasons
+    users cannot be granted root permissions.
 
-```Bash
-sudo singularity build myContainer.sif myDefinition.def
-```
-
-Then you can copy the resulting myContainer.sif to Taurus and execute it there.
+The solution is to build your container on your local Linux workstation using Singularity and copy
+it to ZIH systems for execution.
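+
+A minimal sketch of this workflow (the file names and the target host are placeholders) could look
+like this:
+
+```console
+# On your local workstation (root privileges required)
+marie@local$ sudo singularity build myContainer.sif myDefinition.def
+# Copy the resulting container to ZIH systems, e.g., into your home directory
+marie@local$ scp myContainer.sif taurus.hrsk.tu-dresden.de:
+```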
 
-This does **not** work on the ml partition as it uses the Power9 architecture which your laptop
-likely doesn't.
+**This does not work on the partition `ml`** as it uses the Power9 architecture which your
+workstation likely doesn't.
 
-For this we provide a Virtual Machine (VM) on the ml partition which allows users to gain root
+For this we provide a Virtual Machine (VM) on the partition `ml` which allows users to gain root
 permissions in an isolated environment. The workflow to use this manually is described at
-[another page](virtual_machines.md) but is quite cumbersome.
+[this page](virtual_machines.md) but is quite cumbersome.
 
 To make this easier two programs are provided: `buildSingularityImage` and `startInVM` which do what
 they say. The latter is for more advanced use cases so you should be fine using
-*buildSingularityImage*, see the following section.
+`buildSingularityImage`, see the following section.
 
-**IMPORTANT:** You need to have your default SSH key without a password for the scripts to work as
-entering a password through the scripts is not supported.
+!!! note "SSH key without password"
+
+    You need to have your default SSH key without a password for the scripts to work as
+    entering a password through the scripts is not supported.
 
 **The recommended workflow** is to create and test a definition file locally. You usually start from
 a base Docker container. Those typically exist for different architectures but with a common name
-(e.g.  'ubuntu:18.04'). Singularity automatically uses the correct Docker container for your current
+(e.g.  `ubuntu:18.04`). Singularity automatically uses the correct Docker container for your current
 architecture when building. So in most cases you can write your definition file, build it and test
-it locally, then move it to Taurus and build it on Power9 without any further changes.  However,
-sometimes Docker containers for different architectures have different suffixes, in which case you'd
-need to change that when moving to Taurus.
+it locally, then move it to ZIH systems and build it on Power9 (partition `ml`) without any further
+changes. However, sometimes Docker containers for different architectures have different suffixes,
+in which case you'd need to change that when moving to ZIH systems.
 
-## Building a Singularity container in a job
+## Build a Singularity Container in a Job
 
-To build a singularity container on Taurus simply run:
+To build a Singularity container on ZIH systems simply run:
 
-```Bash
-buildSingularityImage --arch=power9 myContainer.sif myDefinition.def
+```console
+marie@login$ buildSingularityImage --arch=power9 myContainer.sif myDefinition.def
 ```
 
-This command will submit a batch job and immediately return. Note that while "power9" is currently
+This command will submit a batch job and immediately return. Note that while Power9 is currently
 the only supported architecture, the parameter is still required. If you want it to block while the
-image is built and see live output, use the parameter `--interactive`:
+image is built and see live output, add the option `--interactive`:
 
-```Bash
-buildSingularityImage --arch=power9 --interactive myContainer.sif myDefinition.def
+```console
+marie@login$ buildSingularityImage --arch=power9 --interactive myContainer.sif myDefinition.def
 ```
 
 There are more options available which can be shown by running `buildSingularityImage --help`. All
 have reasonable defaults. The most important ones are listed below, followed by a short example:
 
-- `--time <time>`: Set a higher job time if the default time is not
-  enough to build your image and your job is cancelled before completing. The format is the same
-  as for SLURM.
-- `--tmp-size=<size in GB>`: Set a size used for the temporary
+* `--time <time>`: Set a higher job time if the default time is not
+  enough to build your image and your job is canceled before completing. The format is the same as
+  for Slurm.
+* `--tmp-size=<size in GB>`: Set a size used for the temporary
   location of the Singularity container. Basically the size of the extracted container.
-- `--output=<file>`: Path to a file used for (log) output generated
+* `--output=<file>`: Path to a file used for (log) output generated
   while building your container.
-- Various singularity options are passed through. E.g.
+* Various Singularity options are passed through. E.g.
   `--notest, --force, --update`. See, e.g., `singularity --help` for details.
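+
+For instance, a build with a longer time limit, a larger temporary directory, and a log file (the
+values are placeholders, adjust them to your build) could be requested like this:
+
+```console
+marie@login$ buildSingularityImage --arch=power9 --time 08:00:00 --tmp-size=25 --output=build.log myContainer.sif myDefinition.def
+```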
 
 For **advanced users** it is also possible to manually request a job with a VM (`srun -p ml
 --cloud=kvm ...`) and then use this script to build a Singularity container from within the job. In
-this case the `--arch` and other SLURM related parameters are not required. The advantage of using
+this case the `--arch` and other Slurm related parameters are not required. The advantage of using
 this script is that it automates the waiting for the VM and mounting of host directories into it
 (can also be done with `startInVM`) and creates a temporary directory usable with Singularity inside
 the VM controlled by the `--tmp-size` parameter.
@@ -78,31 +77,31 @@ As the build starts in a VM you may not have access to all your files.  It is us
 to refer to local files from inside a definition file anyway as this reduces reproducibility.
 However common directories are available by default. For others, care must be taken. In short:
 
-- `/home/$USER`, `/scratch/$USER` are available and should be used `/scratch/\<group>` also works for
-- all groups the users is in `/projects/\<group>` similar, but is read-only! So don't use this to
+* `/home/$USER` and `/scratch/$USER` are available and should be used.
+* `/scratch/<group>` also works for all groups the user is in.
+* `/projects/<group>` works similarly, but is read-only! So don't use this to store your generated
+  container directly, but rather move it there afterwards.
-- /tmp is the VM local temporary directory. All files put here will be lost!
+* `/tmp` is the VM local temporary directory. All files put here will be lost!
 
 If the current directory is inside (or equal to) one of the above (except `/tmp`), then relative paths
 for container and definition work as the script changes to the VM equivalent of the current
 directory.  Otherwise you need to use absolute paths. Using `~` in place of `$HOME` does work too.
 
-Under the hood, the filesystem of Taurus is mounted via SSHFS at `/host_data`, so if you need any
+Under the hood, the filesystem of ZIH systems is mounted via SSHFS at `/host_data`, so if you need any
 other files they can be found there.
 
-There is also a new SSH key named "kvm" which is created by the scripts and authorized inside the VM
-to allow for password-less access to SSHFS.  This is stored at `~/.ssh/kvm` and regenerated if it
+There is also a new SSH key named `kvm` which is created by the scripts and authorized inside the VM
+to allow for password-less access to SSHFS. This is stored at `~/.ssh/kvm` and regenerated if it
 does not exist. It is also added to `~/.ssh/authorized_keys`. Note that removing the key file does
 not remove it from `authorized_keys`, so remove it manually if you need to. It can be easily
-identified by the comment on the key.  However, removing this key is **NOT** recommended, as it
+identified by the comment on the key. However, removing this key is **NOT** recommended, as it
 needs to be re-generated on every script run.
 
-## Starting a Job in a VM
+## Start a Job in a VM
 
 Especially when developing a Singularity definition file it might be useful to get a shell directly
 on a VM. To do so simply run:
 
-```Bash
+```console
 startInVM --arch=power9
 ```
 
@@ -114,10 +113,11 @@ build` commands.
 As usual more options can be shown by running `startInVM --help`, the most important one being
 `--time`.
 
-There are 2 special use cases for this script: 1 Execute an arbitrary command inside the VM instead
-of getting a bash by appending the command to the script. Example: \<pre>startInVM --arch=power9
-singularity build \~/myContainer.sif \~/myDefinition.def\</pre> 1 Use the script in a job manually
-allocated via srun/sbatch. This will work the same as when running outside a job but will **not**
-start a new job. This is useful for using it inside batch scripts, when you already have an
-allocation or need special arguments for the job system. Again you can run an arbitrary command by
-passing it to the script.
+There are two special use cases for this script:
+
+1. Execute an arbitrary command inside the VM instead of getting a shell by appending the command to
+   the script. Example: `startInVM --arch=power9 singularity build ~/myContainer.sif ~/myDefinition.def`
+1. Use the script in a job manually allocated via srun/sbatch. This will work the same as when
+   running outside a job but will **not** start a new job. This is useful for using it inside batch
+   scripts, when you already have an allocation or need special arguments for the job system. Again
+   you can run an arbitrary command by passing it to the script (see the sketch below).
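+
+A minimal sketch for the second case (the file names are placeholders) could look like this:
+
+```console
+marie@login$ srun -p ml -N 1 -c 4 --hint=nomultithread --cloud=kvm --pty /bin/bash
+bash-4.2$ startInVM --arch=power9 singularity build ~/myContainer.sif ~/myDefinition.def
+```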
diff --git a/doc.zih.tu-dresden.de/docs/software/visualization.md b/doc.zih.tu-dresden.de/docs/software/visualization.md
index b01739eec80bc9f11f9eefe07bbd2556a15651ea..328acc490f5fa5c65e687d50bf9f43ceae44c541 100644
--- a/doc.zih.tu-dresden.de/docs/software/visualization.md
+++ b/doc.zih.tu-dresden.de/docs/software/visualization.md
@@ -2,201 +2,207 @@
 
 ## ParaView
 
-[ParaView](https://paraview.org) is an open-source, multi-platform data
-analysis and visualization application. It is available on Taurus under
-the `ParaView` [modules](modules.md#modules-environment)
+[ParaView](https://paraview.org) is an open-source, multi-platform data analysis and visualization
+application. The ParaView package comprises different tools which are designed to meet interactive,
+batch and in-situ workflows.
 
-```Bash
-taurus$ module avail ParaView
+ParaView is available on ZIH systems from the [modules system](modules.md#modules-environment). The
+following command lists the available versions:
+
+```console
+marie@login$ module avail ParaView
 
    ParaView/5.4.1-foss-2018b-mpi  (D)    ParaView/5.5.2-intel-2018a-mpi                ParaView/5.7.0-osmesa
    ParaView/5.4.1-intel-2018a-mpi        ParaView/5.6.2-foss-2019b-Python-3.7.4-mpi    ParaView/5.7.0
+[...]
 ```
 
-The ParaView package comprises different tools which are designed to
-meet interactive, batch and in-situ workflows.
-
 ## Batch Mode - PvBatch
 
-ParaView can run in batch mode, i.e., without opening the ParaView GUI,
-executing a python script. This way, common visualization tasks can be
-automated. There are two Python interfaces: - *pvpython* and *pvbatch*.
-*pvbatch* only accepts commands from input scripts, and it will run in
-parallel if it was built using MPI.
-
-ParaView is shipped with a prebuild MPI library and ***pvbatch has to be
-invoked using this very mpiexec*** command. Make sure to not use *srun
-or mpiexec* from another MPI module, e.g., check what *mpiexec* is in
-the path:
-
-```Bash
-taurus$ module load ParaView/5.7.0-osmesa
-taurus$ which mpiexec
-/sw/installed/ParaView/5.7.0-osmesa/bin/mpiexec
+ParaView can run in batch mode, i.e., without opening the ParaView GUI, by executing a Python
+script. This way, common visualization tasks can be automated. There are two Python interfaces:
+*PvPython* and *PvBatch*. The interface *PvBatch* only accepts commands from input scripts and runs
+in parallel if it was built using MPI.
+
+!!! note
+
+    ParaView is shipped with a prebuild MPI library and **pvbatch has to be
+    invoked using this very mpiexec** command. Make sure to not use `srun`
+    or `mpiexec` from another MPI module, e.g., check what `mpiexec` is in
+    the path:
+
+    ```console
+    marie@login$ module load ParaView/5.7.0-osmesa
+    marie@login$ which mpiexec
+    /sw/installed/ParaView/5.7.0-osmesa/bin/mpiexec
+    ```
+
+The resources for the MPI processes have to be allocated via the
+[batch system](../jobs_and_resources/slurm.md) option `-c NUM` (not `-n` as it would usually be for
+MPI processes). It might be valuable in terms of runtime to bind/pin the MPI processes to hardware.
+A convenient option is `-bind-to core`. All other options can be obtained by
+
+```console
+marie@login$ mpiexec -bind-to -help
 ```
 
-The resources for the MPI processes have to be allocated via the Slurm option *-c NUM* (not *-n*, as
-it would be usually for MPI processes). It might be valuable in terms of runtime to bind/pin the MPI
-processes to hardware. A convenient option is *-bind-to core*. All other options can be obtained by
-*taurus$ mpiexec -bind-to -help* or from
-[https://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding%7Cwiki.mpich.org].
+or from the
+[MPICH wiki](https://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding).
 
-Jobfile
+In the following, we provide two examples of how to use `pvbatch` from within a jobfile and an
+interactive allocation.
 
-```Bash
-#!/bin/bash
+??? example "Example jobfile"
 
-#SBATCH -N 1
-#SBATCH -c 12
-#SBATCH --time=01:00:00
+    ```Bash
+    #!/bin/bash
 
-# Make sure to only use ParaView
-module purge
-module load ParaView/5.7.0-osmesa
+    #SBATCH -N 1
+    #SBATCH -c 12
+    #SBATCH --time=01:00:00
 
-pvbatch --mpi --force-offscreen-rendering pvbatch-script.py
-```
+    # Make sure to only use ParaView
+    module purge
+    module load ParaView/5.7.0-osmesa
 
-Interactive allocation via `salloc`
+    pvbatch --mpi --force-offscreen-rendering pvbatch-script.py
+    ```
 
-```Bash
-taurus$ salloc -N 1 -c 16 --time=01:00:00 bash
-salloc: Pending job allocation 336202
-salloc: job 336202 queued and waiting for resources
-salloc: job 336202 has been allocated resources
-salloc: Granted job allocation 336202
-salloc: Waiting for resource configuration
-salloc: Nodes taurusi6605 are ready for job
+??? example "Example of interactive allocation using `salloc`"
 
-# Make sure to only use ParaView
-taurus$ module purge
-taurus$ module load ParaView/5.7.0-osmesa
+    ```console
+    marie@login$ salloc -N 1 -c 16 --time=01:00:00 bash
+    salloc: Pending job allocation 336202
+    salloc: job 336202 queued and waiting for resources
+    salloc: job 336202 has been allocated resources
+    salloc: Granted job allocation 336202
+    salloc: Waiting for resource configuration
+    salloc: Nodes taurusi6605 are ready for job
 
-# Go to working directory, e.g. workspace
-taurus$ cd /path/to/workspace
+    # Make sure to only use ParaView
+    marie@compute$ module purge
+    marie@compute$ module load ParaView/5.7.0-osmesa
 
-# Execute pvbatch using 16 MPI processes in parallel on allocated resources
-taurus$ pvbatch --mpi --force-offscreen-rendering pvbatch-script.py 
-```
+    # Go to working directory, e.g., workspace
+    marie@compute$ cd /path/to/workspace
+
+    # Execute pvbatch using 16 MPI processes in parallel on allocated resources
+    marie@compute$ pvbatch --mpi --force-offscreen-rendering pvbatch-script.py
+    ```
 
 ### Using GPUs
 
-ParaView Pvbatch can render offscreen through the Native Platform
-Interface (EGL) on the graphics card (GPUs) specified by the device
-index. For that, use the modules indexed with *-egl*, e.g.
-ParaView/5.9.0-RC1-egl-mpi-Python-3.8, and pass the option
-\_--egl-device-index=$CUDA_VISIBLE*DEVICES*.
+ParaView Pvbatch can render offscreen through the Native Platform Interface (EGL) on the graphics
+cards (GPUs) specified by the device index. For that, make sure to use the modules indexed with
+*-egl*, e.g., `ParaView/5.9.0-RC1-egl-mpi-Python-3.8`, and pass the option
+`--egl-device-index=$CUDA_VISIBLE_DEVICES`.
 
-Jobfile
+??? example "Example jobfile"
 
-```Bash
-#!/bin/bash
+    ```Bash
+    #!/bin/bash
 
-#SBATCH -N 1
-#SBATCH -c 12
-#SBATCH --gres=gpu:2
-#SBATCH --partition=gpu2
-#SBATCH --time=01:00:00
+    #SBATCH -N 1
+    #SBATCH -c 12
+    #SBATCH --gres=gpu:2
+    #SBATCH --partition=gpu2
+    #SBATCH --time=01:00:00
 
-# Make sure to only use ParaView
-module purge
-module load ParaView/5.9.0-RC1-egl-mpi-Python-3.8
+    # Make sure to only use ParaView
+    module purge
+    module load ParaView/5.9.0-RC1-egl-mpi-Python-3.8
 
-mpiexec -n $SLURM_CPUS_PER_TASK -bind-to core pvbatch --mpi --egl-device-index=$CUDA_VISIBLE_DEVICES --force-offscreen-rendering pvbatch-script.py
-#or
-pvbatch --mpi --egl-device-index=$CUDA_VISIBLE_DEVICES --force-offscreen-rendering pvbatch-script.py
-```
+    mpiexec -n $SLURM_CPUS_PER_TASK -bind-to core pvbatch --mpi --egl-device-index=$CUDA_VISIBLE_DEVICES --force-offscreen-rendering pvbatch-script.py
+    #or
+    pvbatch --mpi --egl-device-index=$CUDA_VISIBLE_DEVICES --force-offscreen-rendering pvbatch-script.py
+    ```
 
 ## Interactive Mode
 
-There are different ways of using ParaView on the cluster:
+There are three different ways of using ParaView interactively on ZIH systems:
 
 - GUI via NICE DCV on a GPU node
 - Client-/Server mode with MPI-parallel off-screen-rendering
 - GUI via X forwarding
 
-### Using the GUI via NICE DCV on a GPU node
+### Using the GUI via NICE DCV on a GPU Node
 
 This option provides hardware accelerated OpenGL and might provide the best performance and smooth
 handling. First, you need to open a DCV session, so please follow the instructions under
 [virtual desktops](virtual_desktops.md). Start a terminal (right-click on desktop -> Terminal) in your
 virtual desktop session, then load the ParaView module as usual and start the GUI:
 
-```Bash
-taurus$ module load ParaView/5.7.0
+```console
+marie@dcv$ module load ParaView/5.7.0
+marie@dcv$ paraview
 ```
 
-Since your DCV session already runs inside a job, i.e., it has been
-scheduled to a compute node, no `srun command` is necessary here.
-
-#### Using Client-/Server mode with MPI-parallel offscreen-rendering
-
-ParaView has a built-in client-server architecture, where you run the
-GUI locally on your desktop and connect to a ParaView server instance
-(so-called `pvserver`) on the cluster. The pvserver performs the
-computationally intensive rendering. Note that **your client must be of
-the same version as the server**.
-
-The pvserver can be run in parallel using MPI, but it will only do CPU
-rendering via MESA. For this, you need to load the \*osmesa\*-suffixed
-version of the ParaView module, which supports offscreen-rendering.
-Then, start the `pvserver` via `srun` in parallel using multiple MPI
-processes:
-
-```Bash
-taurus$ module ParaView/5.7.0-osmesa
-taurus$ srun -N1 -n8 --mem-per-cpu=2500 -p interactive --pty pvserver --force-offscreen-rendering
-srun: job 2744818 queued and waiting for resources
-srun: job 2744818 has been allocated resources
-Waiting for client...
-Connection URL: cs://taurusi6612.taurus.hrsk.tu-dresden.de:11111
-Accepting connection(s): taurusi6612.taurus.hrsk.tu-dresden.de:11111
-```
+Since your DCV session already runs inside a job, which has been scheduled to a compute node, no
+`srun` command is necessary here.
+
+#### Using Client-/Server Mode with MPI-parallel Offscreen-Rendering
+
+ParaView has a built-in client-server architecture, where you run the GUI locally on your desktop
+and connect to a ParaView server instance (so-called pvserver) on a cluster. The pvserver performs
+the computationally intensive rendering. Note that **your client must be of the same version as the
+server**.
+
+The pvserver can be run in parallel using MPI, but it will only do CPU rendering via MESA. For this,
+you need to load the `osmesa`-suffixed version of the ParaView modules, which supports
+offscreen-rendering. Then, start the `pvserver` via `srun` in parallel using multiple MPI
+processes.
+
+??? example "Example"
+
+    ```console
+    marie@login$ module load ParaView/5.7.0-osmesa
+    marie@login$ srun -N1 -n8 --mem-per-cpu=2500 -p interactive --pty pvserver --force-offscreen-rendering
+    srun: job 2744818 queued and waiting for resources
+    srun: job 2744818 has been allocated resources
+    Waiting for client...
+    Connection URL: cs://taurusi6612.taurus.hrsk.tu-dresden.de:11111
+    Accepting connection(s): taurusi6612.taurus.hrsk.tu-dresden.de:11111
+    ```
 
 If the default port 11111 is already in use, an alternative port can be specified via `-sp=port`.
 *Once the resources are allocated, the pvserver is started in parallel and the connection
 information is output.*
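+
+For instance, assuming port 22222 is free, the alternative port could be passed like this:
+
+```console
+marie@login$ srun -N1 -n8 --mem-per-cpu=2500 -p interactive --pty pvserver --force-offscreen-rendering -sp=22222
+```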
 
-This contains the node name which your job and server runs on. However,
-since the node names of the cluster are not present in the public domain
-name system (only cluster-internally), you cannot just use this line
-as-is for connection with your client. You first have to resolve the
-name to an IP address on the cluster: Suffix the nodename with **-mn**
-to get the management network (ethernet) address, and pass it to a
-lookup-tool like *host* in another SSH session:
+This output contains the name of the node your job and the pvserver run on. However, since the
+node names of the cluster are not present in the public domain name system (only
+cluster-internally), you cannot just use this line as-is to connect with your client. **You first
+have to resolve** the name to an IP address on ZIH systems: Suffix the node name with `-mn` to get
+the management network (Ethernet) address, and pass it to a lookup-tool like `host` in another SSH
+session:
 
-```Bash
-taurus$ host taurusi6605-mn<br />taurusi6605-mn.taurus.hrsk.tu-dresden.de has address 172.24.140.229
-```Bash
+```console
+marie@login$ host taurusi6605-mn
+taurusi6605-mn.taurus.hrsk.tu-dresden.de has address 172.24.140.229
+```
 
-The SSH tunnel has to be created from the user's localhost. The
-following example will create a forward SSH tunnel to localhost on port
-22222 (or what ever port is prefered):
+The SSH tunnel has to be created from the user's localhost. The following example will create a
+forward SSH tunnel to localhost on port 22222 (or whatever port is preferred):
 
-```Bash
-localhost$ ssh -L 22222:10.10.32.228:11111 userlogin@cara.dlr.de
+```console
+marie@local$ ssh -L 22222:172.24.140.229:11111 <zihlogin>@taurus.hrsk.tu-dresden.de
 ```
 
-The final step is to start ParaView locally on your own machine and add
-the connection
-
--   File→Connect...
--   Add Server
-    -   Name: localhost tunnel
-    -   Server Type: Client / Server
-    -   Host: localhost
-    -   Port: 22222
--   Configure
-    -   Startup Type: Manual
-    -   →Save
--   → Connect
-
-A successful connection is displayed by a *client*connected message
-displayed on the `pvserver` process terminal, and within ParaView's
-Pipeline Browser (instead of it saying builtin). You now are connected
-to the pvserver running on a Taurus node and can open files from the
-cluster's filesystems.
+The final step is to start ParaView locally on your own machine and add the connection:
+
+- File -> Connect...
+- Add Server
+    - Name: localhost tunnel
+    - Server Type: Client / Server
+    - Host: localhost
+    - Port: 22222
+- Configure
+    - Startup Type: Manual
+    - -> Save
+- -> Connect
+
+A successful connection is indicated by a *client connected* message on the `pvserver` process
+terminal and within ParaView's Pipeline Browser (instead of it saying *builtin*). You are now
+connected to the pvserver running on a compute node at ZIH systems and can open files from its
+filesystems.
 
 #### Caveats
 
@@ -206,26 +212,37 @@ use [VPN](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zuga
 or, when coming via the ZIH login gateway (`login1.zih.tu-dresden.de`), use an SSH tunnel. For the
 example IP address from above, this could look like the following:
 
-```Bash
-# Replace "user" with your login name, of course:
-ssh -f -N -L11111:172.24.140.229:11111 user@login1.zih.tu-dresden.de
+```console
+marie@local$ ssh -f -N -L11111:172.24.140.229:11111 <zihlogin>@login1.zih.tu-dresden.de
 ```
 
-This line opens the port 11111 locally and tunnels it via `login1` to the `pvserver` running on the
-Taurus node. Note that you then must instruct your local ParaView client to connect to host
+This command opens the port 11111 locally and tunnels it via `login1` to the `pvserver` running on
+the compute node. Note that you then must instruct your local ParaView client to connect to host
 `localhost` instead. The recommendation, though, is to use VPN, which makes this extra step
 unnecessary.
 
-#### Using the GUI via X forwarding (not recommended)
+#### Using the GUI via X-Forwarding
+
+This approach is not recommended. Even Kitware, the developer of ParaView, states that
+X-forwarding is not supported at all, as it requires OpenGL extensions that are not supported by
+X-forwarding. It might still be usable for very
+requires OpenGL extensions that are not supported by X forwarding. It might still be usable for very
+small examples, but the user experience will not be good. Also, you have to make sure your
+X-forwarding connection provides OpenGL rendering support. Furthermore, especially in newer versions
+of ParaView, you might have to set the environment variable `MESA_GL_VERSION_OVERRIDE=3.2` to fool
+it into thinking your provided GL rendering version is higher than what it actually is.
+
+??? example
+
+    ```console
+    # 1st, connect to ZIH systems using X forwarding (-X).
+    # It is a good idea to also enable compression for such connections (-C):
+    marie@local$ ssh -XC taurus.hrsk.tu-dresden.de
 
-Even the developers, KitWare, say that X forwarding is not supported at
-all by ParaView, as it requires OpenGL extensions that are not supported
-by X forwarding. It might still be usable for very small examples, but
-the user experience will not be good. Also, you have to make sure your X
-forwarding connection provides OpenGL rendering support. Furthermore,
-especially in newer versions of ParaView, you might have to set the
-environment variable MESA_GL_VERSION_OVERRIDE=3.2 to fool it into
-thinking your provided GL rendering version is higher than what it
-actually is. Example:
+    # 2nd, load the ParaView module and override the GL version (if necessary):
+    marie@login$ module load ParaView/5.7.0
+    marie@login$ export MESA_GL_VERSION_OVERRIDE=3.2
 
-    # 1st, connect to Taurus using X forwarding (-X).<br /># It is a good idea to also enable compression for such connections (-C):<br />ssh -XC taurus.hrsk.tu-dresden.de<br /><br /># 2nd, load the ParaView module and override the GL version (if necessary):<br />module Paraview/5.7.0<br />export MESA_GL_VERSION_OVERRIDE=3.2<br /><br /># 3rd, start the ParaView GUI inside an interactive job. Don't forget the --x11 parameter for X forwarding:<br />srun -n1 -c1 -p interactive --mem-per-cpu=2500 --pty --x11=first paraview
+    # 3rd, start the ParaView GUI inside an interactive job. Don't forget the --x11 parameter for X forwarding:
+    marie@login$ srun -n1 -c1 -p interactive --mem-per-cpu=2500 --pty --x11=first paraview
+    ```
diff --git a/doc.zih.tu-dresden.de/docs/specific_software.md b/doc.zih.tu-dresden.de/docs/specific_software.md
deleted file mode 100644
index fd98e303e5448ae7ce128ddfbc4e78c63e754075..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/specific_software.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Use of Specific Software (packages, libraries, etc)
-
-## Modular System
-
-The modular concept is the easiest way to work with the software on Taurus. It allows to user to
-switch between different versions of installed programs and provides utilities for the dynamic
-modification of a user's environment. The information can be found [here]**todo link**.
-
-### Private project and user modules files
-
-[Private project module files]**todo link** allow you to load your group-wide installed software
-into your environment and to handle different versions. It allows creating your own software
-environment for the project. You can create a list of modules that will be loaded for every member
-of the team. It gives opportunity on unifying work of the team and defines the reproducibility of
-results. Private modules can be loaded like other modules with module load.
-
-[Private user module files]**todo link** allow you to load your own installed software into your
-environment. It works in the same manner as to project modules but for your private use.
-
-## Use of containers
-
-[Containerization]**todo link** encapsulating or packaging up software code and all its dependencies
-to run uniformly and consistently on any infrastructure. On Taurus [Singularity]**todo link** used
-as a standard container solution. Singularity enables users to have full control of their
-environment. This means that you don’t have to ask an HPC support to install anything for you - you
-can put it in a Singularity container and run! As opposed to Docker (the most famous container
-solution), Singularity is much more suited to being used in an HPC environment and more efficient in
-many cases. Docker containers can easily be used in Singularity. Information about the use of
-Singularity on Taurus can be found [here]**todo link**.
-
-In some cases using Singularity requires a Linux machine with root privileges (e.g. using the ml
-partition), the same architecture and a compatible kernel. For many reasons, users on Taurus cannot
-be granted root permissions. A solution is a Virtual Machine (VM) on the ml partition which allows
-users to gain root permissions in an isolated environment. There are two main options on how to work
-with VM on Taurus:
-
-  1. [VM tools]**todo link**. Automative algorithms for using virtual machines;
-  1. [Manual method]**todo link**. It required more operations but gives you more flexibility and reliability.
-
-Additional Information: Examples of the definition for the Singularity container ([here]**todo
-link**) and some hints ([here]**todo link**).
-
-Useful links: [Containers]**todo link**, [Custom EasyBuild Environment]**todo link**, [Virtual
-machine on Taurus]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/support.md b/doc.zih.tu-dresden.de/docs/support.md
deleted file mode 100644
index d85f71226115f277cef27bdb6841e276e85ec1d9..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/support.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# What if everything didn't help?
-
-## Create a Ticket: how do I do that?
-
-The best way to ask about the help is to create a ticket. In order to do that you have to write a
-message to the <a href="mailto:hpcsupport@zih.tu-dresden.de">hpcsupport@zih.tu-dresden.de</a> with a
-detailed description of your problem. If possible please add logs, used environment and write a
-minimal executable example for the purpose to recreate the error or issue.
-
-## Communication with HPC Support
-
-There is the HPC support team who is responsible for the support of HPC users and stable work of the
-cluster. You could find the [details]**todo link** in the right part of any page of the compendium.
-However, please, before the contact with the HPC support team check the documentation carefully
-(starting points: [main page]**todo link**, [HPC-DA]**todo link**), use a search and then create a
-ticket. The ticket is a preferred way to solve the issue, but in some terminable cases, you can call
-to ask for help.
-
-Useful link: [Further Documentation]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/support/news_archive.md b/doc.zih.tu-dresden.de/docs/support/news_archive.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/doc.zih.tu-dresden.de/docs/support/support.md b/doc.zih.tu-dresden.de/docs/support/support.md
new file mode 100644
index 0000000000000000000000000000000000000000..c2c9fbda8bbb70c1dddb82fb384b69a8201e6fb8
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/support/support.md
@@ -0,0 +1,31 @@
+# How to Ask for Support
+
+## Create a Ticket
+
+The best way to ask for help is to send a message to
+[hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de) with a
+detailed description of your problem.
+
+It should include:
+
+- Who is reporting? (login name)
+- Where have you seen the problem? (name of the HPC system and/or of the node)
+- When has the issue occurred? Maybe, when did it work last?
+- What exactly happened?
+
+If possible, include:
+
+- job ID,
+- batch script,
+- filesystem path,
+- loaded modules and environment,
+- output and error logs,
+- steps to reproduce the error.
+
+This email automatically opens a trouble ticket which will be tracked by the HPC team. Please
+always keep the ticket number in the subject of your replies so that our system can keep track
+of our communication.
+
+For a new request, please simply send a new email (without any ticket number).
+
+!!! hint "Please try to find an answer in this documentation first."
diff --git a/doc.zih.tu-dresden.de/docs/tests.md b/doc.zih.tu-dresden.de/docs/tests.md
deleted file mode 100644
index 7601eb3748d21ce8d414cdb24c7ebef9c0a68cd4..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/tests.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Tests
-
-Dies ist eine Seite zum Testen der Markdown-Syntax.
-
-```python
-import os
-
-def debug(mystring):
-  print("Debug: ", mystring)
-
-debug("Dies ist ein Syntax-Highligthing-Test")
-```
diff --git a/doc.zih.tu-dresden.de/hackathon.md b/doc.zih.tu-dresden.de/hackathon.md
index 4a49d2b68ede0134d9672d6b8513ceb8d0210060..d41781c45455139c62708c708cf42e05babc3b65 100644
--- a/doc.zih.tu-dresden.de/hackathon.md
+++ b/doc.zih.tu-dresden.de/hackathon.md
@@ -10,21 +10,21 @@ The goals for the hackathon are:
 
 ## twiki2md
 
-The script `twiki2md` converts twiki source files into markdown source files using pandoc. It outputs the
-markdown source files according to the old pages tree into subdirectories. The output and **starting
-point for transferring** old content into the new system can be found at branch `preview` within
-directory `twiki2md/root/`.
+The script `twiki2md` converts twiki source files into markdown source files using pandoc. It
+outputs the markdown source files according to the old pages tree into subdirectories. The
+output and **starting point for transferring** old content into the new system can be found
+at branch `preview` within directory `twiki2md/root/`.
 
 ## Steps
 
 ### Familiarize with New Wiki System
 
-* Make sure your are member of the [repository](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium).
+* Make sure you are a member of the [repository](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium).
   If not, ask Danny Rotscher for adding you.
 * Clone repository and checkout branch `preview`
 
 ```Shell Session
-~ git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git
+~ git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git
 ~ cd hpc-compendium
 ~ git checkout preview
 ```
@@ -38,23 +38,27 @@ directory `twiki2md/root/`.
 1. Grab a markdown source file from `twiki2md/root/` directory (a topic you are comfortable with)
 1. Find place in new structure according to
 [Typical Project Schedule](https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/TypicalProjectSchedule)
-  * Create new feature branch holding your work `~ git checkout -b <BRANCHNAME>`, whereas branch name can
-      be `<FILENAME>` for simplicity
+
+  * Create a new feature branch holding your work via `~ git checkout -b <BRANCHNAME>`, where the
+      branch name can be `<FILENAME>` for simplicity
   * Copy reviewed markdown source file to `docs/` directory via
     `~ git mv twiki2md/root/<FILENAME>.md doc.zih.tu-dresden.de/docs/<SUBDIR>/<FILENAME>.md`
   * Update navigation section in `mkdocs.yaml`
+
 1. Commit and push to feature branch via
+
 ```Shell Session
 ~ git commit docs/<SUBDIR>/<FILENAME>.md mkdocs.yaml -m "MESSAGE"
 ~ git push origin <BRANCHNAME>
 ```
+
 1. Run checks locally and fix the issues. Otherwise the pipeline will fail.
     * [Check links](README.md#check-links) (There might be broken links which can only be solved
         with ongoing transfer of content.)
     * [Check pages structure](README.md#check-pages-structure)
     * [Markdown Linter](README.md#markdown-linter)
 1. Create
-  [merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/merge_requests)
+  [merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/merge_requests)
    against `preview` branch
 
 ### Review Content
diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index 4d4626c3a6a779265052d32a4ceb6b2847ecbc99..ae931caec53b198e49a6a9837431bbc579c6288d 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -9,16 +9,16 @@ nav:
   - Access to ZIH Systems:
     - Overview: access/overview.md
     - Connecting with SSH: access/ssh_login.md
-    - Key Fingerprints: access/key_fingerprints.md  
     - Desktop Visualization: access/desktop_cloud_visualization.md
     - Graphical Applications with WebVNC: access/graphical_applications_with_webvnc.md
-    - Security Restrictions: access/security_restrictions.md
     - JupyterHub:
       - JupyterHub: access/jupyterhub.md
       - JupyterHub for Teaching: access/jupyterhub_for_teaching.md
+    - Key Fingerprints: access/key_fingerprints.md
+    - Security Restrictions: access/security_restrictions.md
   - Transfer of Data:
     - Overview: data_transfer/overview.md
-    - Data Mover: data_transfer/data_mover.md
+    - Datamover: data_transfer/datamover.md
     - Export Nodes: data_transfer/export_nodes.md
   - Environment and Software:
     - Overview: software/overview.md
@@ -26,11 +26,12 @@ nav:
       - Modules: software/modules.md
       - Runtime Environment: software/runtime_environment.md
       - Custom EasyBuild Modules: software/custom_easy_build_environment.md
+      - Python Virtual Environments: software/python_virtual_environments.md
     - Containers:
       - Singularity: software/containers.md
-      - Singularity Recicpe Hints: software/singularity_recipe_hints.md
-      - Singularity Example Definitions: software/singularity_example_definitions.md
-      - VM tools: software/vm_tools.md
+      - Singularity Recipes and Hints: software/singularity_recipe_hints.md
+      - Virtual Machines Tools: software/virtual_machines_tools.md
+      - Virtual Machines: software/virtual_machines.md
     - Applications:
       - Licenses: software/licenses.md
       - Computational Fluid Dynamics (CFD): software/cfd.md
@@ -38,23 +39,21 @@ nav:
       - Nanoscale Simulations: software/nanoscale_simulations.md
       - FEM Software: software/fem_software.md
     - Visualization: software/visualization.md
-    - HPC-DA:
-      - Get started with HPC-DA: software/get_started_with_hpcda.md
-      - Machine Learning: software/machine_learning.md
-      - Deep Learning: software/deep_learning.md
+    - Data Analytics:
+      - Overview: software/data_analytics.md
       - Data Analytics with R: software/data_analytics_with_r.md
-      - Data Analytics with Python: software/python.md
-      - TensorFlow: 
-        - TensorFlow Overview: software/tensorflow.md
-        - TensorFlow in Container: software/tensorflow_container_on_hpcda.md
-        - TensorFlow in JupyterHub: software/tensorflow_on_jupyter_notebook.md 
-      - Keras: software/keras.md
-      - Dask: software/dask.md
-      - Power AI: software/power_ai.md
+      - Data Analytics with RStudio: software/data_analytics_with_rstudio.md
+      - Data Analytics with Python: software/data_analytics_with_python.md
+      - Apache Spark: software/big_data_frameworks_spark.md
+    - Machine Learning:
+      - Overview: software/machine_learning.md
+      - TensorFlow: software/tensorflow.md
+      - TensorBoard: software/tensorboard.md
       - PyTorch: software/pytorch.md
-      - Apache Spark, Apache Flink, Apache Hadoop: software/big_data_frameworks.md
+      - Distributed Training: software/distributed_training.md
+      - Hyperparameter Optimization (OmniOpt): software/hyperparameter_optimization.md
+      - PowerAI: software/power_ai.md
     - SCS5 Migration Hints: software/scs5_software.md
-    - Virtual Machines: software/virtual_machines.md
     - Virtual Desktops: software/virtual_desktops.md
     - Software Development and Tools:
       - Overview: software/software_development_overview.md
@@ -62,10 +61,12 @@ nav:
       - Compilers: software/compilers.md
       - GPU Programming: software/gpu_programming.md
       - Libraries: software/math_libraries.md
-      - MPI Error Detection: software/mpi_usage_error_detection.md
       - Debugging: software/debuggers.md
+      - MPI Error Detection: software/mpi_usage_error_detection.md
+      - Score-P: software/scorep.md
+      - PAPI Library: software/papi.md
       - Pika: software/pika.md
-      - Perf Tools: software/perf_tools.md 
+      - Perf Tools: software/perf_tools.md
-      - Score-P: software/scorep.md
       - Vampir: software/vampir.md
   - Data Life Cycle Management:
@@ -74,41 +75,33 @@ nav:
       - Overview: data_lifecycle/file_systems.md
       - Permanent File Systems: data_lifecycle/permanent.md
       - Lustre: data_lifecycle/lustre.md
-      - BeeGFS: data_lifecycle/bee_gfs.md
+      - BeeGFS: data_lifecycle/beegfs.md
+      - Warm Archive: data_lifecycle/warm_archive.md
       - Intermediate Archive: data_lifecycle/intermediate_archive.md
       - Quotas: data_lifecycle/quotas.md
     - Workspaces: data_lifecycle/workspaces.md
-    - HPC Storage Concept 2019: data_lifecycle/hpc_storage_concept2019.md
     - Preservation of Research Data: data_lifecycle/preservation_research_data.md
     - Structuring Experiments: data_lifecycle/experiments.md
-  - Jobs and Resources:
+  - HPC Resources and Jobs:
     - Overview: jobs_and_resources/overview.md
-    - Batch Systems: jobs_and_resources/batch_systems.md
-    - Hardware Resources:
-      - Hardware Taurus: jobs_and_resources/hardware_taurus.md
+    - HPC Resources:
+      - Overview: jobs_and_resources/hardware_overview.md
       - AMD Rome Nodes: jobs_and_resources/rome_nodes.md
       - IBM Power9 Nodes: jobs_and_resources/power9.md
       - NVMe Storage: jobs_and_resources/nvme_storage.md
       - Alpha Centauri: jobs_and_resources/alpha_centauri.md
       - HPE Superdome Flex: jobs_and_resources/sd_flex.md
-    - Checkpoint/Restart: jobs_and_resources/checkpoint_restart.md
-    - Overview2: jobs_and_resources/index.md
-    - Taurus: jobs_and_resources/system_taurus.md
-    - Slurm Examples: jobs_and_resources/slurm_examples.md
-    - Slurm: jobs_and_resources/slurm.md
-    - HPC-DA: jobs_and_resources/hpcda.md
-    - Binding And Distribution Of Tasks: jobs_and_resources/binding_and_distribution_of_tasks.md
-      #    - Queue Policy: jobs/policy.md
-      #    - Examples: jobs/examples/index.md
-      #    - Affinity: jobs/affinity/index.md
-      #    - Interactive: jobs/interactive.md
-      #    - Best Practices: jobs/best-practices.md
-      #    - Reservations: jobs/reservations.md
-      #    - Monitoring: jobs/monitoring.md
-      #    - FAQs: jobs/jobs-faq.md
-  #- Tests: tests.md
-  - Support: support.md
-  - Archive:
+    - Running Jobs:
+      - Batch System Slurm: jobs_and_resources/slurm.md
+      - Job Examples: jobs_and_resources/slurm_examples.md
+      - Partitions and Limits: jobs_and_resources/partitions_and_limits.md
+      - Checkpoint/Restart: jobs_and_resources/checkpoint_restart.md
+      - Job Profiling: jobs_and_resources/slurm_profiling.md
+      - Binding And Distribution Of Tasks: jobs_and_resources/binding_and_distribution_of_tasks.md
+  - Support:
+    - How to Ask for Support: support/support.md
+    - News Archive: support/news_archive.md
+  - Archive of the Old Wiki:
     - Overview: archive/overview.md
     - Bio Informatics: archive/bioinformatics.md
     - CXFS End of Support: archive/cxfs_end_of_support.md
@@ -116,6 +109,7 @@ nav:
     - No IB Jobs: archive/no_ib_jobs.md
     - Phase2 Migration: archive/phase2_migration.md
     - Platform LSF: archive/platform_lsf.md
+    - BeeGFS on Demand: archive/beegfs_on_demand.md
     - Switched-Off Systems:
       - Overview: archive/systems_switched_off.md
       - From Deimos to Atlas: archive/migrate_to_atlas.md
@@ -128,39 +122,56 @@ nav:
       - System Venus: archive/system_venus.md
       - KNL Nodes: archive/knl_nodes.md
     - UNICORE Rest API: archive/unicore_rest_api.md
-    - Vampir Trace: archive/vampir_trace.md
+    - VampirTrace: archive/vampirtrace.md
     - Windows Batchjobs: archive/windows_batch.md
-
-
+  - Contribute:
+    - How-To: contrib/howto_contribute.md
+    - Content Rules: contrib/content_rules.md
+    - Work Locally Using Containers: contrib/contribute_container.md
+
 # Project Information
+
 site_name: ZIH HPC Compendium
 site_description: ZIH HPC Compendium
 site_author: ZIH Team
 site_dir: public
-site_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium
+site_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium
+
 # uncomment next 3 lines if link to repo should not be displayed in the navbar
+
 repo_name: GitLab hpc-compendium
-repo_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium
-edit_uri: blob/master/docs/
+repo_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium
+edit_uri: blob/main/doc.zih.tu-dresden.de/docs/
 
 # Configuration
-#strict: true
+
+# strict: true
 
 theme:
+
   # basetheme
+
   name: material
+
   # disable fonts being loaded from google fonts
+
   font: false
   language: en
+
   # dir containing all customizations
+
   custom_dir: tud_theme
   favicon: assets/images/Logo_klein.png
+
   # logo in header and footer
+
   logo: assets/images/TUD_Logo_weiss_57.png
   second_logo: assets/images/zih_weiss.png
 
 # extends base css
+
 extra_css:
+
   - stylesheets/extra.css
 
 markdown_extensions:
@@ -177,7 +188,9 @@ extra:
   homepage: https://tu-dresden.de
   zih_homepage: https://tu-dresden.de/zih
   hpcsupport_mail: hpcsupport@zih.tu-dresden.de
+
   # links in footer
+
   footer:
     - link: /legal_notice
       name: "Legal Notice / Impressum"
diff --git a/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css b/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css
index 0fb1a3d46afe20b02e3fd9a03daf5b716819ad61..a3a992501bff7f7b153a1beb0779e7f3e576f9e6 100644
--- a/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css
+++ b/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css
@@ -28,19 +28,24 @@
 .md-typeset h5 {
     font-family: 'Open Sans Semibold';
     line-height: 130%;
+    margin: 0.2em;
 }
 
 .md-typeset h1 {
     font-family: 'Open Sans Regular';
-    font-size: 1.6rem;   
+    font-size: 1.6rem;
+    margin-bottom: 0.5em;
 }
 
 .md-typeset h2 {
-    font-size: 1.4rem;
+    font-size: 1.2rem;
+    margin: 0.5em;
+    border-bottom-style: solid;
+    border-bottom-width: 1px;
 }
 
 .md-typeset h3 {
-    font-size: 1.2rem;
+    font-size: 1.1rem;
 }
 
 .md-typeset h4 {
@@ -48,8 +53,8 @@
 }
 
 .md-typeset h5 {
-    font-size: 0.9rem;
-    line-height: 120%;
+    font-size: 0.8rem;
+    text-transform: initial;
 }
 
 strong {
@@ -161,6 +166,7 @@ hr.solid {
 
 p {
     padding: 0 0.6rem;
+    margin: 0.2em;
 }
 /* main */
 
diff --git a/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh b/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
index b6d586220052a2bf362aec3c4736c876e4901da6..456eb55e192634bf4e159ce0096c83076989f2fc 100755
--- a/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
+++ b/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
@@ -14,13 +14,18 @@ basedir=`dirname "$basedir"`
 # The pattern \<io\> should not be present in any file (case-insensitive match), except when it appears as ".io".
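+# Each rule is one tab-separated line: a flag column (i = case-insensitive match), the forbidden pattern, and optional exception patterns whose matches are tolerated.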
 ruleset="i	\<io\>	\.io
 s	\<SLURM\>
-i	file \+system
-i	\<taurus\>	taurus\.hrsk	/taurus
+i	file \+system	HDFS
+i	\<taurus\>	taurus\.hrsk	/taurus	/TAURUS
 i	\<hrskii\>
-i	hpc \+system
 i	hpc[ -]\+da\>
+i	\(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\)-\?\(interactive\)\?[^a-z]*partition
 i	work[ -]\+space"
 
+# Whitelisted files are skipped by this check.
+# Bash array of full paths relative to the repository root, separated by whitespace.
+whitelist=(doc.zih.tu-dresden.de/docs/contrib/content_rules.md)
+
 function grepExceptions () {
   if [ $# -gt 0 ]; then
     firstPattern=$1
@@ -37,6 +42,7 @@ function usage () {
   echo ""
   echo "Options:"
   echo "  -a     Search in all markdown files (default: git-changed files)" 
+  echo "  -f     Search in a specific markdown file" 
   echo "  -s     Silent mode"
   echo "  -h     Show help message"
 }
@@ -44,11 +50,16 @@
 # Options
 all_files=false
 silent=false
-while getopts ":ahs" option; do
+file=""
+while getopts ":ahsf:" option; do
  case $option in
    a)
      all_files=true
      ;;
+   f)
+     # getopts stores the option's argument in OPTARG
+     file=$OPTARG
+     ;;
    s)
      silent=true
      ;;
@@ -67,14 +78,21 @@ branch="origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME:-preview}"
 if [ $all_files = true ]; then
   echo "Search in all markdown files."
   files=$(git ls-tree --full-tree -r --name-only HEAD $basedir/docs/ | grep .md)
+elif [[ -n $file ]]; then
+  files=$file
 else
   echo "Search in git-changed files."
   files=`git diff --name-only "$(git merge-base HEAD "$branch")"`
 fi
 
+echo "... $files ..."
 cnt=0
 for f in $files; do
   if [ "$f" != doc.zih.tu-dresden.de/README.md -a "${f: -3}" == ".md" -a -f "$f" ]; then
+    if printf '%s\n' "${whitelist[@]}" | grep -xq "$f"; then
+      echo "Skip whitelisted file $f"
+      continue
+    fi
     echo "Check wording in file $f"
     while IFS=$'\t' read -r flags pattern exceptionPatterns; do
       while IFS=$'\t' read -r -a exceptionPatternsArray; do
diff --git a/doc.zih.tu-dresden.de/util/lint-changes.sh b/doc.zih.tu-dresden.de/util/lint-changes.sh
index ba277da7ae8e3ea367424153a8f116ba3e9d6d2c..05ee5784468bed8d49adbbad8c9389bd3823590b 100755
--- a/doc.zih.tu-dresden.de/util/lint-changes.sh
+++ b/doc.zih.tu-dresden.de/util/lint-changes.sh
@@ -7,13 +7,16 @@ if [ -n "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME" ]; then
     branch="origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
 fi
 
+configfile=$(dirname $0)/../.markdownlintrc
+echo "config: $configfile"
+
 any_fails=false
 
 files=$(git diff --name-only "$(git merge-base HEAD "$branch")")
 for f in $files; do
     if [ "${f: -3}" == ".md" ]; then
         echo "Linting $f"
-        if ! markdownlint "$f"; then
+        if ! markdownlint -c "$configfile" "$f"; then
             any_fails=true
         fi
     fi
diff --git a/doc.zih.tu-dresden.de/util/pre-commit b/doc.zih.tu-dresden.de/util/pre-commit
new file mode 100755
index 0000000000000000000000000000000000000000..043320f352b923a7e7be96c04de5914960285b65
--- /dev/null
+++ b/doc.zih.tu-dresden.de/util/pre-commit
@@ -0,0 +1,77 @@
+#!/bin/bash
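+# Git pre-commit hook for the hpc-compendium documentation.
+# It verifies that pages referenced in a staged mkdocs.yml exist, runs markdownlint
+# and the link check on every staged markdown file, and executes the spell check
+# and forbidden-words check, all inside the hpc-compendium Docker container.
+# To enable the hook, copy or link this file to .git/hooks/pre-commit.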
+exit_ok=yes
+files=$(git diff-index --cached --name-only HEAD)
+
+function testPath(){
+  path_to_test=doc.zih.tu-dresden.de/docs/$1
+  test -f "$path_to_test" || { echo "$path_to_test does not exist"; exit 1; }  # exit non-zero so the failure propagates through xargs
+}
+
+if ! docker image inspect hpc-compendium:latest > /dev/null 2>&1
+then
+  echo Container not built, building...
+  docker build -t hpc-compendium .
+fi
+
+export -f testPath
+
+for file in $files
+do
+  if [ $file == doc.zih.tu-dresden.de/mkdocs.yml ]
+  then
+    echo Testing $file
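+    # check that every markdown page referenced in the mkdocs.yml nav exists below doc.zih.tu-dresden.de/docs/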
+    sed -n '/^ *- /s#.*: \([A-Za-z_/]*.md\).*#\1#p' doc.zih.tu-dresden.de/mkdocs.yml | xargs -L1 -I {} bash -c "testPath '{}'"
+    if [ $? -ne 0 ]
+    then
+      exit_ok=no
+    fi
+  elif [[ $file =~ ^doc.zih.tu-dresden.de/(.*.md)$ ]]
+  then
+    filepattern=${BASH_REMATCH[1]}
+
+    #lint
+    echo "Checking linter..."
+    docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdownlint $filepattern
+    if [ $? -ne 0 ]
+    then
+      exit_ok=no
+    fi
+
+    #link-check
+    echo "Checking links..."
+    docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdown-link-check $filepattern
+    if [ $? -ne 0 ]
+    then
+      exit_ok=no
+    fi
+  fi
+done
+
+#spell-check
+echo "Spell-checking..."
+docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)",target=/docs,type=bind hpc-compendium ./doc.zih.tu-dresden.de/util/check-spelling.sh
+if [ $? -ne 0 ]
+then
+  exit_ok=no
+fi
+
+#forbidden words checking
+echo "Forbidden words checking..."
+docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)",target=/docs,type=bind hpc-compendium ./doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
+if [ $? -ne 0 ]
+then
+  exit_ok=no
+fi
+
+if [ $exit_ok == yes ]
+then
+  exit 0
+else
+  exit 1
+fi
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index d44626a96656f15dbdec9c7782c3a0efd082868c..70272c91c61cda5121359100710d6c7d541ee937 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -1,133 +1,301 @@
-personal_ws-1.1 en 1805
+personal_ws-1.1 en 203
+APIs
+AVX
+Abaqus
 Altix
+Amber
 Amdahl's
-analytics
-anonymized
-Anonymized
-BeeGFS
-benchmarking
 BLAS
-bsub
-ccNUMA
-citable
+BeeGFS
+CCM
+CLI
 CPU
+CPUID
 CPUs
+CSV
 CUDA
 CXFS
+CentOS
+Chemnitz
+DDP
 DDR
 DFG
+DMTCP
+DNS
+DataFrames
+DataParallel
+DistributedDataParallel
+DockerHub
+Dockerfile
+Dockerfiles
+EPYC
+ESSL
 EasyBuild
-engl
-english
-fastfs
+Espresso
 FFT
 FFTW
-filesystem
-Filesystem
+FMA
 Flink
 Fortran
+GBit
+GDDR
 GFLOPS
-gfortran
-GiB
-gnuplot
-Gnuplot
 GPU
 GPUs
-hadoop
-Haswell
+GROMACS
+Galilei
+Gauss
+Gaussian
+GiB
+GitHub
+GitLab
+GitLab's
+HBM
+HDF
 HDFS
-Horovod
+HDFView
 HPC
+HPE
 HPL
-hyperthreading
-icc
-icpc
-ifort
+Horovod
+Hostnames
+IPs
 ImageNet
 Infiniband
 Itanium
-jpg
 Jupyter
 JupyterHub
 JupyterLab
-Keras
 KNL
+Keras
+LAMMPS
 LAPACK
 LINPACK
+Linter
 LoadLeveler
-lsf
-LSF
-lustre
 MEGWARE
 MIMD
 MKL
+MNIST
+MathKernel
+MathWorks
+Mathematica
+MiB
+Miniconda
 Montecito
-mountpoint
-MPI
-mpicc
-mpiCC
-mpicxx
-mpif
-mpifort
-mpirun
-multicore
-multithreaded
-Neptun
+MultiThreading
+Multithreading
+NAMD
+NCCL
 NFS
-nbsp
+NGC
+NRINGS
 NUMA
 NUMAlink
+NVLINK
+NVMe
+NWChem
+Neptun
+NumPy
 Nutzungsbedingungen
+Nvidia
+OME
 OPARI
+OmniOpt
 OpenACC
 OpenBLAS
 OpenCL
+OpenGL
 OpenMP
-openmpi
 OpenMPI
+OpenSSH
 Opteron
 PAPI
-parallelization
-pdf
+PESSL
+PGI
+PMI
+PSOCK
+Pandarallel
 Perf
+PiB
 Pika
+PowerAI
+Pre
+Preload
+Pthreads
+Quantum
+README
+RHEL
+RSA
+RSS
+RStudio
+Rmpi
+Rsync
+Runtime
+SFTP
+SGEMM
+SGI
+SHA
+SHMEM
+SLES
+SMP
+SMT
+SSHFS
+STAR
+SUSE
+SXM
+Sandybridge
+Saxonid
+ScaDS
+ScaLAPACK
+Scalasca
+SciPy
+Scikit
+Slurm
+SubMathKernel
+Superdome
+TBB
+TCP
+TFLOPS
+TensorBoard
+TensorFlow
+Theano
+ToDo
+Trition
+VASP
+VMSize
+VMs
+VPN
+Vampir
+VampirTrace
+VampirTrace's
+VirtualGL
+WebVNC
+WinSCP
+Workdir
+XArray
+XGBoost
+XLC
+XLF
+Xeon
+Xming
+ZIH
+ZIH's
+analytics
+anonymized
+benchmarking
+broadwell
+bsub
+bullx
+ccNUMA
+centauri
+cgroups
+checkpointing
+citable
+conda
+css
+cuDNN
+dask
+dataframes
+datamover
+dockerized
+ecryptfs
+engl
+english
+env
+fastfs
+filesystem
+filesystems
+foreach
+gfortran
+gifferent
+glibc
+gnuplot
+hadoop
+haswell
+hostname
+html
+hyperparameter
+hyperparameters
+hyperthreading
+icc
+icpc
+ifort
+inode
+jobqueue
+jpg
+jss
+lapply
+linter
+localhost
+lsf
+lustre
+markdownlint
+matlab
+mkdocs
+mountpoint
+mpi
+mpiCC
+mpicc
+mpicxx
+mpif
+mpifort
+mpirun
+multicore
+multithreaded
+natively
+nbsp
+openmpi
+overfitting
+pandarallel
+parallelization
+parallelize
+parfor
+pdf
 pipelining
 png
+ppc
+pre
+preloaded
+preloading
+pymdownx
+queue
+randint
+reachability
+reproducibility
+requeueing
 rome
 romeo
-RSA
+runnable
 runtime
 salloc
-Saxonid
 sbatch
-ScaDS
-ScaLAPACK
-Scalasca
+scalable
 scancel
 scontrol
 scp
 scs
-SGEMM
-SGI
-SHA
-SHMEM
-SLES
-Slurm
-SMP
 squeue
 srun
 ssd
-SSD
 stderr
 stdout
-SUSE
-TBB
-TensorFlow
-TFLOPS
-Theano
+subdirectories
+subdirectory
 tmp
-Trition
+todo
+toolchain
+toolchains
+tracefile
+tracefiles
+transferability
+unencrypted
+uplink
 userspace
-Vampir
-Xeon
-ZIH
+vectorization
+venv
+virtualenv
+workspace
+workspaces
+yaml
+zih