diff --git a/README.md b/README.md
index 05825be788b1d0e0d6436454e6aa0849d28d93c3..d3482f3ae680798e81cdd2ea7814eeadb4abe57d 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ within the CI/CD pipeline help to ensure a high quality documentation.
 ## Reporting Issues
 
-Issues concerning this documentation can reported via the GitLab
+Issues concerning this documentation can be reported via the GitLab
-[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/issues).
+[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
 Please check for any already existing issue before submitting your issue in order to avoid duplicate
 issues.
 
diff --git a/contribute_container.md b/contribute_container.md
deleted file mode 100644
index 6f6d74389ed848d3e52d10f623be470a48bda998..0000000000000000000000000000000000000000
--- a/contribute_container.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# Contributing Using a Local Clone and a Docker Container
-
-see also: [https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/blob/preview/doc.zih.tu-dresden.de/README.md]
-
-## Prerequisites
-
-Assuming you understand in principle how to work with our git. Now you need:
-
-* a system with running Docker installation
-* all necessary access/execution rights
-* a local clone of the repository in the directory `./hpc-compendium`
-
-Remark: We have seen problems running the container an ecryptfs filesystem. So you might
-want to use `/tmp` as root directory.
-
-## Preparation
-
-Build the docker image. This might take a bit longer, but you have to
-run it only once in a while (when we have changed the Dockerfile).
-
-```Bash
-cd hpc-compendium
-docker build -t hpc-compendium . 
-```
-
-## Working with the Docker Container
-
-Here is a suggestion of a workflow which might be suitable for you.
-
-### Start the Local Web Server
-
-The command(s) to start the dockerized web server is this:
-
-```Bash
-docker run --name=hpc-compendium -p 8000:8000 --rm -it -w /docs \
-  -v /tmp/hpc-compendium/doc.zih.tu-dresden.de:/docs:z hpc-compendium bash \
-  -c 'mkdocs build  && mkdocs serve -a 0.0.0.0:8000'
-```
-
-To follow its progress let it run in a single shell (terminal window)
-and open another one for the other steps.
-
-You can view the documentation via
-[http://localhost:8000](http://localhost:8000) in your browser, now.
-
-You can now update the contents in you preferred editor.
-The running container automatically takes care of file changes and rebuilds the
-documentation.
-
-With the details described below, it will then be easy to follow the guidelines
-for local correctness checks before submitting your changes and requesting
-the merge.
-
-### Run the Proposed Checks Inside Container
-
-Remember to keep the local web server running in the other shell.
-
-First, change to the `hpc-compendium` directory and set the environment
-variable DC to save a lot of keystrokes :-)
-
-```Bash
-export DC='docker exec -it hpc-compendium bash -c '
-```
-
-and use it like this...
-
-#### Linter
-
-If you want to check whether the markdown files are formatted
-properly, use the following command:
-
-```Bash
-$DC 'markdownlint docs'
-```
-
-#### Link Checker
-
-To check a single file, e.g.
-`doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md`, use:
-
-```Bash
-$DC 'markdown-link-check docs/software/big_data_frameworks.md'
-```
-
-To check whether there are links that point to a wrong target, use
-(this may take a while and gives a lot of output because it runs over all files):
-
-```Bash
-$DC 'find docs -type f -name "*.md" | xargs -L1 markdown-link-check'
-```
-
-#### Spell Checker
-
-For spell-checking a single file, , e.g.
-`doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md`, use:
-
-```$DC './util/check-spelling.sh docs/software/big_data_frameworks.md'
-```
-
-For spell-checking all files, use:
-
-```Bash
-$DC ./util/check-spelling.sh
-```
-
-This outputs all words of all files that are unknown to the spell checker.
-To let the spell checker "know" a word, append it to
-`doc.zih.tu-dresden.de/wordlist.aspell`.
diff --git a/doc.zih.tu-dresden.de/README.md b/doc.zih.tu-dresden.de/README.md
index fe6487b3f1a181e1b9a2dcf4217c496a7bda2491..1829a5bc54c26ce37f61f27410e45e8901488183 100644
--- a/doc.zih.tu-dresden.de/README.md
+++ b/doc.zih.tu-dresden.de/README.md
@@ -9,7 +9,7 @@ long describing complex steps, contributing is quite easy - trust us.
 ## Contribute via Issue
 
 Users can contribute to the documentation via the
-[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/issues).
+[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
 For that, open an issue to report typos and missing documentation or request for more precise
 wording etc.  ZIH staff will get in touch with you to resolve the issue and improve the
 documentation.
@@ -120,14 +120,20 @@ cd /PATH/TO/hpc-compendium
 docker build -t hpc-compendium .
 ```
 
+To avoid a lot of retyping, use the following in your shell:
+
+```bash
+alias wiki="docker run --name=hpc-compendium --rm -it -w /docs --mount src=$PWD/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c"
+```
+
 If you want to see how it looks in your browser, you can use shell commands to serve
 the documentation:
 
 ```Bash
-docker run --name=hpc-compendium -p 8000:8000 --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c "mkdocs build --verbose && mkdocs serve -a 0.0.0.0:8000"
+wiki "mkdocs build --verbose && mkdocs serve -a 0.0.0.0:8000"
 ```
 
-You can view the documentation via [http://localhost:8000](http://localhost:8000) in your browser, now.
+You can now view the documentation at `http://localhost:8000` in your browser.
 
 If that does not work, check if you can get the URL for your browser's address
 bar from a different terminal window:
@@ -141,26 +147,26 @@ documentation.  If you want to check whether the markdown files are formatted
 properly, use the following command:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs/doc.zih.tu-dresden.de --mount src="$(pwd)",target=/docs,type=bind hpc-compendium markdownlint docs
+wiki 'markdownlint docs'
 ```
 
 To check whether there are links that point to a wrong target, use
 (this may take a while and gives a lot of output because it runs over all files):
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c "find docs -type f -name '*.md' | xargs -L1 markdown-link-check"
+wiki "find docs -type f -name '*.md' | xargs -L1 markdown-link-check"
 ```
 
-To check a single file, e. g. `doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md`, use:
+To check a single file, e.g. `doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md`, use:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdown-link-check docs/software/big_data_frameworks.md
+wiki 'markdown-link-check docs/software/big_data_frameworks_spark.md'
 ```
 
 For spell-checking a single file, use:
 
 ```Bash
-docker run --name=hpc-compendium --rm -it -w /docs --mount src="$(pwd)",target=/docs,type=bind hpc-compendium ./doc.zih.tu-dresden.de/util/check-spelling.sh <file>
+wiki 'util/check-spelling.sh <file>'
 ```
 
 For spell-checking all files, use:
@@ -194,7 +200,7 @@ locally on the documentation. At first, you should add a remote pointing to the
 documentation.
 
 ```Shell Session
-~ git remote add upstream-zih git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git
+~ git remote add upstream-zih git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git
 ```
 
 Now, you have two remotes, namely *origin* and *upstream-zih*. The remote *origin* points to your fork,
@@ -204,8 +210,8 @@ whereas *upstream-zih* points to the original documentation repository at GitLab
 $ git remote -v
 origin  git@gitlab.hrz.tu-chemnitz.de:LOGIN/hpc-compendium.git (fetch)
 origin  git@gitlab.hrz.tu-chemnitz.de:LOGIN/hpc-compendium.git (push)
-upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git (fetch)
-upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git (push)
+upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git (fetch)
+upstream-zih  git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git (push)
 ```
 
 Next, you should synchronize your `main` branch with the upstream.
@@ -237,7 +243,7 @@ new branch (a so-called feature branch) basing on the `main` branch and commit y
 
 The last command pushes the changes to your remote at branch `FEATUREBRANCH`. Now, it is time to
 incorporate the changes and improvements into the HPC Compendium. For this, create a
-[merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/merge_requests/new)
+[merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/merge_requests/new)
 to the `main` branch.
 
 ### Important Branches
@@ -248,8 +254,8 @@ There are two important branches in this repository:
   - Branch containing recent changes which will be soon merged to main branch (protected
     branch)
   - Served at [todo url](todo url) from TUD VPN
-- Main: Branch which is deployed at [doc.zih.tu-dresden.de](doc.zih.tu-dresden.de) holding the
-    current documentation (protected branch)
+- Main: Branch which is deployed at [https://doc.zih.tu-dresden.de](https://doc.zih.tu-dresden.de)
+    holding the current documentation (protected branch)
 
 If you are totally sure about your commit (e.g., fix a typo), it is only the following steps:
 
@@ -388,13 +394,29 @@ pika.md is not included in nav
 specific_software.md is not included in nav
 ```
 
+### Pre-commit Git Hook
+
+You can automatically run checks whenever you try to commit a change. In this case, failing checks
+prevent commits (unless you use option `--no-verify`). This can be accomplished by adding a
+pre-commit hook to your local clone of the repository. The following code snippet shows how to do
+that:
+
+```bash
+cp doc.zih.tu-dresden.de/util/pre-commit .git/hooks/
+```
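+
+If the hook is not executable after copying (this may depend on your system), you can mark it as
+such; a minimal sketch, assuming the default hooks path:
+
+```bash
+# make sure Git is allowed to execute the hook
+chmod +x .git/hooks/pre-commit
+```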
+
+!!! note
+    The pre-commit hook only works if you can use docker without `sudo`. If this is not already
+    the case, use the command `sudo adduser $USER docker` to enable docker commands without
+    `sudo` for the current user. Restart the docker daemon afterwards.
+
 ## Content Rules
 
 **Remark:** Avoid using tabs both in markdown files and in `mkdocs.yaml`. Type spaces instead.
 
 ### New Page and Pages Structure
 
-The pages structure is defined in the configuration file [mkdocs.yaml](doc.zih.tu-dresden.de/mkdocs.yml).
+The pages structure is defined in the configuration file [mkdocs.yaml](mkdocs.yml).
 
 ```Shell Session
 docs/
@@ -453,9 +475,11 @@ there is a list of conventions w.r.t. spelling and technical wording.
 * `I/O` not `IO`
 * `Slurm` not `SLURM`
 * `Filesystem` not `file system`
-* `ZIH system` and `ZIH systems` not `Taurus`, `HRSKII`, `our HPC systems` etc.
+* `ZIH system` and `ZIH systems` not `Taurus`, `HRSKII`, `our HPC systems`, etc.
 * `Workspace` not `work space`
 * avoid term `HPC-DA`
+* Partition names after the keyword *partition*: *partition `ml`* not *ML partition*, *ml
+  partition*, *`ml` partition*, *"ml" partition*, etc.
 
 ### Code Blocks and Command Prompts
 
diff --git a/doc.zih.tu-dresden.de/docs/access/jupyterhub.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
index d3cdc8f582c663a2b5d27dcd4f59a6c2e7dc659b..dcdd9363c8d406d7227b97abce91ad67298e9a67 100644
--- a/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
+++ b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
@@ -137,8 +137,8 @@ This message appears instantly if your batch system parameters are not valid.
 Please check those settings against the available hardware.
 Useful pages for valid batch system parameters:
 
-- [Slurm batch system (Taurus)](../jobs_and_resources/system_taurus.md#batch-system)
 - [General information how to use Slurm](../jobs_and_resources/slurm.md)
+- [Partitions and limits](../jobs_and_resources/partitions_and_limits.md)
 
 ### Error Message in JupyterLab
 
diff --git a/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md
index 970a11898a6f2e93110d8b4f211ae9df9d883eed..92ad16d1325173c384c7472658239baca3e26157 100644
--- a/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md
+++ b/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md
@@ -14,11 +14,10 @@ Please be aware of the following notes:
 - Scheduled downtimes are announced by email. Please plan your courses accordingly.
 - Access to HPC resources is handled through projects. See your course as a project. Projects need
   to be registered beforehand (more info on the page [Access](../application/overview.md)).
-- Don't forget to **TODO ANCHOR**(add your users)
-  (ProjectManagement#manage_project_members_40dis_45_47enable_41) (eg. students or tutors) to
-your project.
-- It might be a good idea to **TODO ANCHOR**(request a
-  reservation)(Slurm#Reservations) of part of the compute resources for your project/course to
+- Don't forget to [add your users](../application/project_management.md#manage-project-members-dis-enable)
+  (e.g. students or tutors) to your project.
+- It might be a good idea to [request a reservation](../jobs_and_resources/overview.md#exclusive-reservation-of-hardware)
+  of part of the compute resources for your project/course to
   avoid unnecessary waiting times in the batch system queue.
 
 ## Clone a Repository With a Link
diff --git a/doc.zih.tu-dresden.de/docs/access/security_restrictions.md b/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
index 25f6270410c4e35cee150019298fac6dd33cd01e..bcdc0f578c8e1c7674d5eb42395870636359729b 100644
--- a/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
+++ b/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
@@ -1,27 +1,27 @@
-# Security Restrictions on Taurus
+# Security Restrictions
 
-As a result of the security incident the German HPC sites in Gau Alliance are now adjusting their
-measurements to prevent infection and spreading of the malware.
+As a result of a security incident, the German HPC sites in the Gauß Alliance have adjusted their
+measures to prevent infection and the spreading of malware.
 
-The most important items for HPC systems at ZIH are:
+The most important items for ZIH systems are:
 
-- All users (who haven't done so recently) have to
+* All users (who haven't done so recently) have to
   [change their ZIH password](https://selfservice.zih.tu-dresden.de/l/index.php/pswd/change_zih_password).
-  **Login to Taurus is denied with an old password.**
-- All old (private and public) keys have been moved away.
-- All public ssh keys for Taurus have to
-  - be re-generated using only the ED25519 algorithm (`ssh-keygen -t ed25519`)
-  - **passphrase for the private key must not be empty**
-- Ideally, there should be no private key on Taurus except for local use.
-- Keys to other systems must be passphrase-protected!
-- **ssh to Taurus** is only possible from inside TU Dresden Campus
-  (login\[1,2\].zih.tu-dresden.de will be blacklisted). Users from outside can use VPN (see
+    * **Login to ZIH systems is denied with an old password.**
+* All old (private and public) keys have been moved away.
+* All public ssh keys for ZIH systems have to
+    * be re-generated using only the ED25519 algorithm (`ssh-keygen -t ed25519`), see the
+      sketch after this list
+    * **passphrase for the private key must not be empty**
+* Ideally, there should be no private key on ZIH systems except for local use.
+* Keys to other systems must be passphrase-protected!
+* **ssh to ZIH systems** is only possible from inside TU Dresden campus
+  (`login[1,2].zih.tu-dresden.de` will be blacklisted). Users from outside can use VPN (see
   [here](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn)).
-- **ssh from Taurus** is only possible inside TU Dresden Campus.
-  (Direct ssh access to other computing centers was the spreading vector of the recent incident.)
+* **ssh from ZIH systems** is only possible inside TU Dresden campus.
+  (Direct SSH access to other computing centers was the spreading vector of the recent incident.)
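+
+A minimal sketch of re-generating a key pair following these rules (the filename is only an
+example; choose a non-empty passphrase when prompted):
+
+```bash
+# create a new ED25519 key pair; never leave the passphrase empty
+ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_zih
+```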
 
-Data transfer is possible via the taurusexport nodes. We are working on a bandwidth-friendly
-solution.
+Data transfer is possible via the [export nodes](../data_transfer/export_nodes.md). We are working
+on a bandwidth-friendly solution.
 
-We understand that all this will change convenient workflows. If the measurements would render your
-work on Taurus completely impossible, please contact the HPC support.
+We understand that all this will change convenient workflows. If these measures render your work
+on ZIH systems completely impossible, please [contact the HPC support](../support/support.md).
diff --git a/Compendium_attachments/ProjectManagement/add_member.png b/doc.zih.tu-dresden.de/docs/application/misc/add_member.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/add_member.png
rename to doc.zih.tu-dresden.de/docs/application/misc/add_member.png
diff --git a/Compendium_attachments/ProjectManagement/external_login.png b/doc.zih.tu-dresden.de/docs/application/misc/external_login.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/external_login.png
rename to doc.zih.tu-dresden.de/docs/application/misc/external_login.png
diff --git a/Compendium_attachments/ProjectManagement/members.png b/doc.zih.tu-dresden.de/docs/application/misc/members.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/members.png
rename to doc.zih.tu-dresden.de/docs/application/misc/members.png
diff --git a/Compendium_attachments/ProjectManagement/overview.png b/doc.zih.tu-dresden.de/docs/application/misc/overview.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/overview.png
rename to doc.zih.tu-dresden.de/docs/application/misc/overview.png
diff --git a/Compendium_attachments/ProjectManagement/password.png b/doc.zih.tu-dresden.de/docs/application/misc/password.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/password.png
rename to doc.zih.tu-dresden.de/docs/application/misc/password.png
diff --git a/Compendium_attachments/ProjectManagement/project_details.png b/doc.zih.tu-dresden.de/docs/application/misc/project_details.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/project_details.png
rename to doc.zih.tu-dresden.de/docs/application/misc/project_details.png
diff --git a/Compendium_attachments/ProjectRequestForm/request_step1_b.png b/doc.zih.tu-dresden.de/docs/application/misc/request_step1_b.png
similarity index 100%
rename from Compendium_attachments/ProjectRequestForm/request_step1_b.png
rename to doc.zih.tu-dresden.de/docs/application/misc/request_step1_b.png
diff --git a/Compendium_attachments/ProjectRequestForm/request_step2_details.png b/doc.zih.tu-dresden.de/docs/application/misc/request_step2_details.png
similarity index 100%
rename from Compendium_attachments/ProjectRequestForm/request_step2_details.png
rename to doc.zih.tu-dresden.de/docs/application/misc/request_step2_details.png
diff --git a/Compendium_attachments/ProjectRequestForm/request_step3_machines.png b/doc.zih.tu-dresden.de/docs/application/misc/request_step3_machines.png
similarity index 100%
rename from Compendium_attachments/ProjectRequestForm/request_step3_machines.png
rename to doc.zih.tu-dresden.de/docs/application/misc/request_step3_machines.png
diff --git a/Compendium_attachments/ProjectRequestForm/request_step4_software.png b/doc.zih.tu-dresden.de/docs/application/misc/request_step4_software.png
similarity index 100%
rename from Compendium_attachments/ProjectRequestForm/request_step4_software.png
rename to doc.zih.tu-dresden.de/docs/application/misc/request_step4_software.png
diff --git a/Compendium_attachments/ProjectRequestForm/request_step5_description.png b/doc.zih.tu-dresden.de/docs/application/misc/request_step5_description.png
similarity index 100%
rename from Compendium_attachments/ProjectRequestForm/request_step5_description.png
rename to doc.zih.tu-dresden.de/docs/application/misc/request_step5_description.png
diff --git a/Compendium_attachments/ProjectRequestForm/request_step6.png b/doc.zih.tu-dresden.de/docs/application/misc/request_step6.png
similarity index 100%
rename from Compendium_attachments/ProjectRequestForm/request_step6.png
rename to doc.zih.tu-dresden.de/docs/application/misc/request_step6.png
diff --git a/Compendium_attachments/ProjectManagement/stats.png b/doc.zih.tu-dresden.de/docs/application/misc/stats.png
similarity index 100%
rename from Compendium_attachments/ProjectManagement/stats.png
rename to doc.zih.tu-dresden.de/docs/application/misc/stats.png
diff --git a/doc.zih.tu-dresden.de/docs/application/project_management.md b/doc.zih.tu-dresden.de/docs/application/project_management.md
index a69ef756d4b74fc35e7c5be014fc2b060ea0af5e..79e457cb2590d4109a160a8296b676c3384490d5 100644
--- a/doc.zih.tu-dresden.de/docs/application/project_management.md
+++ b/doc.zih.tu-dresden.de/docs/application/project_management.md
@@ -1,113 +1,104 @@
-# Project management
+# Project Management
 
-The HPC project leader has overall responsibility for the project and
-for all activities within his project on ZIH's HPC systems. In
-particular he shall:
+The HPC project leader has overall responsibility for the project and for all activities within the
+corresponding project on ZIH systems. In particular the project leader shall:
 
--   add and remove users from the project,
--   update contact details of th eproject members,
--   monitor the resources his project,
--   inspect and store data of retiring users.
+* add and remove users from the project,
+* update contact details of the project members,
+* monitor the resources of the project,
+* inspect and store data of retiring users.
 
-For this he can appoint a *project administrator* with an HPC account to
-manage technical details.
+The project leader can appoint a *project administrator* with an HPC account to manage these
+technical details.
 
-The front-end to the HPC project database enables the project leader and
-the project administrator to
+The front-end to the HPC project database enables the project leader and the project administrator
+to
 
--   add and remove users from the project,
--   define a technical administrator,
--   view statistics (resource consumption),
--   file a new HPC proposal,
--   file results of the HPC project.
+* add and remove users from the project,
+* define a technical administrator,
+* view statistics (resource consumption),
+* file a new HPC proposal,
+* file results of the HPC project.
 
 ## Access
 
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="password" width="100">%ATTACHURLPATH%/external_login.png</span>
-
+![Login Screen>](misc/external_login.png "Login Screen"){loading=lazy width=300 style="float:right"}
 [Entry point to the project management system](https://hpcprojekte.zih.tu-dresden.de/managers)
-
-The project leaders of an ongoing project and their accredited admins
-are allowed to login to the system. In general each of these persons
-should possess a ZIH login at the Technical University of Dresden, with
-which it is possible to log on the homepage. In some cases, it may
-happen that a project leader of a foreign organization do not have a ZIH
-login. For this purpose, it is possible to set a local password:
+The project leaders of an ongoing project and their accredited admins are allowed to log in to the
+system. In general, each of these persons should possess a ZIH login at the Technical University
+of Dresden, with which it is possible to log in on the homepage. In some cases, it may happen that
+a project leader of a foreign organization does not have a ZIH login. For this purpose, it is
+possible to set a local password:
-"[Passwort vergessen](https://hpcprojekte.zih.tu-dresden.de/managers/members/missingPassword)".
+"[Missing Password](https://hpcprojekte.zih.tu-dresden.de/managers/members/missingPassword)".
 
-<span class="twiki-macro IMAGE" type="frame" align="right" caption="password reset"
-width="100">%ATTACHURLPATH%/password.png</span>
+&nbsp;
+{: style="clear:right;"}
 
-On the 'Passwort vergessen' page, it is possible to reset the
-passwords of a 'non-ZIH-login'. For this you write your login, which
-usually corresponds to your email address, in the field and click on
-'zurcksetzen'. Within 10 minutes the system sends a signed e-mail from
-<hpcprojekte@zih.tu-dresden.de> to the registered e-mail address. this
-e-mail contains a link to reset the password.
+![Password Reset>](misc/password.png "Password Reset"){loading=lazy width=300 style="float:right"}
+On the 'Missing Password' page, it is possible to reset the password of a 'non-ZIH-login'. For
+this, enter your login, which usually corresponds to your email address, in the field and click
+on 'reset'. Within 10 minutes, the system sends a signed e-mail from
+<hpcprojekte@zih.tu-dresden.de> to the registered e-mail address. This e-mail contains a link to
+reset the password.
+
+&nbsp;
+{: style="clear:right;"}
 
 ## Projects
 
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="projects overview"
-width="100">%ATTACHURLPATH%/overview.png</span>
-
-\<div style="text-align: justify;"> After login you reach an overview
-that displays all available projects. In each of these projects are
-listed, you are either project leader or an assigned project
-administrator. From this list, you have the option to view the details
-of a project or make a following project request. The latter is only
-possible if a project has been approved and is active or was. In the
-upper right area you will find a red button to log out from the system.
-\</div> \<br style="clear: both;" /> \<br /> <span
-class="twiki-macro IMAGE" type="frame" align="right"
-caption="project details"
-width="100">%ATTACHURLPATH%/project_details.png</span> \<div
-style="text-align: justify;"> The project details provide information
-about the requested and allocated resources. The other tabs show the
-employee and the statistics about the project. \</div> \<br
-style="clear: both;" />
-
-### manage project members (dis-/enable)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="project members" width="100">%ATTACHURLPATH%/members.png</span>
-\<div style="text-align: justify;"> The project members can be managed
-under the tab 'employee' in the project details. This page gives an
-overview of all ZIH logins that are a member of a project and its
-status. If a project member marked in green, it can work on all
-authorized HPC machines when the project has been approved. If an
-employee is marked in red, this can have several causes:
-
--   he was manually disabled by project managers, project administrator
-    or an employee of the ZIH
--   he was disabled by the system because his ZIH login expired
--   his confirmation of the current hpc-terms is missing
-
-You can specify a user as an administrator. This user can then access
-the project managment system. Next, you can disable individual project
-members. This disabling is only a "request of disabling" and has a time
-delay of 5 minutes. An user can add or reactivate himself, with his
-zih-login, to a project via the link on the end of the page. To prevent
-misuse this link is valid for 2 weeks and will then be renewed
-automatically. \</div> \<br style="clear: both;" />
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="add member" width="100">%ATTACHURLPATH%/add_member.png</span>
-
-\<div style="text-align: justify;"> The link leads to a page where you
-can sign in to a Project by accepting the term of use. You need also an
-valid ZIH-Login. After this step it can take 1-1,5 h to transfer the
-login to all cluster nodes. \</div> \<br style="clear: both;" />
-
-### statistic
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="project statistic" width="100">%ATTACHURLPATH%/stats.png</span>
-
-\<div style="text-align: justify;"> The statistic is located under the
-tab 'Statistik' in the project details. The data will updated once a day
-an shows used CPU-time and used disk space of an project. Following
-projects shows also the data of the predecessor. \</div>
-
-\<br style="clear: both;" />
+![Project Overview>](misc/overview.png "Project Overview"){loading=lazy width=300 style="float:right"}
+After login, you reach an overview that displays all available projects, i.e. all projects in
+which you are either project leader or an assigned project administrator. From this list, you
+have the option to view the details of a project or to submit a follow-up project request. The
+latter is only possible if a project has been approved and is or was active. In the upper right
+area, you will find a red button to log out from the system.
+
+&nbsp;
+{: style="clear:right;"}
+
+![Project Details>](misc/project_details.png "Project Details"){loading=lazy width=300 style="float:right"}
+The project details provide information about the requested and allocated resources. The other tabs
+show the project members and the statistics of the project.
+
+&nbsp;
+{: style="clear:right;"}
+
+### Manage Project Members (dis-/enable)
+
+![Project Members>](misc/members.png "Project Members"){loading=lazy width=300 style="float:right"}
+The project members can be managed under the tab 'employee' in the project details. This page gives
+an overview of all ZIH logins that are members of the project and their status. If a project member
+is marked in green, they can work on all authorized HPC machines once the project has been
+approved. If a member is marked in red, this can have several causes:
+
+* the member was manually disabled by the project leader, the project administrator,
+  or ZIH staff
+* the member was disabled by the system because their ZIH login expired
+* the confirmation of the current HPC terms of use is missing
+
+You can specify a user as an administrator. This user can then access the project management
+system. Next, you can disable individual project members. Disabling is only a "request for
+disabling" and takes effect with a time delay of 5 minutes. A user can add or reactivate
+themselves with their ZIH login via the link at the end of the page. To prevent misuse, this link
+is valid for 2 weeks and will then be renewed automatically.
+
+&nbsp;
+{: style="clear:right;"}
+
+![Add Member>](misc/add_member.png "Add Member"){loading=lazy width=300 style="float:right"}
+The link leads to a page where you can sign in to a project by accepting the terms of use. You
+also need a valid ZIH login. After this step, it can take 1 to 1.5 hours to transfer the login to
+all cluster nodes.
+
+&nbsp;
+{: style="clear:right;"}
+
+### Statistics
+
+![Project Statistic>](misc/stats.png "Project Statistic"){loading=lazy width=300 style="float:right"}
+The statistics are located under the tab 'Statistic' in the project details. The data is updated
+once a day and shows the used CPU time and used disk space of the project. Follow-up projects also
+show the data of their predecessor.
+
+&nbsp;
+{: style="clear:right;"}
diff --git a/doc.zih.tu-dresden.de/docs/application/project_request_form.md b/doc.zih.tu-dresden.de/docs/application/project_request_form.md
index 7a50b2274b2167e5d2efd89c7a4b1725074e8990..b5b9e348a94c4178d382e5ca27d67047c06f1481 100644
--- a/doc.zih.tu-dresden.de/docs/application/project_request_form.md
+++ b/doc.zih.tu-dresden.de/docs/application/project_request_form.md
@@ -1,78 +1,82 @@
 # Project Request Form
 
-## first step (requester)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="picture 2: personal information" width="170" zoom="on
-">%ATTACHURL%/request_step1_b.png</span> <span class="twiki-macro IMAGE"
-type="frame" align="right" caption="picture 1: login screen" width="170"
-zoom="on
-">%ATTACHURL%/request_step1_b.png</span>
+## First Step: Requester
 
+![picture 1: Login Screen >](misc/request_step1_b.png "Login Screen"){loading=lazy width=300 style="float:right"}
 The first step is asking for the personal information of the requester.
-**That's you**, not the leader of this project! \<br />If you have an
-ZIH-Login, you can use it \<sup>\[Pic 1\]\</sup>. If not, you have to
-fill in the whole information \<sup>\[Pic.:2\]\</sup>. <span
-class="twiki-macro IMAGE">clear</span>
-
-## second step (project details)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="picture 3: project details" width="170" zoom="on
-">%ATTACHURL%/request_step2_details.png</span> This Step is asking for
-general project Details.\<br />Any project have:
-
--   a title, at least 20 characters long
--   a valid duration
-    -   Projects starts at the first of a month and ends on the last day
-        of a month. So you are not able to send on the second of a month
-        a project request which start in this month.
-    -   The approval is for a maximum of one year. Be careful: a
-        duration from "May, 2013" till "May 2014" has 13 month.
--   a selected science, according to the DFG:
-    <http://www.dfg.de/dfg_profil/gremien/fachkollegien/faecher/index.jsp>
--   a sponsorship
--   a kind of request
--   a project leader/manager
-    -   The leader of this project should hold a professorship
-        (university) or is the head of the research group.
-    -   If you are this Person, leave this fields free.
-
-<span class="twiki-macro IMAGE">clear</span>
-
-## third step (hardware)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="picture 4: hardware" width="170" zoom="on
-">%ATTACHURL%/request_step3_machines.png</span> This step inquire the
-required hardware. You can find the specifications [here]**todo fix link**
-\<br />For your guidance:
-
--   gpu => taurus
--   many main memory => venus
--   other machines => you know it and don't need this guidance
-
-<span class="twiki-macro IMAGE">clear</span>
-
-## fourth step (software)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="picture 5: software" width="170" zoom="on
-">%ATTACHURL%/request_step4_software.png</span> Any information you will
-give us in this step, helps us to make a rough estimate, if you are able
-to realize your project. For Example: you need matlab. Matlab is only
-available on Taurus. <span class="twiki-macro IMAGE">clear</span>
-
-## fifth step (project description)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="picture 6: project description" width="170" zoom="on
-">%ATTACHURL%/request_step5_description.png</span> <span
-class="twiki-macro IMAGE">clear</span>
-
-## sixth step (summary)
-
-<span class="twiki-macro IMAGE" type="frame" align="right"
-caption="picture 8: summary" width="170" zoom="on
-">%ATTACHURL%/request_step6.png</span> <span
-class="twiki-macro IMAGE">clear</span>
+**That's you**, not the leader of this project!
+If you have a ZIH login, you can use it.
+If not, you have to fill in all the information.
+
+&nbsp;
+{: style="clear:right;"}
+
+## Second Step: Project Details
+
+![picture 3: Project Details >][1]{loading=lazy width=300 style="float:right"}
+This step asks for general project details.
+
+Every project has:
+
+* a title, at least 20 characters long
+* a valid duration
+    * Projects start on the first day of a month and end on the last day of a month. So you cannot
+      submit a project request on the second of a month that starts in the same month.
+    * The approval is for a maximum of one year. Be careful: a duration from "May, 2013" till
+      "May 2014" has 13 months.
+* a selected scientific discipline, according to the DFG:
+  <http://www.dfg.de/dfg_profil/gremien/fachkollegien/faecher/index.jsp>
+* a sponsorship
+* a kind of request
+* a project leader/manager
+    * The leader of this project should hold a professorship (university) or be the head of the
+      research group.
+    * If you are this person, leave these fields free.
+
+&nbsp;
+{: style="clear:right;"}
+
+## Third Step: Hardware
+
+![picture 4: Hardware >](misc/request_step3_machines.png "Hardware"){loading=lazy width=300 style="float:right"}
+This step inquires about the required hardware. You can find the specifications
+[here](../jobs_and_resources/hardware_overview.md).
+
+Please fill in the total computing time you expect in the project runtime. The compute time is
+given in CPU hours (CPU/h), which refers to the 'virtual' cores for nodes with hyperthreading.
+If you require GPUs, the time is given in GPU hours (GPU/h). Please add 6 CPU hours per GPU hour
+in your application, e.g., 1000 GPU hours imply an additional 6000 CPU hours.
+
+The project home is a shared storage space for your project. Here you can exchange data or install
+software for your project group in user space. The directory is not intended for active
+calculations; for this, the scratch filesystem is available.
+
+&nbsp;
+{: style="clear:right;"}
+
+## Fourth Step: Software
+
+![Picture 5: Software >](misc/request_step4_software.png "Software"){loading=lazy width=300 style="float:right"}
+Any information you give us in this step helps us to make a rough estimate of whether you will be
+able to realize your project. For example, some software requires its own licenses.
+
+&nbsp;
+{: style="clear:right;"}
+
+## Fifth Step: Project Description
+
+![picture 6: Project Description >][2]{loading=lazy width=300 style="float:right"} Please enter a
+short project description here. This is especially important for trial accounts and courses. For
+normal HPC projects a detailed project description is additionally required, which you can upload
+here.
+
+&nbsp;
+{: style="clear:right;"}
+
+## Sixth Step: Summary
+
+![picture 6: summary >](misc/request_step6.png "Summary"){loading=lazy width=300 style="float:right"}
+Check your entries and confirm the terms of use.
+
+&nbsp;
+{: style="clear:right;"}
+
+[1]: misc/request_step2_details.png "Project Details"
+[2]: misc/request_step5_description.png "Project Description"
diff --git a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
index ce009ace4bdcfc58fc20009eafbc6faf6c4fd553..8c2235f933fb41f5e590e880fdeb92ce6e950dfc 100644
--- a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
+++ b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
@@ -61,8 +61,8 @@ Check the status of the job with `squeue -u \<username>`.
 
 ## Mount BeeGFS Filesystem
 
-You can mount BeeGFS filesystem on the ML partition (PowerPC architecture) or on the Haswell
-[partition](../jobs_and_resources/system_taurus.md) (x86_64 architecture)
+You can mount BeeGFS filesystem on the partition ml (PowerPC architecture) or on the
+partition haswell (x86_64 architecture), more information about [partitions](../jobs_and_resources/partitions_and_limits.md).
 
 ### Mount BeeGFS Filesystem on the Partition `ml`
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md b/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
index 84e018b655f958ecb2d0a8d35982aad47a66adb2..2854bb2aeccb7d016e91dda4d9de6d717521bf46 100644
--- a/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
+++ b/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
@@ -1,44 +1,45 @@
-# Changes in the CXFS File System
+# Changes in the CXFS Filesystem
 
-With the ending support from SGI, the CXFS file system will be seperated
-from its tape library by the end of March, 2013.
+!!! warning
 
-This file system is currently mounted at
+    This page is outdated!
 
-- SGI Altix: `/fastfs/`
-- Atlas: `/hpc_fastfs/`
+With the ending support from SGI, the CXFS filesystem will be separated from its tape library by
+the end of March, 2013.
 
-We kindly ask our users to remove their large data from the file system.
+This filesystem is currently mounted at
+
+* SGI Altix: `/fastfs/`
+* Atlas: `/hpc_fastfs/`
+
+We kindly ask our users to remove their large data from the filesystem.
 Files worth keeping can be moved
 
-- to the new [Intermediate Archive](../data_lifecycle/intermediate_archive.md) (max storage
+* to the new [Intermediate Archive](../data_lifecycle/intermediate_archive.md) (max storage
     duration: 3 years) - see
     [MigrationHints](#migration-from-cxfs-to-the-intermediate-archive) below,
-- or to the [Log-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
+* or to the [Long-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
     metadata).
 
-To run the file system without support comes with the risk of losing
-data. So, please store away your results into the Intermediate Archive.
-`/fastfs` might on only be used for really temporary data, since we are
-not sure if we can fully guarantee the availability and the integrity of
-this file system, from then on.
+Running the filesystem without support comes with the risk of losing data. So, please store away
+your results into the Intermediate Archive. `/fastfs` may then only be used for really temporary
+data, since we are not sure if we can fully guarantee the availability and the integrity of this
+filesystem from then on.
 
-With the new HRSK-II system comes a large scratch file system with appr.
-800 TB disk space. It will be made available for all running HPC systems
-in due time.
+With the new HRSK-II system comes a large scratch filesystem with approximately 800 TB disk space.
+It will be made available for all running HPC systems in due time.
 
 ## Migration from CXFS to the Intermediate Archive
 
 Data worth keeping shall be moved by the users to the directory
 `archive_migration`, which can be found in your project's and your
-personal `/fastfs` directories. (`/fastfs/my_login/archive_migration`,
-`/fastfs/my_project/archive_migration` )
+personal `/fastfs` directories:
 
-\<u>Attention:\</u> Exclusively use the command `mv`. Do **not** use
-`cp` or `rsync`, for they will store a second version of your files in
-the system.
+* `/fastfs/my_login/archive_migration`
+* `/fastfs/my_project/archive_migration`
 
-Please finish this by the end of January. Starting on Feb/18/2013, we
-will step by step transfer these directories to the new hardware.
+**Attention:** Exclusively use the command `mv`. Do **not** use `cp` or `rsync`, for they will store
+a second version of your files in the system.
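+
+A minimal sketch of such a move (the directory name `results` is only an example):
+
+```bash
+# move, do not copy, into the migration directory of your personal /fastfs directory
+mv /fastfs/my_login/results /fastfs/my_login/archive_migration/
+```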
 
-- Set DENYTOPICVIEW = WikiGuest
+Please finish this by the end of January. Starting on Feb/18/2013, we will step by step transfer
+these directories to the new hardware.
diff --git a/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md b/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
index c6924fc4f716a0f7bdab1ab5b66bdcfe71019151..0d50ecc6c8ec26c30fccaf7882abee6f2070d55b 100644
--- a/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
+++ b/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
@@ -131,7 +131,7 @@ c.NotebookApp.allow_remote_access = True
 ```console
 #!/bin/bash -l
 #SBATCH --gres=gpu:1 # request GPU
-#SBATCH --partition=gpu2 # use GPU partition
+#SBATCH --partition=gpu2 # use partition gpu2
 #SBATCH --output=notebook_output.txt
 #SBATCH --nodes=1
 #SBATCH --ntasks=1
diff --git a/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md b/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
index 3cc59e7beb48a69a2b939542b14fef28cf4047fc..839028f327e069e912f59ffb688ccd1f54b58a40 100644
--- a/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
+++ b/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
@@ -1,18 +1,15 @@
 # UNICORE access via REST API
 
-**%RED%The UNICORE support has been abandoned and so this way of access
-is no longer available.%ENDCOLOR%**
+!!! warning
 
-Most of the UNICORE features are also available using its REST API.
-
-This API is documented here:
-
-<https://sourceforge.net/p/unicore/wiki/REST_API/>
+    This page is outdated! The UNICORE support has been abandoned and so this way of access is no
+    longer available.
 
-Some useful examples of job submission via REST are available at:
-
-<https://sourceforge.net/p/unicore/wiki/REST_API_Examples/>
-
-The base address for the Taurus system at the ZIH is:
+Most of the UNICORE features are also available using its REST API.
 
-unicore.zih.tu-dresden.de:8080/TAURUS/rest/core
+* This API is documented here:
+    * [https://sourceforge.net/p/unicore/wiki/REST_API/](https://sourceforge.net/p/unicore/wiki/REST_API/)
+* Some useful examples of job submission via REST are available at:
+    * [https://sourceforge.net/p/unicore/wiki/REST_API_Examples/](https://sourceforge.net/p/unicore/wiki/REST_API_Examples/)
+* The base address for the system at the ZIH is:
+    * `unicore.zih.tu-dresden.de:8080/TAURUS/rest/core`
diff --git a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
index f57a6b06e1ff7f912856f414b13d81fb183f1474..f5492e7f35ff26e425bff9c7b246f7c0d4a29fb0 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
-We follow this rules regarding prompts:
+We follow these rules regarding prompts:
 | `haswell` partition    | `marie@haswell$` |
 | `ml` partition         | `marie@ml$`      |
 | `alpha` partition      | `marie@alpha$`   |
-| `alpha` partition      | `marie@alpha$`   |
 | `romeo` partition      | `marie@romeo$`   |
 | `julia` partition      | `marie@julia$`   |
 | Localhost              | `marie@local$`   |
diff --git a/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md b/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
index d050940d2afec1eb7325fa5e923a47797a83659c..d3b87d46d6f45af76665b49a74fb3ed7f580edcb 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/contribute_container.md
@@ -4,36 +4,42 @@
 
 Please follow this standard Git procedure for working with a local clone:
 
-1. Change to a local (unencrypted) filesystem. (We have seen problems running the container
-an ecryptfs filesystem. So you might
-want to use e.g. `/tmp` as root directory.)
-1. Get a clone of the Git repository: `git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git`
-1. Change to the root of the local clone: `cd hpc-compendium`
-1. Create a new feature branch for you to work in. Ideally name it like the file you want
-to modify. `git checkout -b <BRANCHNAME>`. (Navigation section can be found in `mkdocs.yaml`.)
-1. Add/correct the documentation with your preferred editor.
-1. Run the correctness checks until everything is fine. - Incorrect files will be rejected
+1. Change to a local (unencrypted) filesystem. (We have seen problems running the container on an
+ecryptfs filesystem. So you might want to use e.g. `/tmp` as the start directory.)
+1. Create a new directory, e.g. with `mkdir hpc-wiki`
+1. Change into the new directory, e.g. `cd hpc-wiki`
+1. Clone the Git repository:
+`git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git .` (don't forget the
+dot)
+1. Create a new feature branch for you to work in. Ideally, name it like the file you want to
+modify or the issue you want to work on, e.g.: `git checkout -b issue-174`. (If you are uncertain
+about the name of a file, please look into `mkdocs.yaml`.)
+1. Improve the documentation with your preferred editor, i.e. add new files and correct mistakes.
-automatically by our CI pipeline.
-1. Commit the changes with `git commit -m "<DESCRIPTION>" <FILE LIST>`. Include a description
-of the change and a list of all changed files.
-1. Push the local changes to the global feature branch with `git push origin <BRANCHNAME>`.
+1. Use `git add <FILE>` to select your improvements for the next commit.
+1. Commit the changes with `git commit -m "<DESCRIPTION>"`. The description should be a meaningful
+description of your changes. If you work on an issue, please also add "Closes #174" (for issue 174).
+1. Push the local changes to the GitLab server, e.g. with `git push origin issue-174`.
 1. As an output you get a link to create a merge request against the preview branch.
+1. When the merge request is created, a continuous integration (CI) pipeline automatically checks
+your contributions.
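+
+Taken together, the steps above might look like this (branch name, file, and issue number are only
+examples):
+
+```bash
+mkdir hpc-wiki && cd hpc-wiki
+git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git .
+git checkout -b issue-174
+# ... edit files with your preferred editor ...
+git add doc.zih.tu-dresden.de/docs/index.md
+git commit -m "Fix typo on landing page. Closes #174"
+git push origin issue-174
+```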
 
-You can find the details and command in the next section.
+You can find the details and commands to preview your changes and apply checks in the next section.
 
 ## Preparation
 
-Assuming you understand in principle how to work with our Git. Now you need:
+Assuming you have understood the Git workflow described above, you need the following to work
+locally:
 
-* a system with running Docker installation
+* a working Docker installation
 * all necessary access/execution rights
-* a local clone of the repository in the directory `./hpc-compendium`
+* a local clone of the repository in the directory `./hpc-wiki`
 
 Build the docker image. This might take a bit longer, but you have to
 run it only once in a while.
 
-```Bash
-cd hpc-compendium
+```bash
+cd hpc-wiki
 docker build -t hpc-compendium .
 ```
 
@@ -45,25 +51,23 @@ Here is a suggestion of a workflow which might be suitable for you.
 
 The command(s) to start the dockerized web server is this:
 
-```Bash
-docker run --name=hpc-compendium -p 8000:8000 --rm -it -w /docs \
-  -v /tmp/hpc-compendium/doc.zih.tu-dresden.de:/docs:z hpc-compendium bash \
-  -c 'mkdocs build  && mkdocs serve -a 0.0.0.0:8000'
+```bash
+docker run --name=hpc-compendium -p 8000:8000 --rm -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c "mkdocs build && mkdocs serve -a 0.0.0.0:8000"
 ```
 
-To follow its progress let it run in a single shell (terminal window)
-and open another one for the other steps.
+You can now view the documentation at `http://localhost:8000` in your browser.
 
-You can view the documentation via
-[http://localhost:8000](http://localhost:8000) in your browser, now.
+!!! note
+
+    You can keep the local web server running in this shell to always have the opportunity to see
+    the result of your changes in the browser. Simply open another terminal window for other
+    commands.
 
-You can now update the contents in you preferred editor.
-The running container automatically takes care of file changes and rebuilds the
-documentation.
+You can now update the contents in your preferred editor. The running container automatically takes
+care of file changes and rebuilds the documentation whenever you save a file.
 
-With the details described below, it will then be easy to follow the guidelines
-for local correctness checks before submitting your changes and requesting
-the merge.
+With the details described below, it will then be easy to follow the guidelines for local
+correctness checks before submitting your changes and requesting the merge.
 
 ### Run the Proposed Checks Inside Container
 
@@ -73,61 +77,56 @@ In our continuous integration (CI) pipeline, a merge request triggers the automa
 * correct spelling,
 * correct text format.
 
-If one of them fails the merge request will be rejected. To prevent this you can run these
+If one of them fails, the merge request will not be accepted. To prevent this, you can run these
 checks locally and adapt your files accordingly.
 
-!!! note
-
-    Remember to keep the local web server running in the other shell.
-
-First, change to the `hpc-compendium` directory and set the environment
-variable DC to save a lot of keystrokes :-)
+To avoid a lot of retyping, use the following in your shell:
 
-```Bash
-export DC='docker exec -it hpc-compendium bash -c '
+```bash
+alias wiki="docker run --name=hpc-compendium --rm -it -w /docs --mount src=$PWD/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium bash -c"
 ```
 
-and use it like this...
-
-#### Link Checker
-
-To check a single file, e.g.
-`doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md`, use:
+You are now ready to use the different checks:
 
-```Bash
-$DC 'markdown-link-check docs/software/big_data_frameworks.md'
-```
+#### Linter
 
-To check whether there are links that point to a wrong target, use
-(this may take a while and gives a lot of output because it runs over all files):
+If you want to check whether the markdown files are formatted properly, use the following command:
 
-```Bash
-$DC 'find docs -type f -name "*.md" | xargs -L1 markdown-link-check'
+```bash
+wiki 'markdownlint docs'
 ```
 
 #### Spell Checker
 
-For spell-checking a single file, , e.g.
+For spell-checking a single file, e.g.
-`doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md`, use:
+`doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md`, use:
 
-```Bash
-$DC './util/check-spelling.sh docs/software/big_data_frameworks.md'
+```bash
+wiki 'util/check-spelling.sh docs/software/big_data_frameworks_spark.md'
 ```
 
 For spell-checking all files, use:
 
-```Bash
-$DC ./util/check-spelling.sh
+```bash
+wiki 'find docs -type f -name "*.md" | xargs -L1 util/check-spelling.sh'
 ```
 
 This outputs all words of all files that are unknown to the spell checker.
 To let the spell checker "know" a word, append it to
 `doc.zih.tu-dresden.de/wordlist.aspell`.
 
-#### Linter
+#### Link Checker
 
-If you want to check whether the markdown files are formatted properly, use the following command:
+To check a single file, e.g.
+`doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md`, use:
+
+```bash
+wiki 'markdown-link-check docs/software/big_data_frameworks_spark.md'
+```
+
+To check whether there are links that point to a wrong target, use
+(this may take a while and gives a lot of output because it runs over all files):
 
-```Bash
-$DC 'markdownlint docs'
+```bash
+wiki 'find docs -type f -name "*.md" | xargs -L1 markdown-link-check'
 ```
diff --git a/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md b/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
index 6f16e925fe8008444a4e7046dd9737b839d266b4..31105a5208932ff49ee86d939ed8faa744dad854 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/howto_contribute.md
@@ -7,7 +7,7 @@
 ## Contribute via Issue
 
 Users can contribute to the documentation via the
-[GitLab issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/issues).
+[GitLab issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
 For that, open an issue to report typos and missing documentation or request for more precise
 wording etc.  ZIH staff will get in touch with you to resolve the issue and improve the
 documentation.
@@ -35,6 +35,6 @@ refer to the corresponding documentation for further information.
 ## Contribute Using Git Locally
 
 For experienced Git users, we provide a Docker container that includes all checks of the CI engine
-used in the backend. Using them should ensure that merge requests will not be blocked
+used in the back-end. Using them should ensure that merge requests will not be blocked
 due to automatic checking.
 For details, see [Work Locally Using Containers](contribute_container.md).
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
index 29399cf9f323337bacb34e76a5da8412d599119d..5c035e56d8a3fa647f9d847a08ed5be9ef903f93 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
@@ -47,42 +47,38 @@ stored in XML-format but free text is also possible. There are some meta-data st
 Below are some examples:
 
 - possible meta-data for a book would be:
-  - Title
-  - Author
-  - Publisher
-  - Publication
-  - year
-  - ISBN
+    - Title
+    - Author
+    - Publisher
+    - Publication
+    - year
+    - ISBN
 - possible meta-data for an electronically saved image would be:
-  - resolution of the image
-  - information about the colour depth of the picture
-  - file format (jpg or tiff or ...)
-  - file size how was this image created (digital camera, scanner, ...)
-  - description of what the image shows
-  - creation date of the picture
-  - name of the person who made the picture
+    - resolution of the image
+    - information about the colour depth of the picture
+    - file format (jpg or tiff or ...)
+    - file size how was this image created (digital camera, scanner, ...)
+    - description of what the image shows
+    - creation date of the picture
+    - name of the person who made the picture
 - meta-data for the result of a calculation/simulation could be:
-  - file format
-  - file size
-  - input data
-  - which software in which version was used to calculate the result/to do the simulation
-  - configuration of the software
-  - date of the calculation/simulation (start/end or start/duration)
-  - computer on which the calculation/simulation was done
-  - name of the person who submitted the calculation/simulation
-  - description of what was calculated/simulated
+    - file format
+    - file size
+    - input data
+    - which software in which version was used to calculate the result/to do the simulation
+    - configuration of the software
+    - date of the calculation/simulation (start/end or start/duration)
+    - computer on which the calculation/simulation was done
+    - name of the person who submitted the calculation/simulation
+    - description of what was calculated/simulated
 
 ## Where can I get more information about management of research data?
 
-Got to
-
--   <http://www.forschungsdaten.org/> (german version) or <http://www.forschungsdaten.org/en/>
--   (english version)
-
-to find more information about managing research data.
+Go to [http://www.forschungsdaten.org/en/](http://www.forschungsdaten.org/en/) to find more
+information about managing research data.
 
 ## I want to store my research data at ZIH. How can I do that?
 
 Long-term preservation of research data is under construction at ZIH and in a testing phase.
 Nevertheless, you can already use the archiving service. If you would like to become a test
-user, please write an E-Mail to Dr. Klaus Köhler (klaus.koehler \[at\] tu-dresden.de).
+user, please write an email to [Dr. Klaus Köhler](mailto:klaus.koehler@tu-dresden.de).
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
index 1cc64dfd9413d3d99d009a4412bbbf7ffde33e30..095fa14a96d514f6daea6b8edc8850651ba5f367 100644
--- a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
@@ -18,5 +18,5 @@ The recommended way for data transfer inside ZIH Systems is the **datamover**. I
 data transfer machine that provides the best transfer speed. To load, move, copy etc. files from one
 filesystem to another filesystem, you have to use commands prefixed with `dt`: `dtcp`, `dtwget`,
 `dtmv`, `dtrm`, `dtrsync`, `dttar`, `dtls`. These commands submit a job to the data transfer
-machines that execute the selected command.  Plese refer to the detailed documentation regarding the
+machines that execute the selected command. Please refer to the detailed documentation regarding the
 [datamover](datamover.md).
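+
+For instance, a recursive copy from one filesystem to another could look like the following
+sketch; the workspace paths are placeholders, so substitute your own directories:
+
+```console
+marie@login$ dtcp -r /scratch/ws/marie-results /warm_archive/ws/marie-archive/
+```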
diff --git a/doc.zih.tu-dresden.de/docs/index.md b/doc.zih.tu-dresden.de/docs/index.md
index cc174e052a72bf6258ce4844749690ae28d7a46c..24d3907def65508bc521a0fd3109b9792c76f19b 100644
--- a/doc.zih.tu-dresden.de/docs/index.md
+++ b/doc.zih.tu-dresden.de/docs/index.md
@@ -1,48 +1,30 @@
-# ZIH HPC Compendium
+# ZIH HPC Documentation
 
-Dear HPC users,
+This is the documentation of the HPC systems and services provided at
+[TU Dresden/ZIH](https://tu-dresden.de/zih/). This documentation is a work in progress, since we
+try to incorporate more information with increasing experience and with every question you ask us.
+The HPC team invites you to take part in improving these pages by correcting or adding useful
+information.
 
-due to restrictions coming from data security and software incompatibilities the old
-"HPC Compendium" is now reachable only from inside TU Dresden campus (or via VPN).
+## Contribution
 
-Internal users should be redirected automatically.
+Issues concerning this documentation can be reported via the GitLab
+[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
+Please check for existing issues before submitting a new one in order to avoid duplicates.
 
-We apologize for this severe action, but we are in the middle of the preparation for a wiki
-relaunch, so we do not want to redirect resources to fix technical/security issues for a system
-that will last only a few weeks.
+Contributions from the user side are highly welcome. Please refer to
+the detailed [documentation](contrib/howto_contribute.md) to get started.
 
-Thank you for your understanding,
+**Reminder:** Non-documentation issues and requests need to be sent as a ticket to
+[hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de).
 
-your HPC Support Team ZIH
+---
 
-## What is new?
+---
 
-The desire for a new technical documentation is driven by two major aspects:
+## News
 
-1. Clear and user-oriented structure of the content
-1. Usage of modern tools for technical documentation
+**2021-10-05** Offline-maintenance (black building test)
 
-The HPC Compendium provided knowledge and help for many years. It grew with every new hardware
-installation and ZIH stuff tried its best to keep it up to date. But, to be honest, it has become
-quite messy, and housekeeping it was a nightmare.
-
-The new structure is designed with the schedule for an HPC project in mind. This will ease the start
-for new HPC users, as well speedup searching information w.r.t. a specific topic for advanced users.
-
-We decided against a classical wiki software. Instead, we write the documentation in markdown and
-make use of the static site generator [mkdocs](https://www.mkdocs.org/) to create static html files
-from this markdown files. All configuration, layout and content files are managed within a git
-repository. The generated static html files, i.e, the documentation you are now reading, is deployed
-to a web server.
-
-The workflow is flexible, allows a high level of automation, and is quite easy to maintain.
-
-From a technical point, our new documentation system is highly inspired by
-[OLFC User Documentation](https://docs.olcf.ornl.gov/) as well as
-[NERSC Technical Documentation](https://nersc.gitlab.io/).
-
-## Contribute
-
-Contributions are highly welcome. Please refere to
-[README.md](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/blob/main/doc.zih.tu-dresden.de/README.md)
-file of this project.
+**2021-09-29** Introduction to HPC at ZIH ([slides](misc/HPC-Introduction.pdf))
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
index 5324f550e30e66b6ec6830cf7fddbb921b0dbdbf..ca813dbe4b627f2ac74b33163f285c6caa93348b 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
@@ -1,13 +1,13 @@
-# Alpha Centauri - Multi-GPU sub-cluster
+# Alpha Centauri - Multi-GPU Sub-Cluster
 
-The sub-cluster "AlphaCentauri" had been installed for AI-related computations (ScaDS.AI).
+The sub-cluster "Alpha Centauri" has been installed for AI-related computations (ScaDS.AI).
 It has 34 nodes, each with:
 
-- 8 x NVIDIA A100-SXM4 (40 GB RAM)
-- 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz with multithreading enabled
-- 1 TB RAM 3.5 TB `/tmp` local NVMe device
-- Hostnames: `taurusi[8001-8034]`
-- Slurm partition `alpha` for batch jobs and `alpha-interactive` for interactive jobs
+* 8 x NVIDIA A100-SXM4 (40 GB RAM)
+* 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz with multi-threading enabled
+* 1 TB RAM and a 3.5 TB local NVMe device at `/tmp`
+* Hostnames: `taurusi[8001-8034]`
+* Slurm partition `alpha` for batch jobs and `alpha-interactive` for interactive jobs
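+
+For example, an interactive session on the partition `alpha-interactive` could be requested as in
+the following sketch; the resource values are only illustrative and should be adjusted:
+
+```console
+marie@login$ srun --partition=alpha-interactive --nodes=1 --ntasks=1 --gres=gpu:1 --time=01:00:00 --pty bash
+```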
 
 !!! note
 
@@ -19,12 +19,12 @@ It has 34 nodes, each with:
 ### Modules
 
 The easiest way is using the [module system](../software/modules.md).
-The software for the `alpha` partition is available in `modenv/hiera` module environment.
+The software for the partition `alpha` is available in the `modenv/hiera` module environment.
 
 To check the available modules for `modenv/hiera`, use the command
 
-```bash
-module spider <module_name>
+```console
+marie@alpha$ module spider <module_name>
 ```
 
 For example, to check whether PyTorch is available in version 1.7.1:
@@ -95,11 +95,11 @@ Successfully installed torchvision-0.10.0
 
 ### JupyterHub
 
-[JupyterHub](../access/jupyterhub.md) can be used to run Jupyter notebooks on AlphaCentauri
+[JupyterHub](../access/jupyterhub.md) can be used to run Jupyter notebooks on the Alpha Centauri
 sub-cluster. As a starting configuration, a "GPU (NVIDIA Ampere A100)" preset can be used
 in the advanced form. In order to use the latest software, it is recommended to choose
 `fosscuda-2020b` as a standard environment. Already installed modules from `modenv/hiera`
-can be pre-loaded in "Preload modules (modules load):" field.
+can be preloaded in the "Preload modules (modules load):" field.
 
 ### Containers
 
@@ -109,6 +109,6 @@ Detailed information about containers can be found [here](../software/containers
 Nvidia
 [NGC](https://developer.nvidia.com/blog/how-to-run-ngc-deep-learning-containers-with-singularity/)
 containers can be used as an effective solution for machine learning related tasks. (Downloading
-containers requires registration).  Nvidia-prepared containers with software solutions for specific
+containers requires registration). Nvidia-prepared containers with software solutions for specific
 scientific problems can simplify the deployment of deep learning workloads on HPC. NGC containers
 have shown consistent performance compared to directly run code.
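+
+As a sketch, pulling such an NGC image with Singularity and running a short GPU check inside it
+could look like the following; the image name and tag are only examples:
+
+```console
+marie@alpha$ singularity pull pytorch.sif docker://nvcr.io/nvidia/pytorch:21.08-py3
+marie@alpha$ singularity exec --nv pytorch.sif python -c "import torch; print(torch.cuda.is_available())"
+```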
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
deleted file mode 100644
index 06e9be7e7a8ab5efa0ae1272ba6159ac50310e0b..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# Batch Systems
-
-Applications on an HPC system can not be run on the login node. They have to be submitted to compute
-nodes with dedicated resources for user jobs. Normally a job can be submitted with these data:
-
-- number of CPU cores,
-- requested CPU cores have to belong on one node (OpenMP programs) or
-  can distributed (MPI),
-- memory per process,
-- maximum wall clock time (after reaching this limit the process is
-  killed automatically),
-- files for redirection of output and error messages,
-- executable and command line parameters.
-
-Depending on the batch system the syntax differs slightly:
-
-- [Slurm](../jobs_and_resources/slurm.md) (taurus, venus)
-
-If you are confused by the different batch systems, you may want to enjoy this [batch system
-commands translation table](http://slurm.schedmd.com/rosetta.pdf).
-
-**Comment:** Please keep in mind that for a large runtime a computation may not reach its end. Try
-to create shorter runs (4...8 hours) and use checkpointing.  Here is an extreme example from
-literature for the waste of large computing resources due to missing checkpoints:
-
-*Earth was a supercomputer constructed to find the question to the answer to the Life, the Universe,
-and Everything by a race of hyper-intelligent pan-dimensional beings. Unfortunately 10 million years
-later, and five minutes before the program had run to completion, the Earth was destroyed by
-Vogons.* (Adams, D. The Hitchhikers Guide Through the Galaxy)
-
-## Exclusive Reservation of Hardware
-
-If you need for some special reasons, e.g., for benchmarking, a project or paper deadline, parts of
-our machines exclusively, we offer the opportunity to request and reserve these parts for your
-project.
-
-Please send your request **7 working days** before the reservation should start (as that's our
-maximum time limit for jobs and it is therefore not guaranteed that resources are available on
-shorter notice) with the following information to the [HPC
-support](mailto:hpcsupport@zih.tu-dresden.de?subject=Request%20for%20a%20exclusive%20reservation%20of%20hardware&body=Dear%20HPC%20support%2C%0A%0AI%20have%20the%20following%20request%20for%20a%20exclusive%20reservation%20of%20hardware%3A%0A%0AProject%3A%0AReservation%20owner%3A%0ASystem%3A%0AHardware%20requirements%3A%0ATime%20window%3A%20%3C%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%20-%20%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%3E%0AReason%3A):
-
-- `Project:` *\<Which project will be credited for the reservation?>*
-- `Reservation owner:` *\<Who should be able to run jobs on the
-  reservation? I.e., name of an individual user or a group of users
-  within the specified project.>*
-- `System:` *\<Which machine should be used?>*
-- `Hardware requirements:` *\<How many nodes and cores do you need? Do
-  you have special requirements, e.g., minimum on main memory,
-  equipped with a graphic card, special placement within the network
-  topology?>*
-- `Time window:` *\<Begin and end of the reservation in the form
-  year:month:dayThour:minute:second e.g.: 2020-05-21T09:00:00>*
-- `Reason:` *\<Reason for the reservation.>*
-
-**Please note** that your project CPU hour budget will be credited for the reserved hardware even if
-you don't use it.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
index 4e8bde8c6e43ab765135f3199525a09820abf8d1..4677a625300c59a04160389f4cf9a3bf975018c8 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
@@ -1,45 +1,76 @@
 # Binding and Distribution of Tasks
 
+Slurm provides several binding strategies to place and bind the tasks and/or threads of your job
+to cores, sockets and nodes.
+
+!!! note
+
+    Keep in mind that the distribution method might have a direct impact on the execution time of
+    your application. The manipulation of the distribution can either speed up or slow down your
+    application.
+
 ## General
 
-To specify a pattern the commands `--cpu_bind=<cores|sockets>` and
-`--distribution=<block | cyclic>` are needed. cpu_bind defines the resolution in which the tasks
-will be allocated. While --distribution determinates the order in which the tasks will be allocated
-to the cpus.  Keep in mind that the allocation pattern also depends on your specification.
+To specify a pattern, the options `--cpu_bind=<cores|sockets>` and `--distribution=<block|cyclic>`
+are needed. The option `--cpu_bind` defines the resolution at which the tasks are bound, while
+`--distribution` determines the order in which the tasks are allocated to the CPUs. Keep in
+mind that the allocation pattern also depends on your specification.
 
-```Bash
-#!/bin/bash 
-#SBATCH --nodes=2                        # request 2 nodes 
-#SBATCH --cpus-per-task=4                # use 4 cores per task 
-#SBATCH --tasks-per-node=4               # allocate 4 tasks per node - 2 per socket 
+!!! example "Explicitly specify binding and distribution"
 
-srun --ntasks 8 --cpus-per-task 4 --cpu_bind=cores --distribution=block:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2                        # request 2 nodes
+    #SBATCH --cpus-per-task=4                # use 4 cores per task
+    #SBATCH --tasks-per-node=4               # allocate 4 tasks per node - 2 per socket
+
+    srun --ntasks 8 --cpus-per-task 4 --cpu_bind=cores --distribution=block:block ./application
+    ```
 
 In the following sections there are some selected examples of the combinations between `--cpu_bind`
 and `--distribution` for different job types.
 
+## OpenMP Strategies
+
+The illustration below shows the default binding of a pure OpenMP job on a single node with 16 CPUs
+on which 16 threads are allocated.
+
+```bash
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --tasks-per-node=1
+#SBATCH --cpus-per-task=16
+
+export OMP_NUM_THREADS=16
+
+srun --ntasks 1 --cpus-per-task $OMP_NUM_THREADS ./application
+```
+
+![OpenMP](misc/openmp.png)
+{: align="center"}
+
 ## MPI Strategies
 
-### Default Binding and Dsitribution Pattern
+### Default Binding and Distribution Pattern
 
-The default binding uses --cpu_bind=cores in combination with --distribution=block:cyclic. The
-default (as well as block:cyclic) allocation method will fill up one node after another, while
+The default binding uses `--cpu_bind=cores` in combination with `--distribution=block:cyclic`. The
+default (as well as `block:cyclic`) allocation method will fill up one node after another, while
 filling socket one and two in alternation. This results in only even ranks on the first socket of
 each node and odd ranks on the second socket of each node.
 
-\<img alt="" src="data:;base64,..." />
+![Default distribution](misc/mpi_default.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Default binding and default distribution"
 
-srun --ntasks 32 ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 ./application
+    ```
 
 ### Core Bound
 
@@ -50,18 +81,19 @@ application.
 
 This method allocates the tasks linearly to the cores.
 
-\<img alt="" src="data:;base64,..." />
+![block:block distribution](misc/mpi_block_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to cores and block:block distribution"
 
-srun --ntasks 32 --cpu_bind=cores --distribution=block:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=cores --distribution=block:block ./application
+    ```
 
 #### Distribution: cyclic:cyclic
 
@@ -71,18 +103,19 @@ then the first socket of the second node until one task is placed on
 every first socket of every node. After that it will place a task on
 every second socket of every node and so on.
 
-\<img alt="" src="data:;base64,..." />
+![cyclic:cyclic distribution](misc/mpi_cyclic_cyclic.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to cores and cyclic:cyclic distribution"
 
-srun --ntasks 32 --cpu_bind=cores --distribution=cyclic:cyclic
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=cores --distribution=cyclic:cyclic ./application
+    ```
 
 #### Distribution: cyclic:block
 
@@ -90,104 +123,108 @@ The cyclic:block distribution will allocate the tasks of your job in
 alternation on node level, starting with the first node and filling the sockets
 linearly.
 
-\<img alt="" src="data:;base64,..." />
+![cyclic:block distribution](misc/mpi_cyclic_block.png)
+{: align="center"}
+
+!!! example "Binding to cores and cyclic:block distribution"
 
+    ```bash
     #!/bin/bash
     #SBATCH --nodes=2
     #SBATCH --tasks-per-node=16
     #SBATCH --cpus-per-task=1
 
     srun --ntasks 32 --cpu_bind=cores --distribution=cyclic:block ./application
+    ```
 
 ### Socket Bound
 
-Note: The general distribution onto the nodes and sockets stays the
-same. The mayor difference between socket and cpu bound lies within the
-ability of the tasks to "jump" from one core to another inside a socket
-while executing the application. These jumps can slow down the execution
-time of your application.
+The general distribution onto the nodes and sockets stays the same. The major difference between
+socket- and core-bound execution lies in the ability of the OS to move tasks from one core to
+another inside a socket while the application is running. These jumps can slow down the execution
+time of your application.
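+
+A quick way to check which binding is actually applied is the `verbose` option of `--cpu_bind`.
+This is only a sketch (`verbose` can be combined with a binding type); it makes `srun` report the
+CPU mask of every task before the application starts:
+
+```bash
+# report the CPU mask chosen for each task, then bind to sockets as before
+srun --ntasks 32 --cpu_bind=verbose,sockets ./application
+```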
 
 #### Default Distribution
 
-The default distribution uses --cpu_bind=sockets with
---distribution=block:cyclic. The default allocation method (as well as
-block:cyclic) will fill up one node after another, while filling socket
-one and two in alternation. Resulting in only even ranks on the first
-socket of each node and odd on each second socket of each node.
+The default distribution uses `--cpu_bind=sockets` with `--distribution=block:cyclic`. The default
+allocation method (as well as `block:cyclic`) fills up one node after another, while filling
+socket one and two in alternation. This results in only even ranks on the first socket of each
+node and odd ranks on the second socket of each node.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3daXQUVdrA8Wq27AukIQECCQkQCAoCIov44iAHFJARCJtggsqWIyLiCOggghsoisOAo4zoSE6cZGQTj8twDmGZAdzZiSAkhCVASITurJ2EpN4PNdMnk+7qrqR64+b/+5RU31t1763nPjypNB2DLMsSAACAuJp5ewAAAADuRbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10JPZ4PB4KpxALjtyLLs4SuSc4CmTE/O4ekOAAAQnK6nOwrP/4QHwLu8+5SFnAM0NfpzDk93AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3mq7777/fYDAcOnTIeiQqKurzzz/XfoajR48GBwdrb5+WljZkyJCgoKCoqKgGDBSAEDyfc5599tnExMTAwMDOnTsvXry4qqqqAcOFWCh3mrSIiIjnn3/eY5czGo0LFy5csWKFx64IwKd4OOeUlpZu3Ljx0qVLmZmZmZmZL7/8sscuDV9DudOkzZo1KycnZ9u2bbYvXb16ddKkSe3atYuOjp4/f355ebly/NKlS6NGjQoPD7/jjjsOHjxobV9cXJyamtqpU6e2bdtOnTq1qKjI9pyjR4+ePHlyp06d3DQdAD7Owznnww8/vO+++yIiIoYMGfL444/X7Y6mhnKnSQsODl6xYsULL7xQXV1d76WJEye2bNkyJyfnp59+Onz48KJFi5TjkyZNio6Ovnbt2tdff/3BBx9Y20+fPr2goODIkSMXL14MCwubOXOmx2YB4HbhxZxz4MCB/v37u3Q2uK3IOug/A7xo2LBhr776anV1dY8ePdavXy/LcmRk5I4dO2RZPn36tCRJ169fV1pmZWX5+/vX1NScPn3aYDDcuHFDOZ6WlhYUFCTLcm5ursFgsLY3m80Gg8FkMtm9bkZGRmRkpLtnB7fy1t4n59zWvJVzZFlevnx5ly5dioqK3DpBuI/+vd/C0+UVfEyLFi1Wr149e/bs5ORk68HLly8HBQW1bdtW+TYuLs5isRQVFV2+fDkiIqJ169bK8W7duilf5OXlGQyGAQMGWM8QFhaWn58fFhbmqXkAuD14Pue88sor6enpe/fujYiIcNes4PModyD9/ve/f+edd1avXm09Eh0dXVZWVlhYqGSfvLw8Pz8/o9HYsWNHk8lUWVnp5+cnSdK1a9eU9p07dzYYDMeOHaO+AeCUJ3PO0qVLt2/fvn///ujoaLdNCLcB3rsDSZKkNWvWrFu3rqSkRPm2e/fugwYNWrRoUWlpaUFBwbJly1JSUpo1a9ajR4++ffu+++67kiRVVlauW7dOaR8fHz9y5MhZs2ZdvXpVkqTCwsKtW7faXqWmpsZisSi/s7dYLJWVlR6aHgAf45mcs2DBgu3bt+/atctoNFosFv4jelNGuQNJkqSBAweOGTPG+l8hDAbD1q1by8vLu3Tp0rdv3969e69du1Z5acuWLVlZWf369Rs+fPjw4cOtZ8jIyOjQocOQIUNCQkIGDRp04MAB26t8+OGHAQEBycnJBQUFAQEBPFgGmiwP5ByTybR+/fqzZ8/GxcUFBAQEBAQkJiZ6ZnbwQQbrO4Aa09lgkCRJzxkA3I68tffJOUDTpH/v83QHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIroX+UxgMBv0nAQCNyDkAGoqnOwAAQHAGWZa9PQYAAAA34ukOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQnK6PGeTDvpqCxn1UAbHRFHj+YyyIq6aAnAM1enIOT3cAAIDgXPBHJPigQlHp/2mJ2BCVd3+SJq5ERc6BGv2xwdMdAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOPHLnezs7IcffthoNAYGBvbo0WPJkiWNOEmPHj0+//xzjY3vuuuuzMxMuy+lpaUNGTIkKCgoKiqqEcOAa/lUbDz77LOJiYmBgYGdO3devHhxVVVVIwYDX+BTcUXO8Sk+FRtNLecIXu7U1tY++OCDHTp0OHHiRFFRUWZmZlxcnBfHYzQaFy5cuGLFCi+OAQpfi43S0tKNGzdeunQpMzMzMzPz5Zdf9uJg0Gi+FlfkHN/ha7HR5HKOrIP+M7jbpUuXJEnKzs62fenKlStJSUlt27bt2LHjU089VVZWphy/efNmampq586dQ0JC+vbte/r0aVmWExISduzYobw6bNiw5OTkqqoqs9k8b9686Ohoo9E4ZcqUwsJCWZbnz5/fsmVLo9EYExOTnJxsd1QZGRmRkZHumrPr6Lm/xEbjYkOxfPny++67z/Vzdh1v3V/iipzjjr6e4ZuxoWgKOUfwpzsdOnTo3r37vHnz/vGPf1y8eLHuSxMnTmzZsmVOTs5PP/10+PDhRYsWKcenTZt24cKFb7/91mQybd68OSQkxNrlwoUL995779ChQzdv3tyyZcvp06cXFBQcOXLk4sWLYWFhM2fOlCRp/fr1iYmJ69evz8vL27x5swfniobx5dg4cOBA//79XT9nuJ8vxxW8y5djo0nkHO9WWx5QUFCwdOnSfv36tWjRomvXrhkZGbIsnz59WpKk69evK22ysrL8/f1rampycnIkScrPz693koSEhJdeeik6Onrjxo3KkdzcXIPBYD2D2Ww2GAwmk0mW5T59+ihXUcNPWj7CB2NDluXly5d36dKlqKjIhTN1OW/dX+KKnOOOvh7jg7EhN5mcI365Y1VSUvLOO+80a9bs+PHju3fvDgoKsr50/vx5SZIKCgqysrICAwNt+yYkJERGRg4cONBisShH9uzZ06xZs5g6wsPDT506JZN6dPf1PN+JjZUrV8bFxeXl5bl0fq5HuaOF78QVOcfX+E5sNJ2cI/gvs+oKDg5etGiRv7//8ePHo6Ojy8rKCgsLlZfy8vL8/PyUX3CWl5dfvXrVtvu6devatm07bty48vJySZI6d+5sMBiOHTuW9183b95MTEyUJKlZsya0qmLwkdhY
unRpenr6/v37Y2Ji3DBLeJqPxBV8kI/ERpPKOYJvkmvXrj3//PNHjhwpKyu7cePGqlWrqqurBwwY0L1790GDBi1atKi0tLSgoGDZsmUpKSnNmjWLj48fOXLknDlzrl69KsvyyZMnraHm5+e3ffv20NDQhx56qKSkRGk5a9YspUFhYeHWrVuVllFRUWfOnLE7npqaGovFUl1dLUmSxWKprKz0yDLADl+LjQULFmzfvn3Xrl1Go9FisQj/n0JF5WtxRc7xHb4WG00u53j34ZK7mc3m2bNnd+vWLSAgIDw8/N577/3qq6+Uly5fvjxhwgSj0di+ffvU1NTS0lLl+I0bN2bPnt2xY8eQkJB+/fqdOXNGrvNO+Fu3bj322GP33HPPjRs3TCbTggULYmNjg4OD4+LinnnmGeUM+/bt69atW3h4+MSJE+uN5/3336+7+HUfYPogPfeX2GhQbNy8ebPexoyPj/fcWjSct+4vcUXOcUdfz/Cp2GiCOcdgPUsjGAwG5fKNPgN8mZ77S2yIzVv3l7gSGzkHavTfX8F/mQUAAEC5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABNdC/ymUP8sO2CI24A7EFdQQG1DD0x0AACA4gyzL3h4DAACAG/F0BwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACE7Xpyrz+ZVNQeM+mYnYaAo8/6ldxFVTQM6BGj05h6c7AABAcC74m1l8LrOo9P+0RGyIyrs/SRNXoiLnQI3+2ODpDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEJyw5c7BgwfHjBnTpk2boKCgO++8c9myZWVlZR647q1btxYsWNCmTZvQ0NDp06cXFxfbbRYcHGyow8/Pr7Ky0gPDa7K8FQ8FBQWTJ082Go3h4eGjRo06c+aM3WZpaWlDhgwJCgqKioqqe3zmzJl14yQzM9MDY0bjkHNQFznH14hZ7nzxxRcPPPBAnz59vv322+vXr6enp1+/fv3YsWNa+sqyXF1d3ehLr1y5cteuXT/99NO5c+cuXLgwb948u80KCgpK/mvChAnjx4/38/Nr9EXhmBfjITU11WQy/frrr/n5+e3bt580aZLdZkajceHChStWrLB9adGiRdZQSUpKavRI4FbkHNRFzvFFsg76z+AONTU10dHRixYtqne8trZWluUrV64kJSW1bdu2Y8eOTz31VFlZmfJqQkLCsmXLhg4d2r17971795rN5nnz5kVHRxuNxilTphQWFirN1q5dGxMTExYW1r59+1dffdX26u3atfv444+Vr/fu3duiRYubN286GG1hYaGfn9+ePXt0ztod9Nxf34kN78ZDfHz8pk2blK/37t3brFmzW7duqQ01IyMjMjKy7pGUlJQlS5Y0dupu5K376ztxVRc5x1XIOeQcNS6oWLx7eXdQKugjR47YfXXw4MHTpk0rLi6+evXq4MGD586dqxxPSEi44447ioqKlG/Hjh07fvz4wsLC8vLyOXPmjBkzRpblM2fOBAcHnz17VpZlk8n0888/1zv51atX615aeap88OBBB6Nds2ZNt27ddEzXjcRIPV6MB1mWFy9e/MADDxQUFJjN5hkzZkyYMMHBUO2mnvbt20dHR/fv3//NN9+sqqpq+AK4BeVOXeQcVyHnkHPUUO7YsXv3bkmSrl+/bvvS6dOn676UlZXl7+9fU1Mjy3JCQsKGDRuU47m5uQaDwdrMbDYbDAaTyZSTkxMQEPDZZ58VFxfbvfSvv/4qSVJubq71SLNmzb755hsHo+3evfuaNWsaPktPECP1eDEelMbDhg1TVqNnz54XL150MFTb1LNr165Dhw6dPXt269atHTt2tP150Vsod+oi57gKOUc5Ts6xpf/+CvjenbZt20qSlJ+fb/vS5cuXg4KClAaSJMXFxVkslqKiIuXbDh06KF/k5eUZDIYBAwbExsbGxsb27t07LCwsPz8/Li4uLS3tL3/5S1RU1P/93//t37+/3vlDQkIkSTKbzcq3JSUltbW1oaGhn3zyifWdX3Xb7927Ny8vb+bMma6aO2x5MR5kWR4xYkRcXNyNGzdKS0snT548dOjQsrIytXiwNXLkyMGDB3ft2nXixIlvvvlmenq6nqWAm5BzUBc5x0d5t9pyB+X3ps8991y947W1tfUq67179/r5+Vkr6x07dijHz50717x5c5PJpHaJ8vLyN954o3Xr1srvYutq167d3/72N+Xrffv2Of49+pQpU6ZOndqw6XmQnvvrO7HhxXgoLCyUbH7R8N1336mdx/Ynrbo+++yzNm3aOJqqB3nr/vpOXNVFznEVco5ynJxjywUVi3cv7yY7d+709/d/6aWXcnJyLBbLyZMnU1NTDx48WFtbO2jQoBkzZpSUlFy7du3ee++dM2eO0qVuqMmy/NBDDyUlJV25ckWW5evXr2/ZskWW5V9++SUrK8tisciy/OGHH7Zr18429SxbtiwhISE3N7egoOC+++6bNm2a2iCvX7/eqlUr33zDoEKM1CN7NR5iYmJmz55tNpsrKipeeeWV4ODgGzdu2I7w1q1bFRUVaWlpkZGRFRUVyjlramo2bdqUl5dnMpn27dsXHx9v/TW/11Hu1EPOcQlyjvUM5Jx6KHdUHThw4KGHHgoPDw8MDLzzzjtXrVqlvAH+8uXLEyZMMBqN7du3T01NLS0tVdrXCzWTybRgwYLY2Njg4OC4uLhnnnlGluXDhw/fc889oaGhrVu3Hjhw4L/+9S/b61ZVVT399NPh4eHBwcHTpk0zm81qI3zrrbd89g2DCmFSj+y9eDh27NjIkSNbt24dGho6ePBgtX9p3n///brPXIOCgmRZrqmpGTFiRERERKtWreLi4l544YXy8nKXr0zjUO7YIufoR86xdifn1KP//hqsZ2kE5beAes4AX6bn/hIbYvPW/SWuxEbOgRr991fAtyoDAADURbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10L/KZz+hVU0WcQG3IG4ghpiA2p4ugMAAASn629mAQAA+D6e7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMHp+lRlPr+yKWjcJzMRG02B5z+1i7hqCsg5UKMn5/B0BwAACM4FfzOLz2UWlf6flogNUXn3J2niSlTkHKjRHxs83QEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3JEkSerRo4fBYDAYDJGRkSkpKaWlpY04idFoPHfunMvHBu8iNuAOxBXUEBtuQrnzH1u2bJFl+dChQz/++OPq1au9PRz4EGID7kBcQQ2x4Q6UO/8jPj5+7Nixx48fV7598cUXO3fuHBoaOmjQoMOHDysHjUbj22+/PXDgwK5duz799NO2J9m3b19MTMz333/
vuXHD/YgNuANxBTXEhmtR7vwPs9mclZXVq1cv5ds777zz559/vnHjxqRJk6ZOnWr9vM6jR48eOnToxIkTu3fvzsrKqnuGr7/+Ojk5eefOnQMHDvT06OFOxAbcgbiCGmLDxWQd9J/BRyQkJISHh0dGRrZo0WL06NHl5eW2bcLDwy9fvizLckRExPfff68cnDt37po1a5SvIyIiXn/99ZiYmFOnTnls5G6l5/4SG8SGSNd1OeLKLnKOTGyo0H9/ebrzH2vWrDl8+HBaWtrBgwdzcnKUg5988kn//v07deoUGxtbUlJSVFSkHG/Tpo3yhb+/f933kf3pT3+aPHlyYmKihwcPtyI24A7EFdQQG+5AufMf4eHhHTp0mDZt2pNPPrl06VJJks6ePfvcc8+lp6dfunQpLy8vNDRUdvbH57Zs2bJt27YNGzZ4ZMjwEGID7kBcQQ2x4Q4u+Ivognn++ee7dOly7NixW7duBQUFxcfHS5KUnp5+8+ZNp307duyYlZU1bNiwwMDAJ554wv2DhUcRG3AH4gpqiA0XotypLyoqKiUlZfXq1RkZGRMnTuzTp0/btm0HDx7cqVMnLd1jY2OzsrLuv/9+f3//Rx991N2jhScRG3AH4gpqiA0XMjh9IOaos8EgSZKeM8CX6bm/xIbYvHV/iSuxkXOgRv/95b07AABAcJQ7AABAcJQ7AABAcJQ7AABAcJQ7AABAcD5R7pjN5qSkpKCgoJiYmE8//dS2gcVimTt3bpcuXYKCgu6+++49e/Y4ONvp06f9/PxmzJih1mDjxo3x8fF+fn49e/Y8ffq0bYNffvll+PDhoaGh0dHRr7zyit2TrF27tk+fPi1atFA+A0r7XNT6apyj2nU1zv2243i+irlz53bo0MHPz69Lly5qfz34wQcfNPxX165dbRtYLBbD/7J7qpMnTw4bNiwkJKR79+5ff/11g8asZS5qbbTMUa2v07nfphysp9M9rtbX6d53sE8bnce07H2nbRzsfcd9Ha+Vg75a8qRa3GrJk3qo5Vgte1wtNpzufQdr5XTvO+jrdO876Ksl76nFpNO1cnBdLXlSbV5a8mTj+ES5s3DhQovFkp+f//HHH8+ZMyc7O7teg8rKysDAwG3btl28eHHatGnjxo0rLCxUO9v8+fPvuecetVe3bNny2muv/fWvf7127VpaWlp4eLhtm+Tk5N69excVFWVlZb333ns7d+60bdOpU6fXX3993LhxDZ2LWl+Nc1S7rpa5344cz1eRnJz8ww8/FBUVbd68edWqVbt27bLbLC0traKioqKiwu5N8ff3r/iv/Pz8li1bjh8/vl6b6urqRx55ZMSIEb/99tv69eunTJly6dIl7WPWMhe1Nlrm6OD8jud+m1Kbr5Y97mCdHe99B/u00XlMy9532sbB3nfQ1+laOeirJU+qxa2WPKmH3furZY+r9dWy9x2sldO973idHe99x7HheO+r9dWyVmp9NeZJtXlpyZONpOcPbuk/gyzLFoslICDgxx9/VL6dMGHCiy++qHz95JNPPvnkk7ZdWrduvW/fPrttPv300xkzZixZsmT69OnWg3Xb9OrVKzMz0/acddsEBgZa/+ja+PHj33jjDbXxpKSkLFmypHFzqddX+xzV+tqdux567q9LYsPKdr52Y+PKlStRUVHffvutbZtRo0ZlZGTYntnueTZs2DBw4EDbNidOnGjZsmV1dbVy/He/+93q1avVzqN2f7XMxUFsOJijWl+1uevh2vur57q289Wyx9X6at/7Cus+1ZnH1I5r7Os076n11b5Wtn0btFZ149bBWrk25zjYR2p7XK1vg/a+wvb+asxjdvvKGva+bd8G5T216zpdq3p9G7pW9ealsF0r/TnH+5+qnJubW1FR0bt3b+Xb3r17HzlyRPl61KhRtu3PnTtXWlras2dP2zbFxcUrVqzYv3//unXr6naxtikrKzt16lR2dnb79u2bN2/+2GOPvfbaa82bN693nocffvjvf/977969z58//+OPPy5btszBePTMRY2DOapRm7uo6q3Jc889l5aWVlxcvG7dukGDBtlts2TJksWLFycmJq5cuXLgwIF22yg2b948c+ZMtWtZ1dbWnjx50nEbLTT21TJHNXbnLiSNe1xNg/Z+3X2qM4+pHdfS12neU+vb0LWqd12Na2Ubtw7WymM07nE1Tve+2v2tR2Nf7Xvftq/2vKc2Zi1r5WC+DtbK7rzcSE+tpP8Msiz/8MMPfn5+1m/Xrl37wAMPqDUuKysbMGDAihUr7L66YMGCt956S5ZltSccZ86ckSRpxIgRRUVFZ8+ejY+P//Of/2zbLC8vT/nTJJIkvfTSSw4GX68CbdBc1H7ycDxHtb5O594Ieu6vS2LDyvGTMFmWzWbzhQsXPvroo/Dw8FOnTtk2+PLLLw8fPpydnf3iiy+GhIRcuHBB7VTZ2dmtWrX67bffbF+qqqqKjY19+eWXy8vLv/zyy+bNm48fP76hY3Y6F7U2Tueo1lf73LVz7f3Vc91689W4x+32lRuy9+vtU5fkMS1737aN9r1fr2+D1sr2uhrXyjZuHayVa3OO2l5zsMfV+jZo76vdRy17325fjXvftq/2va82Zi1rVa+v9rVyMC93PN3x/nt3goODKysrq6qqlG+Li4uDg4PttqysrHzkkUd69eq1fPly21ePHTu2e/fuhQsXOrhWQECAJEl/+MMfIiIiunbtOnv2bNt3UVVWVg4fPnzu3LkWiyUnJ+eLL754//33XT4XNY7nqEbL3MUWGhrauXPnJ554YsSIEenp6bYNxowZ07dv3549e77++us9e/b85ptv1E61efPmsWPHtmnTxvalli1b7tixY/fu3ZGRkatWrRo3blx0dLQrp+GQ0zmq0T53AWjZ42q0733bfao/j2nZ+7ZttO99277a18q2r/a1so1b/XlSJwd7XI32vd+4HO64r5a9b7evxr3vYMxO18q2r/a1anROaxzv/zIrLi7O39//+PHjd999tyRJJ06c6NWrl22z6urqpKSk8PDwTZs2KX87o55///vfubm57du3lySpvLy8trY2Ozv78OHDddtER0eHh4dbu9s9T25ubm5u7tNPP+3n5xcXF5eUlJSVlZWamurCuahxOkc1WubedLRq1cppg5qaGrsv1dbWpqenv/fee2p977rrrgMHDihf9+vXb8KECY0epx5O5+igo9rcxaBlj6vRuPft7lOdeUzL3rfbRuPet9tX41rZ7du4PKnErc48qZPTPa5Gy95vdA7X3tfu3tfSV23vO+jrdK3U+jYiTzY6p2nn/ac7fn5+U6ZMWblypdls3rt37z//+c/p06crL82aNWvWrFmSJNXU1EyfPr26uvqjjz6qrq62WCy1tbX12jz++ONnz549evTo0aNHH3/88dGjR1t/UrG2MRgMycnJb7/9tslkunDhwqZNm8aOHVuvTWxsbFhY2AcffFBdXX3p0qVt27b16dOnXhtJkm7dumWxWGpqampqapQvNM5Fra+WOar1dTD3253d+Up11sRkMm3YsCEvL++333
5LT0//6quvHn744XptSkpKMjMzr169WlhY+O677/78888jR46s10axe/fuysrK0aNH1x1D3TbffffdtWvX8vPzlyxZUlZWNnXqVNs2amN2Ohe1NlrmqNbXwdxvd3bnq2WPq/XVsvfV9qmePKZl76u10ZL31PpqWSu1vlrWSi1uHayVW2ND4XSPq/V1uvcd3Eene1+tr5a9r9ZXS95zMGana+Wgr9O1cjAvB/dOLz2/CdN/BoXJZJowYUJAQECnTp3S09Otx0eOHPnRRx/Jsnz+/Pl6w7a+29zapq56v8Ou26a8vDwlJSUkJKRDhw5//OMfa2pqbNvs2bNnwIABQUFBkZGR8+bNq6iosHfzt+cAAAHsSURBVG2zZMmSuuN59913Nc5Fra/GOapdV23ueui5v66KDbX5WtekuLh41KhRrVu3DgoK6t+//86dO619rW3MZvPQoUNDQ0ODg4MHDRq0e/du2zaKRx99dP78+fXGULfNCy+8EBYWFhAQMGbMmPPnz9ttozZmp3NRa6Nljmp9HcxdD1fdXz3XVVtPLXtcra/Tve9gnzY6j2nZ+w7aWKnlPQd9na6Vg75O18pB3KqtlZ640hIbsoY9rtbX6d53sFZO975aXy17X62vlrznOK4cr5WDvk7XysG81NZKT2z85wy6Ouu+vAPV1dW9evWqqqq6jdr4Wl+dXJV6XM7X7jux4fvXvR3vUVPrK3sp59yOa9XU+squyDkG61kaQfldnZ4zwJfpub/Ehti8dX+JK7GRc6BG//31/nt3AAAA3IpyBwAACI5yBwAACI5yBwAACI5yBwAACM4Fn6rc0M+ORNNBbMAdiCuoITaghqc7AABAcLo+dwcAAMD38XQHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAI7v8BE+cBiPwLm7cAAAAASUVORK5CYII="
-/>
+![Binding to sockets and block:cyclic distribution](misc/mpi_socket_block_cyclic.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to sockets and block:cyclic distribution"
 
-srun --ntasks 32 -cpu_bind=sockets ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=sockets ./application
+    ```
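+
+To confirm the described rank placement, every task can report its own CPU affinity. The following
+one-liner is only a sketch and assumes that `taskset` (util-linux) is available on the compute
+nodes; `SLURM_PROCID` holds the rank of each task:
+
+```bash
+# each task prints its rank, its node and the CPU list it is allowed to run on
+srun --ntasks 32 --cpu_bind=sockets bash -c \
+    'echo "rank ${SLURM_PROCID} on $(hostname): $(taskset -cp $$)"'
+```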
 
 #### Distribution: block:block
 
 This method allocates the tasks linearly to the cores.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAdq0lEQVR4nO3deVRU5/3H8TsUHWAQRhlkcQQEFUWjUWNc8zPHpJq4tSruFjSN24lbSBRN3BMbjYk21RNrtWnkkELdTRNTzxExrZqmGhUTFaMQRFFZijOswzLc3x+35VCGQWWYxYf36y/m3ufeeS73y9fP3BnvqGRZlgAAAMTl5uwJAAAA2BdxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAATnbsvGKpWqueYB4Ikjy7KDn5GeA7RktvQcru4AAADB2XR1R+H4V3gAnMu5V1noOUBLY3vP4eoOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMSdluv5559XqVRnz56tXRIYGHjkyJFH38OlS5e8vb0ffXxCQsLgwYM1Gk1gYOBjTBSAEBzfc15//fWoqCgvL6+QkJDly5dXVlY+xnQhFuJOi+bn57ds2TKHPZ1Op1u6dOm6desc9owAXIqDe05JScmuXbtu376dnJycnJy8du1ahz01XA1xp0V79dVXMzIyDh48aLnq3r17kyZNat++vV6vX7hwYVlZmbL89u3bI0eO1Gq1PXv2PHPmTO34oqKiBQsWdOzY0d/ff+rUqQUFBZb7HDVq1OTJkzt27GinwwHg4hzcc3bv3v3cc8/5+fkNHjx49uzZdTdHS0PcadG8vb3XrVu3cuXKqqqqeqsmTpzYqlWrjIyM8+fPX7hwIS4uTlk+adIkvV5///79Y8eO/f73v68dP2PGjNzc3IsXL2ZnZ/v6+s6aNcthRwHgSeHEnnP69Ol+/fo169HgiSLbwPY9wImGDRv2zjvvVFVVdevWbfv27bIsBwQEHD58WJbl9PR0SZLy8vKUkSkpKR4eHmazOT09XaVSFRYWKssTEhI0Go0sy5mZmSqVqna80WhUqVQGg6HB501KSgoICLD30cGunPW3T895ojmr58iyvGbNmk6dOhUUFNj1AGE/tv/tuzs6XsHFuLu7b9q0ac6cOTExMbUL79y5o9Fo/P39lYfh4eEmk6mgoODOnTt+fn5t27ZVlnfp0kX5ISsrS6VS9e/fv3YPvr6+OTk5vr6+jjoOAE8Gx/ecDRs2JCYmpqam+vn52euo4PKIO5B+8YtffPjhh5s2bapdotfrS0tL8/Pzle6TlZWlVqt1Ol2HDh0MBkNFRYVarZYk6f79+8r4kJAQlUqVlpZGvgHwUI7sOStWrDh06NDXX3+t1+vtdkB4AvDZHUiSJG3ZsuWjjz4qLi5WHnbt2nXgwIFxcXElJSW5ubmrVq2KjY11c3Pr1q1bnz59tm3bJklSRUXFRx99pIyPiIgYMWLEq6++eu/ePUmS8vPzDxw4YPksZrPZZDIp79mbTKaKigoHHR4AF+OYnrN48eJDhw4dP35cp9OZTCb+I3pLRtyBJEnSgAEDRo8eXftfIVQq1YEDB8rKyjp16tSnT59evXpt3bpVWbV///6UlJS+ffsOHz58+PDhtXtISkoKDg4ePHhwmzZtBg4cePr0actn2b17t6enZ0xMTG5urqenJxeWgRbLAT3HYDBs3779xo0b4eHhnp6enp6eUVFRjjk6uCBV7SeAmrKxSiVJki17APAkctbfPj0HaJls/9vn6g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCcu+27UKlUtu8EAB4RPQfA4+LqDgAAEJxKlmVnzwEAAMCOuLoDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA4m24zyM2+WoKm3aqA2mgJHH8bC+qqJaDnwBpbeg5XdwAAgOCa4UskuFGhqGx/tURtiMq5r6SpK1HRc2CN7bXB1R0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAghM/7ly9enXs2LE6nc7Ly6tbt27x8fFN2Em3bt2OHDnyiIOffvrp5OTkBlclJCQMHjxYo9EEBgY2YRpoXi5VG6+//npUVJSXl1dISMjy5csrKyubMBm4ApeqK3qOS3Gp2mhpPUfwuFNTU/PSSy8FBwd///33BQUFycnJ4eHhTpyPTqdbunTpunXrnDgHKFytNkpKSnbt2nX79u3k5OTk5OS1a9c6cTJoMlerK3qO63C12mhxPUe2ge17sLfbt29LknT16lXLVXfv3o2Ojvb39+/QocNrr71WWlqqLH/w4MGCBQtCQkLatGnTp0+f9PR0WZYjIyMPHz6srB02bFhMTExlZaXRaJw/f75er9fpdFOmTMnPz5dleeHCha1atdLpdKGhoTExMQ3OKikpKSAgwF7H3HxsOb/URtNqQ7FmzZrnnnuu+Y+5+Tjr/FJX9Bx7bOsYrlkbipbQcwS/uhMcHNy1a9f58+f/5S9/yc7Orrtq4sSJrVq1ysjIOH/+/IULF+Li4pTl06ZNu3Xr1jfffGMwGPbu3dumTZvaTW7dujVkyJChQ4fu3bu3VatWM2bMyM3NvXjxYnZ2tq+v76xZsyRJ2r59e1RU1Pbt27Oysvbu3evAY8XjceXaOH36dL9+/Zr/mGF/rlxXcC5Xro0W0XOcm7YcIDc3d8WKFX379nV3d+/cuXNSUpIsy+np6ZIk5eXlKWNSUlI8PDzMZnNGRoYkSTk5OfV2EhkZuXr1ar1ev2vXLmVJZmamSqWq3YPRaFSpVAaDQZbl3r17K89iDa+0XIQL1oYsy2vWrOnUqVNBQUEzHmmzc9b5pa7oOfbY1mFcsDbkFtNzxI87tYqLiz/88EM3N7fLly+fOHFCo9HUrvrpp58kScrNzU1JSfHy8rLcNjIyMiAgYMCAASaTSVly8uRJNze30Dq0Wu2VK1dkWo/N2zqe69TG+vXrw8PDs7KymvX4mh9x51G4Tl3Rc1yN69RGy+k5gr+ZVZe3t3dcXJyHh8fly5f1en1paWl+fr6yKisrS61WK29wlpWV3bt3z3Lzjz76yN/ff9y4cWVlZZIkhYSEqFSqtLS0rP968OBBVFSUJElubi3otyoGF6mNFStWJCYmfv3116GhoXY4Sjiai9QV
XJCL1EaL6jmC/5Hcv39/2bJlFy9eLC0tLSwsfO+996qqqvr379+1a9eBAwfGxcWVlJTk5uauWrUqNjbWzc0tIiJixIgRc+fOvXfvnizLP/zwQ22pqdXqQ4cO+fj4vPzyy8XFxcrIV199VRmQn59/4MABZWRgYOD169cbnI/ZbDaZTFVVVZIkmUymiooKh/wa0ABXq43FixcfOnTo+PHjOp3OZDIJ/59CReVqdUXPcR2uVhstruc49+KSvRmNxjlz5nTp0sXT01Or1Q4ZMuTLL79UVt25c2fChAk6nS4oKGjBggUlJSXK8sLCwjlz5nTo0KFNmzZ9+/a9fv26XOeT8NXV1b/61a+effbZwsJCg8GwePHisLAwb2/v8PDwJUuWKHs4depUly5dtFrtxIkT681n586ddX/5dS9guiBbzi+18Vi18eDBg3p/mBEREY77XTw+Z51f6oqeY49tHcOlaqMF9hxV7V6aQKVSKU/f5D3AldlyfqkNsTnr/FJXYqPnwBrbz6/gb2YBAAAQdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIzt32XShfyw5YojZgD9QVrKE2YA1XdwAAgOBUsiw7ew4AAAB2xNUdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgbLqrMvevbAmadmcmaqMlcPxdu6irloCeA2ts6Tlc3QEAAIJrhu/M4r7MorL91RK1ISrnvpKmrkRFz4E1ttcGV3cAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMEJG3fOnDkzevTodu3aaTSap556atWqVaWlpQ543urq6sWLF7dr187Hx2fGjBlFRUUNDvP29lbVoVarKyoqHDC9FstZ9ZCbmzt58mSdTqfVakeOHHn9+vUGhyUkJAwePFij0QQGBtZdPmvWrLp1kpyc7IA5o2noOaiLnuNqxIw7n3/++QsvvNC7d+9vvvkmLy8vMTExLy8vLS3tUbaVZbmqqqrJT71+/frjx4+fP3/+5s2bt27dmj9/foPDcnNzi/9rwoQJ48ePV6vVTX5SNM6J9bBgwQKDwfDjjz/m5OQEBQVNmjSpwWE6nW7p0qXr1q2zXBUXF1dbKtHR0U2eCeyKnoO66DmuSLaB7XuwB7PZrNfr4+Li6i2vqamRZfnu3bvR0dH+/v4dOnR47bXXSktLlbWRkZGrVq0aOnRo165dU1NTjUbj/Pnz9Xq9TqebMmVKfn6+Mmzr1q2hoaG+vr5BQUHvvPOO5bO3b9/+k08+UX5OTU11d3d/8OBBI7PNz89Xq9UnT5608ajtwZbz6zq14dx6iIiI2LNnj/Jzamqqm5tbdXW1takmJSUFBATUXRIbGxsfH9/UQ7cjZ51f16mruug5zYWeQ8+xphkSi3Of3h6UBH3x4sUG1w4aNGjatGlFRUX37t0bNGjQvHnzlOWRkZE9e/YsKChQHo4ZM2b8+PH5+fllZWVz584dPXq0LMvXr1/39va+ceOGLMsGg+G7776rt/N79+7VfWrlqvKZM2came2WLVu6dOliw+HakRitx4n1IMvy8uXLX3jhhdzcXKPROHPmzAkTJjQy1QZbT1BQkF6v79ev3+bNmysrKx//F2AXxJ266DnNhZ5Dz7GGuNOAEydOSJKUl5dnuSo9Pb3uqpSUFA8PD7PZLMtyZGTkjh07lOWZmZkqlap2mNFoVKlUBoMhIyPD09Nz3759RUVFDT71jz/+KElSZmZm7RI3N7evvvqqkdl27dp1y5Ytj3+UjiBG63FiPSiDhw0bpvw2unfvnp2d3chULVvP8ePHz549e+PGjQMHDnTo0MHy9aKzEHfqouc0F3qOspyeY8n28yvgZ3f8/f0lScrJybFcdefOHY1GowyQJCk8PNxkMhUUFCgPg4ODlR+ysrJUKlX//v3DwsLCwsJ69erl6+ubk5MTHh6ekJDw8ccfBwYG/t///d/XX39db/9t2rSRJMloNCoPi4uLa2pqfHx8Pv3009pPftUdn5qampWVNWvWrOY6dlhyYj3Isvziiy+Gh4cXFhaWlJRMnjx56NChpaWl1urB0ogRIwYNGtS5c+eJEydu3rw5MTHRll8F7ISeg7roOS7KuWnLHpT3Td944416y2tqauol69TUVLVaXZusDx8+rCy/efPmz372M4PBYO0pysrKfvOb37Rt21Z5L7au9u3b/+lPf1J+PnXqVOPvo0+ZMmXq1KmPd3gOZMv5dZ3acGI95OfnSxZvNPzzn/+0th/LV1p17du3r127do0dqgM56/y6Tl3VRc9pLvQcZTk9x1IzJBbnPr2dHD161MPDY/Xq1RkZGSaT6YcffliwYMGZM2dqamoGDhw4c+bM4uLi+/fvDxkyZO7cucomdUtNluWXX345Ojr67t27sizn5eXt379fluVr166lpKSYTCZZlnfv3t2+fXvL1rNq1arIyMjMzMzc3Nznnntu2rRp1iaZl5fXunVr1/zAoEKM1iM7tR5CQ0PnzJljNBrLy8s3bNjg7e1dWFhoOcPq6ury8vKEhISAgIDy8nJln2azec+ePVlZWQaD4dSpUxEREbVv8zsdcaceek6zoOfU7oGeUw9xx6rTp0+//PLLWq3Wy8vrqaeeeu+995QPwN+5c2fChAk6nS4oKGjBggUlJSXK+HqlZjAYFi9eHBYW5u3tHR4evmTJElmWL1y48Oyzz/r4+LRt23bAgAF///vfLZ+3srJy0aJFWq3W29t72rRpRqPR2gzff/99l/3AoEKY1iM7rx7S0tJGjBjRtm1bHx+fQYMGWfuXZufOnXWvuWo0GlmWzWbziy++6Ofn17p16/Dw8JUrV5aVlTX7b6ZpiDuW6Dm2o+fUbk7Pqcf286uq3UsTKO8C2rIHuDJbzi+1ITZnnV/qSmz0HFhj+/kV8KPKAAAAdRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBudu+i4d+wypaLGoD9kBdwRpqA9ZwdQcAAAjOpu/MAgAAcH1c3QEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACM6muypz/8qWoGl3ZqI2WgLH37WLumoJ6Dmwxpaew9UdAAAguGb4zizuyywq218tURuicu4raepKVPQcWGN7bXB1BwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdSZKkbt26qVQqlUoVEBAQGxtbUlLShJ3odLqbN282+9zgXNQG7IG6gjXUhp0Qd/5j//79siyfPXv23LlzmzZtcvZ04EKoDdgDdQVrqA17IO78j4iIiDFjxly+fFl5+NZbb4WEhPj4+AwcOPDChQvKQp1O98EHHwwYMKBz586LFi2y3MmpU6dCQ0O//fZbx80b9kdtwB6oK1hDbTQv4s7/MBqNKSkpPXr0UB4+9dRT3333XWF
h4aRJk6ZOnVp7v85Lly6dPXv2+++/P3HiREpKSt09HDt2LCYm5ujRowMGDHD07GFP1AbsgbqCNdRGM5NtYPseXERkZKRWqw0ICHB3dx81alRZWZnlGK1We+fOHVmW/fz8vv32W2XhvHnztmzZovzs5+e3cePG0NDQK1euOGzmdmXL+aU2qA2RnrfZUVcNoufI1IYVtp9fru78x5YtWy5cuJCQkHDmzJmMjAxl4aefftqvX7+OHTuGhYUVFxcXFBQoy9u1a6f84OHhUfdzZL/97W8nT54cFRXl4MnDrqgN2AN1BWuoDXsg7vyHVqsNDg6eNm3ar3/96xUrVkiSdOPGjTfeeCMxMfH27dtZWVk+Pj7yw758bv/+/QcPHtyxY4dDpgwHoTZgD9QVrKE27KEZvhFdMMuWLevUqVNaWlp1dbVGo4mIiJAkKTEx8cGDBw/dtkOHDikpKcOGDfPy8nrllVfsP1k4FLUBe6CuYA210YyIO/UFBgbGxsZu2rQpKSlp4sSJvXv39vf3HzRoUMeOHR9l87CwsJSUlOeff97Dw2P69On2ni0cidqAPVBXsIbaaEaqh14Qa2xjlUqSJFv2AFdmy/mlNsTmrPNLXYmNngNrbD+/fHYHAAAIjrgDAAAER9wBAACCI+4AAADBEXcaYDQao6OjNRpNaGjoZ599ZjnAZDKp/hff4gbgobZu3dq7d293d3flZip17dq1KyIiQq1Wd+/ePT09vd5ak8k0b968Tp06aTSaZ5555uTJk7Wr5s2bFxwcrFarO3XqRCN6QjVyfhXp6elqtXrmzJkNbm6tBhqptxaIuNOApUuXmkymnJycTz75ZO7cuVevXq03wMPDo/y/cnJyWrVqNX78eKdMFQ5w7dq14cOH+/j46PX6DRs2NDjGWlt56aWXajNx586dHTJfuK6OHTtu3Lhx3Lhx9Zbv37//3Xff/cMf/nD//v2EhAStVltvQEVFhZeX18GDB7Ozs6dNmzZu3Lj8/HxlVUxMzL/+9a+CgoK9e/e+9957x48fd8SRoFk1cn4VCxcufPbZZ61tbq0GrNVbC2XLN1DYvgcXZDKZPD09z507pzycMGHCW2+91cj4HTt2DBgwwCFTczRbzq9ItfHMM88sWbKkoqIiPT29ffv2R44csRyzb9++v/71r+PHj4+Pj6+7fOTIkQkJCUoyrqiocNSU7c5Z51eMuoqNja1XJz169EhOTn70PbRt2/bUqVP1Ft69ezcwMPCbb75phik6CT1HUe/8fvbZZzNnzoyPj58xY0bjGzZYA5b19iSy/fxydae+zMzM8vLyXr16KQ979ep15cqVRsbv3bs3JibGIVODc1y9enX69OmtW7eOjIwcMmSI5dU+SZImTZo0ZswYHx8fy1WtWrXy8PDw8PBo3bq1/SeLJ09paemVK1euXr0aFBSk1+tXrlxpNpsbGX/z5s2SkpLu3bvXLnnjjTf8/f3DwsLWrl07cOBA+08ZdlTv/BYVFa1bt+79999vfCtq4KGIO/WVlJSo1eraf5l8fHzqfulaPdeuXUtLS5s6daqjZgcnGDt27J///GeTyXTt2rVz586NHDnysTaPj48PCQl56aWXvv32WzvNEE+0nJwcSZLOnj37ww8/nDp1av/+/R9//LG1wWVlZdOnT3/77bfbt29fu3Dt2rXffffdzp07V65c2WAcx5PC8vyuXr16zpw5QUFBjW9IDTwUcac+b2/vioqKyspK5WFRUZG3t7ckSTt27FA+gTFmzJjawXv37h0zZkztF9JCSJs3b/7iiy88PT2joqJmz57dt2/fR9920aJFR44cOX78eL9+/X7+859nZ2fbb554Qnl6ekqS9Oabb/r5+XXu3HnOnDnHjh2TGuo5FRUVv/zlL3v06LFmzZq6e/Dx8QkJCXnllVdefPHFxMRExx8CmoXl+U1LSztx4sTSpUvrjbSsDWrgoYg79YWHh3t4eFy+fFl5+P333/fo0UOSpIULFyrv/33xxRfKqpqamsTERN7JEltFRcXw4cPnzZtnMpkyMjI+//zznTt3Slbir6XRo0f36dOne/fuGzdu7N69+1dffeWoieOJodfrtVqtco986b83y5csek5VVVV0dLRWq92zZ0/tGEu8Z/qEavD8/uMf/8jMzAwKCtLpdL/73e8OHDigvNyy/PeoLmqgQcSd+tRq9ZQpU9avX280GlNTU//2t7/NmDGjwZEnTpyoqKgYNWqUg2cIR8rMzMzMzFy0aJFarQ4PD4+Ojk5JSZEe1m4a1Lp168Y/kwHhVVdXm0wms9lsNpuVHyRJUqlUMTExH3zwgcFguHXr1p49eywztNlsnjFjRlVV1R//+MeqqiqTyVRTUyNJksFg2LFjR1ZW1r///e/ExMQvv/xy7NixTjgw2Mba+Z09e/aNGzcuXbp06dKl2bNnjxo1SrnyV1cjNdBgvbVctnzO2fY9uCaDwTBhwgRPT8+OHTsmJiZaGzZ9+vTaf/OEZMv5FaY2ysrKfH19t23bVllZmZ2d/fTTT2/YsMFyWFVVVXl5+cyZM998883y8vLq6mpZlouKipKSku7evZuXl7d161ZPT88bN244/Ajswlnn90mvq/j4+Lrtd9u2bcrysrKy2NjYNm3aBAcHv/3222azud6GP/30U73WnZSUJMtyUVHRyJEj27Ztq9Fo+vXrd/ToUUcfUrNqsT3H2vmty9r/zGqkBqzV25PI9vPLN6LDKr6dWJGamhofH3/16lVvb+/x48dv27bNw8Oj3pgVK1Zs3ry59uG2bduWLl1aVFQ0evToy5cv19TU9OzZ8913333hhRccO3d74RvRYQ/0HFhj+/kl7sAqWg+sIe7AHug5sMb288tndwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABOdu+y4auZ05WjhqA/ZAXcEaagPWcHUHAAAIzqbbDAIAALg+ru4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcP8PtzynrHMtHtYAAAAASUVORK5CYII="
-/>
+![Binding to sockets and block:block distribution](misc/mpi_socket_block_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+!!! example "Binding to sockets and block:block distribution"
 
-srun --ntasks 32 --cpu_bind=sockets --distribution=block:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=16
+    #SBATCH --cpus-per-task=1
+
+    srun --ntasks 32 --cpu_bind=sockets --distribution=block:block ./application
+    ```
 
 #### Distribution: block:cyclic
 
-The block:cyclic distribution will allocate the tasks of your job in
+The `block:cyclic` distribution will allocate the tasks of your job in
 alternation between the first node and the second node while filling the
 sockets linearly.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3daXQUVdrA8Wq27AukIQECCQkQCAoCIov44iAHFJARCJtggsqWIyLiCOggghsoisOAo4zoSE6cZGQTj8twDmGZAdzZiSAkhCVASITurJ2EpN4PNdMnk+7qrqR64+b/+5RU31t1763nPjypNB2DLMsSAACAuJp5ewAAAADuRbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10JPZ4PB4KpxALjtyLLs4SuSc4CmTE/O4ekOAAAQnK6nOwrP/4QHwLu8+5SFnAM0NfpzDk93AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3AACA4Ch3mq7777/fYDAcOnTIeiQqKurzzz/XfoajR48GBwdrb5+WljZkyJCgoKCoqKgGDBSAEDyfc5599tnExMTAwMDOnTsvXry4qqqqAcOFWCh3mrSIiIjnn3/eY5czGo0LFy5csWKFx64IwKd4OOeUlpZu3Ljx0qVLmZmZmZmZL7/8sscuDV9DudOkzZo1KycnZ9u2bbYvXb16ddKkSe3atYuOjp4/f355ebly/NKlS6NGjQoPD7/jjjsOHjxobV9cXJyamtqpU6e2bdtOnTq1qKjI9pyjR4+ePHlyp06d3DQdAD7Owznnww8/vO+++yIiIoYMGfL444/X7Y6mhnKnSQsODl6xYsULL7xQXV1d76WJEye2bNkyJyfnp59+Onz48KJFi5TjkyZNio6Ovnbt2tdff/3BBx9Y20+fPr2goODIkSMXL14MCwubOXOmx2YB4HbhxZxz4MCB/v37u3Q2uK3IOug/A7xo2LBhr776anV1dY8ePdavXy/LcmRk5I4dO2RZPn36tCRJ169fV1pmZWX5+/vX1NScPn3aYDDcuHFDOZ6WlhYUFCTLcm5ursFgsLY3m80Gg8FkMtm9bkZGRmRkpLtnB7fy1t4n59zWvJVzZFlevnx5ly5dioqK3DpBuI/+vd/C0+UVfEyLFi1Wr149e/bs5ORk68HLly8HBQW1bdtW+TYuLs5isRQVFV2+fDkiIqJ169bK8W7duilf5OXlGQyGAQMGWM8QFhaWn58fFhbmqXkAuD14Pue88sor6enpe/fujYiIcNes4PModyD9/ve/f+edd1avXm09Eh0dXVZWVlhYqGSfvLw8Pz8/o9HYsWNHk8lUWVnp5+cnSdK1a9eU9p07dzYYDMeOHaO+AeCUJ3PO0qVLt2/fvn///ujoaLdNCLcB3rsDSZKkNWvWrFu3rqSkRPm2e/fugwYNWrRoUWlpaUFBwbJly1JSUpo1a9ajR4++ffu+++67kiRVVlauW7dOaR8fHz9y5MhZs2ZdvXpVkqTCwsKtW7faXqWmpsZisSi/s7dYLJWVlR6aHgAf45mcs2DBgu3bt+/atctoNFosFv4jelNGuQNJkqSBAweOGTPG+l8hDAbD1q1by8vLu3Tp0rdv3969e69du1Z5acuWLVlZWf369Rs+fPjw4cOtZ8jIyOjQocOQIUNCQkIGDRp04MAB26t8+OGHAQEBycnJBQUFAQEBPFgGmiwP5ByTybR+/fqzZ8/GxcUFBAQEBAQkJiZ6ZnbwQQbrO4Aa09lgkCRJzxkA3I68tffJOUDTpH/v83QHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIroX+UxgMBv0nAQCNyDkAGoqnOwAAQHAGWZa9PQYAAAA34ukOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQHOUOAAAQnK6PGeTDvpqCxn1UAbHRFHj+YyyIq6aAnAM1enIOT3cAAIDgXPBHJPigQlHp/2mJ2BCVd3+SJq5ERc6BGv2xwdMdAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOModAAAgOPHLnezs7IcffthoNAYGBvbo0WPJkiWNOEmPHj0+//xzjY3vuuuuzMxMuy+lpaUNGTIkKCgoKiqqEcOAa/lUbDz77LOJiYmBgYGdO3devHhxVVVVIwYDX+BTcUXO8Sk+FRtNLecIXu7U1tY++OCDHTp0OHHiRFFRUWZmZlxcnBfHYzQaFy5cuGLFCi+OAQpfi43S0tKNGzdeunQpMzMzMzPz5Zdf9uJg0Gi+FlfkHN/ha7HR5HKOrIP+M7jbpUuXJEnKzs62fenKlStJSUlt27bt2LHjU089VVZWphy/efNmampq586dQ0JC+vbte/r0aVmWExISduzYobw6bNiw5OTkqqoqs9k8b9686Ohoo9E4ZcqUwsJCWZbnz5/fsmVLo9EYExOTnJxsd1QZGRmRkZHumrPr6Lm/xEbjYkOxfPny++67z/Vzdh1v3V/iipzjjr6e4ZuxoWgKOUfwpzsdOnTo3r37vHnz/vGPf1y8eLHuSxMnTmzZsmVOTs5PP/10+PDhRYsWKcenTZt24cKFb7/91mQybd68OSQkxNrlwoUL995779ChQzdv3tyyZcvp06cXFBQcOXLk4sWLYWFhM2fOlCRp/fr1iYmJ69evz8vL27x5swfniobx5dg4cOBA//79XT9nuJ8vxxW8y5djo0nkHO9WWx5QUFCwdOnSfv36tWjRomvXrhkZGbIsnz59WpKk69evK22ysrL8/f1rampycnIkScrPz693koSEhJdeeik6Onrjxo3KkdzcXIPBYD2D2Ww2GAwmk0mW5T59+ihXUcNPWj7CB2NDluXly5d36dKlqKjIhTN1OW/dX+KKnOOOvh7jg7EhN5mcI365Y1VSUvLOO+80a9bs+PHju3fvDgoKsr50/vx5SZIKCgqysrICAwNt+yYkJERGRg4cONBisShH9uzZ06xZs5g6wsPDT506JZN6dPf1PN+JjZUrV8bFxeXl5bl0fq5HuaOF78QVOcfX+E5sNJ2cI/gvs+oKDg5etGiRv7//8ePHo6Ojy8rKCgsLlZfy8vL8/PyUX3CWl5dfvXrVtvu6devatm07bty48vJySZI6d+5sMBiOHTuW9183b95MTEyUJKlZsya0qmLwkdhY
unRpenr6/v37Y2Ji3DBLeJqPxBV8kI/ERpPKOYJvkmvXrj3//PNHjhwpKyu7cePGqlWrqqurBwwY0L1790GDBi1atKi0tLSgoGDZsmUpKSnNmjWLj48fOXLknDlzrl69KsvyyZMnraHm5+e3ffv20NDQhx56qKSkRGk5a9YspUFhYeHWrVuVllFRUWfOnLE7npqaGovFUl1dLUmSxWKprKz0yDLADl+LjQULFmzfvn3Xrl1Go9FisQj/n0JF5WtxRc7xHb4WG00u53j34ZK7mc3m2bNnd+vWLSAgIDw8/N577/3qq6+Uly5fvjxhwgSj0di+ffvU1NTS0lLl+I0bN2bPnt2xY8eQkJB+/fqdOXNGrvNO+Fu3bj322GP33HPPjRs3TCbTggULYmNjg4OD4+LinnnmGeUM+/bt69atW3h4+MSJE+uN5/3336+7+HUfYPogPfeX2GhQbNy8ebPexoyPj/fcWjSct+4vcUXOcUdfz/Cp2GiCOcdgPUsjGAwG5fKNPgN8mZ77S2yIzVv3l7gSGzkHavTfX8F/mQUAAEC5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABEe5AwAABNdC/ymUP8sO2CI24A7EFdQQG1DD0x0AACA4gyzL3h4DAACAG/F0BwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACI5yBwAACE7Xpyrz+ZVNQeM+mYnYaAo8/6ldxFVTQM6BGj05h6c7AABAcC74m1l8LrOo9P+0RGyIyrs/SRNXoiLnQI3+2ODpDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEBzlDgAAEJyw5c7BgwfHjBnTpk2boKCgO++8c9myZWVlZR647q1btxYsWNCmTZvQ0NDp06cXFxfbbRYcHGyow8/Pr7Ky0gPDa7K8FQ8FBQWTJ082Go3h4eGjRo06c+aM3WZpaWlDhgwJCgqKioqqe3zmzJl14yQzM9MDY0bjkHNQFznH14hZ7nzxxRcPPPBAnz59vv322+vXr6enp1+/fv3YsWNa+sqyXF1d3ehLr1y5cteuXT/99NO5c+cuXLgwb948u80KCgpK/mvChAnjx4/38/Nr9EXhmBfjITU11WQy/frrr/n5+e3bt580aZLdZkajceHChStWrLB9adGiRdZQSUpKavRI4FbkHNRFzvFFsg76z+AONTU10dHRixYtqne8trZWluUrV64kJSW1bdu2Y8eOTz31VFlZmfJqQkLCsmXLhg4d2r17971795rN5nnz5kVHRxuNxilTphQWFirN1q5dGxMTExYW1r59+1dffdX26u3atfv444+Vr/fu3duiRYubN286GG1hYaGfn9+ePXt0ztod9Nxf34kN78ZDfHz8pk2blK/37t3brFmzW7duqQ01IyMjMjKy7pGUlJQlS5Y0dupu5K376ztxVRc5x1XIOeQcNS6oWLx7eXdQKugjR47YfXXw4MHTpk0rLi6+evXq4MGD586dqxxPSEi44447ioqKlG/Hjh07fvz4wsLC8vLyOXPmjBkzRpblM2fOBAcHnz17VpZlk8n0888/1zv51atX615aeap88OBBB6Nds2ZNt27ddEzXjcRIPV6MB1mWFy9e/MADDxQUFJjN5hkzZkyYMMHBUO2mnvbt20dHR/fv3//NN9+sqqpq+AK4BeVOXeQcVyHnkHPUUO7YsXv3bkmSrl+/bvvS6dOn676UlZXl7+9fU1Mjy3JCQsKGDRuU47m5uQaDwdrMbDYbDAaTyZSTkxMQEPDZZ58VFxfbvfSvv/4qSVJubq71SLNmzb755hsHo+3evfuaNWsaPktPECP1eDEelMbDhg1TVqNnz54XL150MFTb1LNr165Dhw6dPXt269atHTt2tP150Vsod+oi57gKOUc5Ts6xpf/+CvjenbZt20qSlJ+fb/vS5cuXg4KClAaSJMXFxVkslqKiIuXbDh06KF/k5eUZDIYBAwbExsbGxsb27t07LCwsPz8/Li4uLS3tL3/5S1RU1P/93//t37+/3vlDQkIkSTKbzcq3JSUltbW1oaGhn3zyifWdX3Xb7927Ny8vb+bMma6aO2x5MR5kWR4xYkRcXNyNGzdKS0snT548dOjQsrIytXiwNXLkyMGDB3ft2nXixIlvvvlmenq6nqWAm5BzUBc5x0d5t9pyB+X3ps8991y947W1tfUq67179/r5+Vkr6x07dijHz50717x5c5PJpHaJ8vLyN954o3Xr1srvYutq167d3/72N+Xrffv2Of49+pQpU6ZOndqw6XmQnvvrO7HhxXgoLCyUbH7R8N1336mdx/Ynrbo+++yzNm3aOJqqB3nr/vpOXNVFznEVco5ynJxjywUVi3cv7yY7d+709/d/6aWXcnJyLBbLyZMnU1NTDx48WFtbO2jQoBkzZpSUlFy7du3ee++dM2eO0qVuqMmy/NBDDyUlJV25ckWW5evXr2/ZskWW5V9++SUrK8tisciy/OGHH7Zr18429SxbtiwhISE3N7egoOC+++6bNm2a2iCvX7/eqlUr33zDoEKM1CN7NR5iYmJmz55tNpsrKipeeeWV4ODgGzdu2I7w1q1bFRUVaWlpkZGRFRUVyjlramo2bdqUl5dnMpn27dsXHx9v/TW/11Hu1EPOcQlyjvUM5Jx6KHdUHThw4KGHHgoPDw8MDLzzzjtXrVqlvAH+8uXLEyZMMBqN7du3T01NLS0tVdrXCzWTybRgwYLY2Njg4OC4uLhnnnlGluXDhw/fc889oaGhrVu3Hjhw4L/+9S/b61ZVVT399NPh4eHBwcHTpk0zm81qI3zrrbd89g2DCmFSj+y9eDh27NjIkSNbt24dGho6ePBgtX9p3n///brPXIOCgmRZrqmpGTFiRERERKtWreLi4l544YXy8nKXr0zjUO7YIufoR86xdifn1KP//hqsZ2kE5beAes4AX6bn/hIbYvPW/SWuxEbOgRr991fAtyoDAADURbkDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAER7kDAAAE10L/KZz+hVU0WcQG3IG4ghpiA2p4ugMAAASn629mAQAA+D6e7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMFR7gAAAMHp+lRlPr+yKWjcJzMRG02B5z+1i7hqCsg5UKMn5/B0BwAACM4FfzOLz2UWlf6flogNUXn3J2niSlTkHKjRHxs83QEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3AEAAIKj3JEkSerRo4fBYDAYDJGRkSkpKaWlpY04idFoPHfunMvHBu8iNuAOxBXUEBtuQrnzH1u2bJFl+dChQz/++OPq1au9PRz4EGID7kBcQQ2x4Q6UO/8jPj5+7Nixx48fV7598cUXO3fuHBoaOmjQoMOHDysHjUbj22+/PXDgwK5duz799NO2J9m3b19MTMz333/
vuXHD/YgNuANxBTXEhmtR7vwPs9mclZXVq1cv5ds777zz559/vnHjxqRJk6ZOnWr9vM6jR48eOnToxIkTu3fvzsrKqnuGr7/+Ojk5eefOnQMHDvT06OFOxAbcgbiCGmLDxWQd9J/BRyQkJISHh0dGRrZo0WL06NHl5eW2bcLDwy9fvizLckRExPfff68cnDt37po1a5SvIyIiXn/99ZiYmFOnTnls5G6l5/4SG8SGSNd1OeLKLnKOTGyo0H9/ebrzH2vWrDl8+HBaWtrBgwdzcnKUg5988kn//v07deoUGxtbUlJSVFSkHG/Tpo3yhb+/f933kf3pT3+aPHlyYmKihwcPtyI24A7EFdQQG+5AufMf4eHhHTp0mDZt2pNPPrl06VJJks6ePfvcc8+lp6dfunQpLy8vNDRUdvbH57Zs2bJt27YNGzZ4ZMjwEGID7kBcQQ2x4Q4u+Ivognn++ee7dOly7NixW7duBQUFxcfHS5KUnp5+8+ZNp307duyYlZU1bNiwwMDAJ554wv2DhUcRG3AH4gpqiA0XotypLyoqKiUlZfXq1RkZGRMnTuzTp0/btm0HDx7cqVMnLd1jY2OzsrLuv/9+f3//Rx991N2jhScRG3AH4gpqiA0XMjh9IOaos8EgSZKeM8CX6bm/xIbYvHV/iSuxkXOgRv/95b07AABAcJQ7AABAcJQ7AABAcJQ7AABAcJQ7AABAcD5R7pjN5qSkpKCgoJiYmE8//dS2gcVimTt3bpcuXYKCgu6+++49e/Y4ONvp06f9/PxmzJih1mDjxo3x8fF+fn49e/Y8ffq0bYNffvll+PDhoaGh0dHRr7zyit2TrF27tk+fPi1atFA+A0r7XNT6apyj2nU1zv2243i+irlz53bo0MHPz69Lly5qfz34wQcfNPxX165dbRtYLBbD/7J7qpMnTw4bNiwkJKR79+5ff/11g8asZS5qbbTMUa2v07nfphysp9M9rtbX6d53sE8bnce07H2nbRzsfcd9Ha+Vg75a8qRa3GrJk3qo5Vgte1wtNpzufQdr5XTvO+jrdO876Ksl76nFpNO1cnBdLXlSbV5a8mTj+ES5s3DhQovFkp+f//HHH8+ZMyc7O7teg8rKysDAwG3btl28eHHatGnjxo0rLCxUO9v8+fPvuecetVe3bNny2muv/fWvf7127VpaWlp4eLhtm+Tk5N69excVFWVlZb333ns7d+60bdOpU6fXX3993LhxDZ2LWl+Nc1S7rpa5344cz1eRnJz8ww8/FBUVbd68edWqVbt27bLbLC0traKioqKiwu5N8ff3r/iv/Pz8li1bjh8/vl6b6urqRx55ZMSIEb/99tv69eunTJly6dIl7WPWMhe1Nlrm6OD8jud+m1Kbr5Y97mCdHe99B/u00XlMy9532sbB3nfQ1+laOeirJU+qxa2WPKmH3furZY+r9dWy9x2sldO973idHe99x7HheO+r9dWyVmp9NeZJtXlpyZONpOcPbuk/gyzLFoslICDgxx9/VL6dMGHCiy++qHz95JNPPvnkk7ZdWrduvW/fPrttPv300xkzZixZsmT69OnWg3Xb9OrVKzMz0/acddsEBgZa/+ja+PHj33jjDbXxpKSkLFmypHFzqddX+xzV+tqdux567q9LYsPKdr52Y+PKlStRUVHffvutbZtRo0ZlZGTYntnueTZs2DBw4EDbNidOnGjZsmV1dbVy/He/+93q1avVzqN2f7XMxUFsOJijWl+1uevh2vur57q289Wyx9X6at/7Cus+1ZnH1I5r7Os076n11b5Wtn0btFZ149bBWrk25zjYR2p7XK1vg/a+wvb+asxjdvvKGva+bd8G5T216zpdq3p9G7pW9ealsF0r/TnH+5+qnJubW1FR0bt3b+Xb3r17HzlyRPl61KhRtu3PnTtXWlras2dP2zbFxcUrVqzYv3//unXr6naxtikrKzt16lR2dnb79u2bN2/+2GOPvfbaa82bN693nocffvjvf/977969z58//+OPPy5btszBePTMRY2DOapRm7uo6q3Jc889l5aWVlxcvG7dukGDBtlts2TJksWLFycmJq5cuXLgwIF22yg2b948c+ZMtWtZ1dbWnjx50nEbLTT21TJHNXbnLiSNe1xNg/Z+3X2qM4+pHdfS12neU+vb0LWqd12Na2Ubtw7WymM07nE1Tve+2v2tR2Nf7Xvftq/2vKc2Zi1r5WC+DtbK7rzcSE+tpP8Msiz/8MMPfn5+1m/Xrl37wAMPqDUuKysbMGDAihUr7L66YMGCt956S5ZltSccZ86ckSRpxIgRRUVFZ8+ejY+P//Of/2zbLC8vT/nTJJIkvfTSSw4GX68CbdBc1H7ycDxHtb5O594Ieu6vS2LDyvGTMFmWzWbzhQsXPvroo/Dw8FOnTtk2+PLLLw8fPpydnf3iiy+GhIRcuHBB7VTZ2dmtWrX67bffbF+qqqqKjY19+eWXy8vLv/zyy+bNm48fP76hY3Y6F7U2Tueo1lf73LVz7f3Vc91689W4x+32lRuy9+vtU5fkMS1737aN9r1fr2+D1sr2uhrXyjZuHayVa3OO2l5zsMfV+jZo76vdRy17325fjXvftq/2va82Zi1rVa+v9rVyMC93PN3x/nt3goODKysrq6qqlG+Li4uDg4PttqysrHzkkUd69eq1fPly21ePHTu2e/fuhQsXOrhWQECAJEl/+MMfIiIiunbtOnv2bNt3UVVWVg4fPnzu3LkWiyUnJ+eLL754//33XT4XNY7nqEbL3MUWGhrauXPnJ554YsSIEenp6bYNxowZ07dv3549e77++us9e/b85ptv1E61efPmsWPHtmnTxvalli1b7tixY/fu3ZGRkatWrRo3blx0dLQrp+GQ0zmq0T53AWjZ42q0733bfao/j2nZ+7ZttO99277a18q2r/a1so1b/XlSJwd7XI32vd+4HO64r5a9b7evxr3vYMxO18q2r/a1anROaxzv/zIrLi7O39//+PHjd999tyRJJ06c6NWrl22z6urqpKSk8PDwTZs2KX87o55///vfubm57du3lySpvLy8trY2Ozv78OHDddtER0eHh4dbu9s9T25ubm5u7tNPP+3n5xcXF5eUlJSVlZWamurCuahxOkc1WubedLRq1cppg5qaGrsv1dbWpqenv/fee2p977rrrgMHDihf9+vXb8KECY0epx5O5+igo9rcxaBlj6vRuPft7lOdeUzL3rfbRuPet9tX41rZ7du4PKnErc48qZPTPa5Gy95vdA7X3tfu3tfSV23vO+jrdK3U+jYiTzY6p2nn/ac7fn5+U6ZMWblypdls3rt37z//+c/p06crL82aNWvWrFmSJNXU1EyfPr26uvqjjz6qrq62WCy1tbX12jz++ONnz549evTo0aNHH3/88dGjR1t/UrG2MRgMycnJb7/9tslkunDhwqZNm8aOHVuvTWxsbFhY2AcffFBdXX3p0qVt27b16dOnXhtJkm7dumWxWGpqampqapQvNM5Fra+WOar1dTD3253d+Up11sRkMm3YsCEvL++333
5LT0//6quvHn744XptSkpKMjMzr169WlhY+O677/78888jR46s10axe/fuysrK0aNH1x1D3TbffffdtWvX8vPzlyxZUlZWNnXqVNs2amN2Ohe1NlrmqNbXwdxvd3bnq2WPq/XVsvfV9qmePKZl76u10ZL31PpqWSu1vlrWSi1uHayVW2ND4XSPq/V1uvcd3Eene1+tr5a9r9ZXS95zMGana+Wgr9O1cjAvB/dOLz2/CdN/BoXJZJowYUJAQECnTp3S09Otx0eOHPnRRx/Jsnz+/Pl6w7a+29zapq56v8Ou26a8vDwlJSUkJKRDhw5//OMfa2pqbNvs2bNnwIABQUFBkZGR8+bNq6iosHfzt+cAAAHsSURBVG2zZMmSuuN59913Nc5Fra/GOapdV23ueui5v66KDbX5WtekuLh41KhRrVu3DgoK6t+//86dO619rW3MZvPQoUNDQ0ODg4MHDRq0e/du2zaKRx99dP78+fXGULfNCy+8EBYWFhAQMGbMmPPnz9ttozZmp3NRa6Nljmp9HcxdD1fdXz3XVVtPLXtcra/Tve9gnzY6j2nZ+w7aWKnlPQd9na6Vg75O18pB3KqtlZ640hIbsoY9rtbX6d53sFZO975aXy17X62vlrznOK4cr5WDvk7XysG81NZKT2z85wy6Ouu+vAPV1dW9evWqqqq6jdr4Wl+dXJV6XM7X7jux4fvXvR3vUVPrK3sp59yOa9XU+squyDkG61kaQfldnZ4zwJfpub/Ehti8dX+JK7GRc6BG//31/nt3AAAA3IpyBwAACI5yBwAACI5yBwAACI5yBwAACM4Fn6rc0M+ORNNBbMAdiCuoITaghqc7AABAcLo+dwcAAMD38XQHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAIjnIHAAAI7v8BE+cBiPwLm7cAAAAASUVORK5CYII="
-/>
+![Binding to sockets and block:cyclic distribution](misc/mpi_socket_block_cyclic.png)
+{: align="center"}
+
+!!! example "Binding to sockets and block:cyclic distribution"
 
+    ```bash
     #!/bin/bash
     #SBATCH --nodes=2
     #SBATCH --tasks-per-node=16
     #SBATCH --cpus-per-task=1
 
     srun --ntasks 32 --cpu_bind=sockets --distribution=block:cyclic ./application
+    ```
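+
+Whether the tasks really alternate between the nodes can be checked by letting each rank report the
+node it runs on. This is only a sketch; `SLURMD_NODENAME` is set by Slurm on every compute node:
+
+```bash
+# each task prints its rank and the node it was placed on
+srun --ntasks 32 --cpu_bind=sockets --distribution=block:cyclic \
+    bash -c 'echo "rank ${SLURM_PROCID} -> ${SLURMD_NODENAME}"'
+```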
 
 ## Hybrid Strategies
 
 ### Default Binding and Distribution Pattern
 
-The default binding pattern of hybrid jobs will split the cores
-allocated to a rank between the sockets of a node. The example shows
-that Rank 0 has 4 cores at its disposal. Two of them on first socket
-inside the first node and two on the second socket inside the first
-node.
+The default binding pattern of hybrid jobs will split the cores allocated to a rank between the
+sockets of a node. The example shows that Rank 0 has 4 cores at its disposal: two of them on the
+first socket of the first node and two on the second socket of the first node.
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3de1iUdf7/8XsQA+SoDgdhZHA4CaUlpijmYdXooJvrsbxqzXa1dCsPbFlt5qF2O2xbXV52bdtlV25c7iVrhrVXWVaEupJ2gjxUYAIDgjgcZJCDIIf7+8f9a36zjCAwM/eMn3k+/oJ77rnf9z3z5u1r7hnn1siyLAEAAIjLy9U7AAAA4FzEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCctz131mg0jtoPANccWZZVrsjMATyZPTOHszsAAEBwdp3dUaj/Cg+Aa7n2LAszB/A09s8czu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9zxXDNmzNBoNF9++aVlSURExPvvv9/3LXz//fcBAQF9Xz8zMzMtLc3f3z8iIqIfOwpACOrPnPXr1ycnJw8ZMiQ6OnrDhg2XL1/ux+5CLMQdjzZ8+PDHH39ctXJarXbdunVbtmxRrSIAt6LyzGlqanrzzTfPnj2blZWVlZW1efNm1UrD3RB3PNqKFSuKi4vfe+8925uqqqoWL14cFham0+keeeSRlpYWZfnZs2dvu+22kJCQG264IS8vz7L+xYsXV69ePXLkyNDQ0Hvuuae2ttZ2m3feeeeSJUtGjhzppMMB4OZUnjk7duyYOnXq8OHD09LSHnjgAeu7w9MQdzxaQEDAli1bnnrqqfb29m43LVy4cPDgwcXFxd9++21+fn5GRoayfPHixTqd7vz58/v37//HP/5hWf/ee+81mUwFBQXl5eXBwcHLly9X7SgAXCtcOHOOHDkyfvx4hx4NrimyHezfAlxo+vTpzz33XHt7++jRo7dv3y7Lcnh4+L59+2RZLiwslCSpurpaWTMnJ8fX17ezs7OwsFCj0Vy4cEFZnpmZ6e/vL8tySUmJRqOxrN/Q0KDRaMxm8xXr7t69Ozw83NlHB6dy1d8+M+ea5qqZI8vypk2bRo0aVVtb69QDhPPY/7fvrXa8gpvx9vZ+8cUXV65cuWzZMsvCiooKf3//0NBQ5VeDwdDa2lpbW1tRUTF8+PChQ4cqy+Pj45UfjEajRqOZMGGCZQvBwcGVlZXBwcFqHQeAa4P6M+fZZ5/dtWtXbm7u8OHDnXVUcHvEHUjz5s175ZVXXnzxRcsSnU7X3NxcU1OjTB+j0ejj46PVaqOiosxmc1tbm4+PjyRJ58+fV9aPjo7WaDTHjx8n3wC4KjVnzpNPPpmdnX3o0CGdTue0A8I1gM/uQJIk6eWXX962bVtjY6Pya0JCwqRJkzIyMpqamkwm08aNG++//34vL6/Ro0ePGzfutddekySpra1t27ZtyvqxsbHp6ekrVqyoqqqSJKmmpmbv3r22VTo7O1tbW5X37FtbW9va2lQ6PABuRp2Zs2bNmuzs7AMHDmi12tbWVv4juicj7kCSJCk1NXXOnDmW/wqh0Wj27t3b0tIyatSocePGjR079tVXX1Vuevfdd3NyclJSUmbOnDlz5kzLFnbv3h0ZGZmWlhYYGDhp0qQjR47YVtmxY4efn9+yZctMJpOfnx8nlgGPpcLMMZvN27dv//nnnw0Gg5+fn5+fX3JysjpHBzeksXwCaCB31mgkSbJnCwCuRa7622fmAJ7J/r99zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBedu/CY1GY/9GAKCPmDkA+ouzOwAAQHAaWZZdvQ8AAABOxNkdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADB2fU1g3zZlycY2FcV0BueQP2vsaCvPAEzBz2xZ+ZwdgcAAAjOAReR4IsKRWX/qyV6Q1SufSVNX4mKmYOe2N8bnN0BAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjx486PP/7461//WqvVDhkyZPTo0U888cQANjJ69Oj333+/jyvfdNNNWVlZV7wpMzMzLS3N398/IiJiALsBx3Kr3li/fn1ycvKQIUOio6M3bNhw+fLlAewM3IFb9RUzx624VW942swRPO50dXXdfvvtkZGRJ0+erK2tzcrKMhgMLtwfrVa7bt26LVu2uHAfoHC33mhqanrzzTfPnj2blZWVlZW1efNmF+4MBszd+oqZ4z7crTc8bubIdrB/C8529uxZSZJ+/PFH25vOnTu3aNGi0NDQqKiohx9+uLm5WVleX1+/evXq6OjowMDAcePGFRYWyrKcmJi4b98+5dbp06cvW7bs8uXLDQ0Nq1at0ul0Wq327rvvrqmpkWX5kUceGTx4sFar1ev1y5Ytu+Je7d69Ozw83FnH7Dj2PL/0xsB6Q7Fp06apU6c6/pgdx1XPL33FzHHGfdXhnr2h8ISZI/jZncjIyISEhFWrVv373/8uLy+3vmnhwoWDBw8uLi7+9ttv8/PzMzIylOVLly4tKys7evSo2Wx+5513AgMDLXcpKyubMmXKLbfc8s477wwePPjee+81mUwFBQXl5eXBwcHLly+XJGn79u3Jycnbt283Go3vvPOOiseK/nHn3jhy5Mj48eMdf8xwPnfuK7iWO/eGR8wc16YtFZhMpieffDIlJcXb2zsuLm737t2yLBcWFkqSVF1drayTk5Pj6+vb2dlZXFwsSVJlZWW3jSQmJj7zzDM6ne7NN99UlpSUlGg0GssWGhoaNBqN2WyWZfnGG29UqvSEV1puwg17Q5blTZs2jRo1qra21oFH6nCuen7pK2aOM+6rGjfsDdljZo74cceisbHxlVde8fLyOnHixOeff+7v72+5qbS0VJIkk8mUk5MzZMgQ2/smJiaGh4enpqa2trYqS7744gsvLy+9lZCQkB9++EFm9Nh9X/W5T29s3brVYDAYjUaHHp/jEXf6wn36ipnjbtynNzxn5gj+Zpa1gICAjIwMX1/fEydO6HS65ubmmpoa5Saj0ejj46O8wdnS0lJVVWV7923btoWGht51110tLS2SJEVHR2s0muPHjxt/UV9fn5ycLEmSl5cHPapicJPeePLJJ3ft2nXo0CG9
Xu+Eo4Ta3KSv4IbcpDc8auYI/kdy/vz5xx9/vKCgoLm5+cKFCy+88EJ7e/uECRMSEhImTZqUkZHR1NRkMpk2btx4//33e3l5xcbGpqenP/jgg1VVVbIsnzp1ytJqPj4+2dnZQUFBd9xxR2Njo7LmihUrlBVqamr27t2rrBkREVFUVHTF/ens7GxtbW1vb5ckqbW1ta2tTZWHAVfgbr2xZs2a7OzsAwcOaLXa1tZW4f9TqKjcra+YOe7D3XrD42aOa08uOVtDQ8PKlSvj4+P9/PxCQkKmTJny0UcfKTdVVFQsWLBAq9WOGDFi9erVTU1NyvILFy6sXLkyKioqMDAwJSWlqKhItvokfEdHx29/+9uJEydeuHDBbDavWbMmJiYmICDAYDCsXbtW2cLBgwfj4+NDQkIWLlzYbX/eeOMN6wff+gSmG7Ln+aU3+tUb9fX13f4wY2Nj1Xss+s9Vzy99xcxxxn3V4Va94YEzR2PZygBoNBql/IC3AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK/ibWQAAAMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAILztn8TymXZAVv0BpyBvkJP6A30hLM7AABAcBpZll29DwAAAE7E2R0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgODs+lZlvr/SEwzsm5noDU+g/rd20VeegJmDntgzczi7AwAABOeAa2bxvcyisv/VEr0hKte+kqavRMXMQU/s7w3O7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAghM27uTl5c2ZM2fYsGH+/v5jxozZuHFjc3OzCnU7OjrWrFkzbNiwoKCge++99+LFi1dcLSAgQGPFx8enra1Nhd3zWK7qB5PJtGTJEq1WGxIScttttxUVFV1xtczMzLS0NH9//4iICOvly5cvt+6TrKwsFfYZA8PMgTVmjrsRM+785z//mTVr1o033nj06NHq6updu3ZVV1cfP368L/eVZbm9vX3Apbdu3XrgwIFvv/32zJkzZWVlq1atuuJqJpOp8RcLFiyYP3++j4/PgIuidy7sh9WrV5vN5tOnT1dWVo4YMWLx4sVXXE2r1a5bt27Lli22N2VkZFhaZdGiRQPeEzgVMwfWmDnuSLaD/Vtwhs7OTp1Ol5GR0W15V1eXLMvnzp1btGhRaGhoVFTUww8/3NzcrNyamJi4cePGW265JSEhITc3t6GhYdWqVTqdTqvV3n333TU1Ncpqr776ql6vDw4OHjFixHPPPWdbPSws7O2331Z+zs3N9fb2rq+v72Vva2pqfHx8vvjiCzuP2hnseX7dpzdc2w+xsbFvvfWW8nNubq6Xl1dHR0dPu7p79+7w8HDrJffff/8TTzwx0EN3Ilc9v+7TV9aYOY7CzGHm9MQBicW15Z1BSdAFBQVXvHXy5MlLly69ePFiVVXV5MmTH3roIWV5YmLiDTfcUFtbq/w6d+7c+fPn19TUtLS0PPjgg3PmzJFluaioKCAg4Oeff5Zl2Ww2f/fdd902XlVVZV1aOaucl5fXy96+/PLL8fHxdhyuE4kxelzYD7Isb9iwYdasWSaTqaGh4b777luwYEEvu3rF0TNixAidTjd+/PiXXnrp8uXL/X8AnIK4Y42Z4yjMHGZOT4g7V/D5559LklRdXW17U2FhofVNOTk5vr6+nZ2dsiwnJia+/vrryvKSkhKNRmNZraGhQaPRmM3m4uJiPz+/PXv2XLx48YqlT58+LUlSSUmJZYmXl9fHH3/cy94mJCS8/PLL/T9KNYgxelzYD8rK06dPVx6NpKSk8vLyXnbVdvQcOHDgyy+//Pnnn/fu3RsVFWX7etFViDvWmDmOwsxRljNzbNn//Ar42Z3Q0FBJkiorK21vqqio8Pf3V1aQJMlgMLS2ttbW1iq/RkZGKj8YjUaNRjNhwoSYmJiYmJixY8cGBwdXVlYaDIbMzMy///3vERER06ZNO3ToULftBwYGSpLU0NCg/NrY2NjV1RUUFPTPf/7T8skv6/Vzc3ONRuPy5csddeyw5cJ+kGV59uzZBoPhwoULTU1NS5YsueWWW5qbm3vqB1vp6emTJ0+Oi4tbuHDhSy+9tGvXLnseCjgJMwfWmDluyrVpyxmU903/+Mc/dlve1dXVLVnn5ub6+PhYkvW+ffuU5WfOnBk0aJDZbO6pREtLy/PPPz906FDlvVhrYWFhO3fuVH4+ePBg7++j33333ffcc0//Dk9F9jy/7tMbLuyHmpoayeaNhmPHjvW0HdtXWtb27NkzbNiw3g5VRa56ft2nr6wxcxyFmaMsZ+bYckBicW15J/nggw98fX2feeaZ4uLi1tbWU6dOrV69Oi8vr6ura9KkSffdd19jY+P58+enTJny4IMPKnexbjVZlu+4445FixadO3dOluXq6up3331XluWffvopJyentbVVluUdO3aEhYXZjp6NGzcmJiaWlJSYTKapU6cuXbq0p52srq6+7rrr3PMDgwoxRo/s0n7Q6/UrV65saGi4dOnSs88+GxAQcOHCBds97OjouHTpUmZmZnh4+KVLl5RtdnZ2vvXWW0aj0Ww2Hzx4MDY21vI2v8sRd7ph5jgEM8eyBWZON8SdHh05cuSOO+4ICQkZMmTImDFjXnjhBeUD8BUVFQsWLNBqtSNGjFi9enVTU5OyfrdWM5vNa9asiYmJCQgIMBgMa9eulWU5Pz9/4sSJQUFBQ4cOTU1NPXz4sG3dy5cvP/rooyEhIQEBAUuXLm1oaOhpD//617+67QcGFcKMHtl1/XD8+PH09PShQ4cGBQVNnjy5p39p3njjDetzrv7+/rIsd3Z2zp49e/jw4dddd53BYHjqqadaWloc/sgMDHHHFjPHfswcy92ZOd3Y//xqLFsZAOVdQHu2AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK+BHlQEAAKwRdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwXnbv4mrXmEVHovegDPQV+gJvYGecHYHAAAIzq5rZgEAALg/zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARn17cq8/2VnmBg38xEb3gC9b+1i77yBMwc9MSemcPZHQAAIDgHXDPLVa/wqKtOXXt42mPlaXVdxdMeZ0+raw9Pe6w8ra49OLsDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCcw+KO2Wz29vaOiYnR6/V/+MMf+v6f8o1G4+zZs3u69cMPPzQYDDExMZmZmWrWnT9/fkhIyKJFi3pawRl1S0tLZ86cGRUVlZSU9Mknn6hWt6WlJSUlRafT6fX6bdu29XGDfUdv2F9X1N6wh5OeX0mSWlpa9Hr9unXr1Kzr7++v0+l0Ot3ixYvVrHv27NmZM2eGhYUlJSW1tra
qU7egoED3C29v77y8vD5us4/oDYfUFa03ZDtYb6G+vj4qKkqW5dbW1gkTJnz88cd93EhpaemsWbOueFN7e7vBYDAajTU1NdHR0Q0NDerUlWU5Nzc3Ozt74cKF1gudXbe4uPjo0aOyLJ86dSo8PLyzs1Oduh0dHefPn5dlua6uLjIyUvm5W93+ojccW1ek3rCHCs+vLMsbN25cvHjx2rVr1ayr1+ttF6pQd/bs2Tt27JBluby8vL29XbW6ipqamhEjRnR0dNjW7S96w+F1hekNhePfzPLx8Zk4ceKZM2ckSWpra5s1a1ZKSsq4ceMOHTokSZLRaExNTX3ooYduvfXWRx991PqOeXl5kydPrqmpsSz5+uuvExIS9Hq9VqudMWNGTk6OOnUlSZoxY0ZgYKDKx2swGCZNmiRJ0vXXXy9JUnNzszp1Bw0aFB4eLklSR0dHQECAn59fXw58AOgNesMZHPv8lpSU/Pjjj3feeafKdV1yvKWlpUajccWKFZIkjRw50tu7t+/Zd8bxvvfee3fdddegQYMG9lBcFb1Bb/x/9mQl6y1YUt7FixfHjh2bm5sry3JnZ2d9fb0sy1VVVWlpabIsl5aWBgcH19TUyLI8bdq0kpISJeXl5eWlpqaaTCbr7b/77ru///3vlZ//9Kc/bd++XZ26is8++6wvr+AdXleW5U8//XTKlClq1m1oaIiOjh40aNAbb7xxxbr9RW84o64sRG/YQ4XjXbhwYWFh4c6dO6/6Ct6xdQMCAgwGw/jx4z/55BPV6n766aczZsyYP3/+TTfdtHnzZjWPVzFz5sycnJwr1u0vesOxdUXqjf+3Bbvu/L+HPWjQIL1ef9111y1btkxZ2NXV9fTTT6elpU2fPj04OFiW5dLS0mnTpim3rly5Mjc3t7S0VK/XjxkzxnKe3KKP/6Q5vK7iqv+kOaluWVlZUlLSTz/9pHJd5V6jRo0qLy+3rdtf9Aa94QzOPt5PPvlk/fr1siz3/k+aMx5no9Eoy3J+fn5kZGRdXZ06dT/++GNfX9/CwsJLly5NnTrV8maEOn1lMpkiIyMt71bI7j1z6A3Vjld2dG8oHPlmVkREhNFoLCsr++qrr3744QdJkvbv319cXHzo0KGDBw/6+voqqw0ePFj5wcvLq6OjQ5KksLAwPz+/EydOdNtgZGTkuXPnlJ8rKysjIyPVqeuq45UkyWw233XXXdu3bx89erSadRUxMTGpqamnTp3q/4NxFfQGveEMDj/eY8eO7dmzJyYm5rHHHnv77befffZZdepKkqTX6yVJGjduXHJy8unTp9WpGxUVlZiYmJiY6Ovre+utt548eVK145Uk6b333ps3b56T3smiN+iNbhz/2Z2IiIgtW7Zs3bpVkqT6+nqDweDt7f3111+bTKae7hIUFPTBBx889thj33zzjfXyiRMnFhUVlZeX19XV5ebm9v6BeQfW7RcH1r18+fKCBQvWr18/a9YsNetWVVUp0aGiouLYsWPJyclXrT4w9Aa94QwOPN7NmzdXVFQYjca//e1vv/vd7zZt2qRO3bq6ugsXLkiSVFRUdOrUqdjYWHXq3nDDDV1dXRUVFZ2dnf/973+TkpLUqavYs2fPkiVLeqloP3qD3rBwyvfuLF68+MSJE4WFhfPmzfv666+XLl36r3/9Kzo6upe7REREZGdnP/DAA0VFRZaF3t7er7322owZM1JSUrZu3RoUFKROXUmSbrvttqVLl+7fv1+n0xUUFKhT9/PPPz98+PDTTz+t/B88o9GoTt26urrZs2dHRUXNmjXrz3/+s/JKwknoDXrDGRz4/Lqk7tmzZ1NTU6Oion7zm9+8/vrroaGh6tTVaDTbtm1LT09PSkq6/vrr586dq05dSZJMJtPp06enTZvWe0X70Rv0hkJjeUtsIHfWaCRJsmcL1BW17rW4z9SlLnWv3brX4j5TV826fKsyAAAQHHEHAAAIjrgDAAAEp17c6eUKR1e9CNGAlfZ8pSEVLgbU09VVrnoBFHv0dJUTZ1+kxh5/+ctf4uPj4+Li1q9fb3tTQkJCQkLCvn377Kxi22b96skBd2m3O/bSk7Yr29OlV9zhnnrSdmWndqk6mDkWzJxumDk9rSzyzLHnS3v6vgXbKxw1NDR0dXUpt17xIkQOqWt7pSFL3Z4uBuSQugrrq6tYH+8VL4DiqLrdrnJiXVfR7UIkjqo74PuePXtWr9e3tLS0t7enpKR88803ln3+7rvvbrrppkuXLtXV1SmT1J663dqsvz3Ze5f2vW4vPWm78lW7tO91FT31pO3KvXep/dNjYJg5vWPm9GVNZo5nzhyVzu7YXuFo7NixlZWVyq19vwhRf9leachS19kXA+p2dRXr43WeUpurnNjWdfZFavorICDA19e3ra1NuQTd8OHDLftcWFiYmprq6+s7bNiwkSNHHj582J5C3dqsvz054C7tdsdeetJ2ZXu61HaHe+lJ5/0Nugozh5nTE2aOZ84cleLOuXPnoqKilJ91Ol1lZWVWVtZVvz/AgT777LO4uLjAwEDruhcvXtTr9ZGRkevXr7/qF7f014YNG55//nnLr9Z16+rqYmNjb7755gMHDji26JkzZ3Q63YIFC8aNG7dly5ZudRUqfLVXv4SEhGRkZERHR0dGRs6bN2/UqFGWfR4zZsyRI0caGxvPnz+fn5/v2Nntnj1py4Fd2ktP2nJel6rDPZ9fZo47YOZ45szp7RqnTqWETXWUl5evXbs2Ozu7W92goKCysjKj0Thz5sw5c+aMHDnSURUPHDgQHR2dmJh49OhRZYl13VOnTun1+oKCgrlz5548eXLYsGGOqtvZ2Xns2LHvv/9er9enp6dPmjTp9ttvt16hurq6sLBw+vTpjqpov/Ly8ldffbWkpMTX1/dXv/rV3LlzLY/VmDFjVq1aNX369IiIiLS0tN4vyWs/d+hJW47q0t570pbzutRV3OH5Zea4A2aOZ84clc7u9PEKR85w1SsNOeNiQL1fXaUvF0AZmKte5cSpF6kZmIKCgptvvlmr1QYEBMycOfOrr76yvvWRRx7Jz8/fv39/fX19XFycA+u6c0/asr9L+3jFHwvndak63Pn5Zea4FjOnL8SbOSrFHdsrHG3evNlsNju7ru2Vhix1nXoxINurq1jq9usCKP1le5WTbo+zu51VliQpPj7+m2++aWpqamtrO3z4cEJCgvU+l5WVSZL04Ycfms3m1NRUB9Z1w5605cAu7aUnbTm1S9Xhhs8vM8dNMHM8dObY8znnfm3hgw8+GDVqVHR09M6dO2VZHjlyZGNjo3JTenq6Vqv18/OLiorKz893YN2PPvpo0KBBUb8oLS211D158mRSUlJkZGRCQsKuXbv6srUBPGI7d+5UPpFuqVtQUBAXFxcZGTl69Oi9e/c6vO4XX3yRlJQUHx+/bt06+X8f5/Pnz0dGRnZ2dvZxU/Z0SL/u+/zzz8fFxcXGxmZkZMj/u88TJ04MCwu7+eabT506ZWdd2zbrV0/23qV9r9tLT9qufNUu7dfxKmx70nblq3ap/dNjYJg5V8XM6Qtmjg
fOHPXijrWioqJHH32UuqLWtee+nvZYeVpdO11zx0tdderac19Pe6w8ra4FlwilrlPqXov7TF3qUvfarXst7jN11azLRSQAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAME54GsGITZ7vvILYnPVV41BbMwc9ISvGQQAAOiRXWd3AAAA3B9ndwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACC4/wNeW27o5DoAAAACSURBVCEI/r8gawAAAABJRU5ErkJggg=="
-/>
+![sockets binding and block:block distribution](misc/hybrid.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+!!! example "Binding to sockets and block:block distribution"
 
-export OMP_NUM_THREADS=4
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=4
+    #SBATCH --cpus-per-task=4
 
-srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
-```
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
+    ```
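+
+If the OpenMP threads of each rank should additionally be pinned to fixed cores instead of floating
+within the rank's CPU set, the standard OpenMP affinity variables can be added to the job script.
+This is a sketch assuming an OpenMP 4.0 (or newer) aware runtime:
+
+```bash
+# optional: place one thread per core and keep threads close to their master thread
+export OMP_PLACES=cores
+export OMP_PROC_BIND=close
+```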
 
 ### Core Bound
 
@@ -195,36 +232,37 @@ srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
 
 This method allocates the tasks linearly to the cores.
 
-\<img alt=""
-src="<data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3df1RUdf7H8TuIIgy/lOGXICgoGPZjEUtByX54aK3VVhDStVXyqMlapLSZfsOf7UnUU2buycgs5XBytkzdPadadkWyg9nZTDHN/AEK/uLn6gy/HPl1v3/csxyOMIQMd2b4zPPxF3Pnzr2f952Pb19zZ+aORpZlCQAAQFxOth4AAACAuog7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDhnSx6s0Wj6ahwA+h1Zlq28R3oO4Mgs6Tmc3QEAAIKz6OyOwvqv8ADYlm3PstBzAEdjec/h7A4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB3H9dhjj2k0mu+++659SUBAwMGDB3u+haKiInd3956vn5OTExcXp9VqAwIC7mGgAIRg/Z6zfPnyqKgoNze3kJCQFStWNDU13cNwIRbijkPz8fF57bXXrLY7nU63bNmydevWWW2PAOyKlXtOfX19dnb21atX9Xq9Xq9fu3at1XYNe0PccWgLFy4sKSn54osvOt9VXl6enJzs5+cXHBz80ksvNTY2KsuvXr361FNPeXt733///UePHm1fv7a2Ni0tbfjw4b6+vrNnz66pqem8zaeffjolJWX48OEqlQPAzlm55+zcuTM+Pt7HxycuLu6FF17o+HA4GuKOQ3N3d1+3bt2qVauam5vvuispKWngwIElJSXHjx8/ceJERkaGsjw5OTk4OLiiouKrr7764IMP2tefO3duZWXlyZMnr1y54uXllZqaarUqAPQXNuw5hYWFMTExfVoN+hXZApZvATY0ZcqUN998s7m5ecyYMdu3b5dl2d/f/8CBA7Isnzt3TpKkqqoqZc38/PzBgwe3traeO3dOo9HcvHlTWZ6Tk6PVamVZvnTpkkajaV/faDRqNBqDwdDlfvfu3evv7692dVCVrf7t03P6NVv1HFmW16xZM3LkyJqaGlULhHos/7fvbO14BTvj7OyclZW1aNGiefPmtS+8du2aVqv19fVVboaFhZlMppqammvXrvn4+AwZMkRZPnr0aOWP0tJSjUbz8MMPt2/By8vr+vXrXl5e1qoDQP9g/Z6zYcOG3NzcgoICHx8ftaqC3SPuQHr22WfffvvtrKys9iXBwcENDQ3V1dVK9yktLXVxcdHpdEFBQQaD4c6dOy4uLpIkVVRUKOuHhIRoNJpTp06RbwD8Kmv2nJUrV+7fv//IkSPBwcGqFYR+gM/uQJIkacuWLdu2baurq1NuRkRETJw4MSMjo76+vrKyMjMzc/78+U5OTmPGjImOjt66daskSXfu3Nm2bZuyfnh4eEJCwsKFC8vLyyVJqq6u3rdvX+e9tLa2mkwm5T17k8l0584dK5UHwM5Yp+ekp6fv378/Ly9Pp9OZTCa+iO7IiDuQJEmaMGHCM8880/5VCI1Gs2/fvsbGxpEjR0ZHRz/44IPvvPOOctfnn3+en58/bty4J5544oknnmjfwt69e4cNGxYXF+fh4TFx4sTCwsLOe9m5c6erq+u8efMqKytdXV05sQw4LCv0HIPBsH379osXL4aFhbm6urq6ukZFRVmnOtghTfsngHrzYI1GkiRLtgCgP7LVv316DuCYLP+3z9kdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOGfLN6HRaCzfCAD0ED0HwL3i7A4AABCcRpZlW48BAABARZzdAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQnEWXGeRiX46gd5cqYG44AutfxoJ55QjoOTDHkp7D2R0AACC4PvgRCS5UKCrLXy0xN0Rl21fSzCtR0XNgjuVzg7M7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAAQnftw5e/bs9OnTdTqdm5vbmDFjXn/99V5sZMyYMQcPHuzhyr/5zW/0en2Xd+Xk5MTFxWm12oCAgF4MA33LrubG8uXLo6Ki3NzcQkJCVqxY0dTU1IvBwB7Y1byi59gVu5objtZzBI87bW1tv/3tb4cNG3b69Omamhq9Xh8WFmbD8eh0umXLlq1bt86GY4DC3uZGfX19dnb21atX9Xq9Xq9fu3atDQeDXrO3eUXPsR/2NjccrufIFrB8C2q7evWqJElnz57tfNeNGzdmzZrl6+sbFBS0dOnShoYGZfmtW7fS0tJCQkI8PDyio6PPnTsny3JkZOSBAweUe6dMmTJv3rympiaj0bhkyZLg4GCdTvfcc89VV1fLsvzSSy8NHDhQp9OFhobOmzevy1Ht3bvX399frZr7jiXPL3Ojd3NDsWbNmvj4+L6vue/Y6vllXtFz1Hisddjn3FA4Qs8R/OzOsGHDIiIilixZ8re//e3KlSsd70pKSho4cGBJScnx48dPnDiRkZGhLJ8zZ05ZWdmxY8cMBsOePXs8PDzaH1JWVjZp0qTJkyfv2bNn4MCBc+fOraysPHny5JUrV7y8vFJTUyVJ2r59e1RU1Pbt20tLS/fs2WPFWnFv7HluFBYWxsTE9H3NUJ89zyvYlj3PDYfoObZNW1ZQWVm5cuXKcePGOTs7jxo1au/evbIsnzt3TpKkqqoqZZ38/PzBgwe3traWlJRIknT9+vW7NhIZGbl69erg4ODs7GxlyaVLlzQaTfsWjEajRqMxGAyyLD/00EPKXszhlZadsMO5IcvymjVrRo4cWVNT04eV9jlbPb/MK3qOGo+1GjucG7LD9Bzx4067urq6t99+28nJ6aeffjp06JBWq22/6/Lly5IkVVZW5ufnu7m5dX5sZGSkv7//hAkTTCaTsuTw4cNOTk6hHXh7e//8888yrcfix1qf/cyN9evXh4WFlZaW9ml9fY+40xP2M6/oOfbGfuaG4/Qcwd/M6sjd3T0jI2Pw4ME//fRTcHBwQ0NDdXW1cldpaamLi4vyBmdjY2N5eXnnh2/bts3X13fGjBmNjY2SJIWEhGg0mlOnTpX+z61bt6KioiRJcnJyoKMqBjuZGytXrszNzT1y5EhoaKgKVcL
a7GRewQ7ZydxwqJ4j+D+SioqK11577eTJkw0NDTdv3ty4cWNzc/PDDz8cERExceLEjIyM+vr6ysrKzMzM+fPnOzk5hYeHJyQkLF68uLy8XJblM2fOtE81FxeX/fv3e3p6Tps2ra6uTllz4cKFygrV1dX79u1T1gwICDh//nyX42ltbTWZTM3NzZIkmUymO3fuWOUwoAv2NjfS09P379+fl5en0+lMJpPwXwoVlb3NK3qO/bC3ueFwPce2J5fUZjQaFy1aNHr0aFdXV29v70mTJn355ZfKXdeuXUtMTNTpdIGBgWlpafX19crymzdvLlq0KCgoyMPDY9y4cefPn5c7fBK+paXlj3/84yOPPHLz5k2DwZCenj5ixAh3d/ewsLBXXnlF2cI333wzevRob2/vpKSku8azY8eOjge/4wlMO2TJ88vcuKe5cevWrbv+YYaHh1vvWNw7Wz2/zCt6jhqPtQ67mhsO2HM07VvpBY1Go+y+11uAPbPk+WVuiM1Wzy/zSmz0HJhj+fMr+JtZAAAAxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgnO2fBPKz7IDnTE3oAbmFcxhbsAczu4AAADBaWRZtvUYAAAAVMTZHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Cy6qjLXr3QEvbsyE3PDEVj/ql3MK0dAz4E5lvQczu4AAADB9cFvZnFdZlFZ/mqJuSEq276SZl6Jip4DcyyfG5zdAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAEJ2zcOXr06DPPPDN06FCtVvvAAw9kZmY2NDRYYb8tLS3p6elDhw719PScO3dubW1tl6u5u7trOnBxcblz544VhuewbDUfKisrU1JSdDqdt7f3U089df78+S5Xy8nJiYuL02q1AQEBHZenpqZ2nCd6vd4KY0bv0HPQET3H3ogZd/7xj388+eSTDz300LFjx6qqqnJzc6uqqk6dOtWTx8qy3Nzc3Otdr1+/Pi8v7/jx48XFxWVlZUuWLOlytcrKyrr/SUxMnDlzpouLS693iu7ZcD6kpaUZDIYLFy5cv349MDAwOTm5y9V0Ot2yZcvWrVvX+a6MjIz2qTJr1qxejwSqouegI3qOPZItYPkW1NDa2hocHJyRkXHX8ra2NlmWb9y4MWvWLF9f36CgoKVLlzY0NCj3RkZGZmZmTp48OSIioqCgwGg0LlmyJDg4WKfTPffcc9XV1cpq77zzTmhoqJeXV2Bg4Jtvvtl5735+fh9//LHyd0FBgbOz861bt7oZbXV1tYuLy+HDhy2sWg2WPL/2MzdsOx/Cw8M/+ugj5e+CggInJ6eWlhZzQ927d6+/v3/HJfPnz3/99dd7W7qKbPX82s+86oie01foOfQcc/ogsdh292pQEvTJkye7vDc2NnbOnDm1tbXl5eWxsbEvvviisjwyMvL++++vqalRbv7ud7+bOXNmdXV1Y2Pj4sWLn3nmGVmWz58/7+7ufvHiRVmWDQbDjz/+eNfGy8vLO+5aOat89OjRbka7ZcuW0aNHW1CuisRoPTacD7Isr1ix4sknn6ysrDQajc8//3xiYmI3Q+2y9QQGBgYHB8fExGzatKmpqeneD4AqiDsd0XP6Cj2HnmMOcacLhw4dkiSpqqqq813nzp3reFd+fv7gwYNbW1tlWY6MjPzrX/+qLL906ZJGo2lfzWg0ajQag8FQUlLi6ur62Wef1dbWdrnrCxcuSJJ06dKl9iVOTk5ff/11N6ONiIjYsmXLvVdpDWK0HhvOB2XlKVOmKEfjvvvuu3LlSjdD7dx68vLyvvvuu4sXL+7bty8oKKjz60VbIe50RM/pK/QcZTk9pzPLn18BP7vj6+srSdL169c733Xt2jWtVqusIElSWFiYyWSqqalRbg4bNkz5o7S0VKPRPPzwwyNGjBgxYsSDDz7o5eV1/fr1sLCwnJyc999/PyAg4NFHHz1y5Mhd2/fw8JAkyWg0Kjfr6ura2to8PT13797d/smvjusXFBSUlpampqb2Ve3ozIbzQZblqVOnhoWF3bx5s76+PiUlZfLkyQ0NDebmQ2cJCQmxsbGjRo1KSkratGlTbm6uJYcCKqHnoCN6jp2ybdpSg/K+6auvvnrX8ra2truSdUFBgYuLS3uyPnDggLK8uLh4wIABBoPB3C4aGxvfeuutIUOGKO/FduTn5/fJJ58of3/zzTfdv4/+3HPPzZ49+97KsyJLnl/7mRs2nA/V1dVSpzcavv/+e3Pb6fxKq6PPPvts6NCh3ZVqRbZ6fu1nXnVEz+kr9BxlOT2nsz5ILLbdvUr+/ve/Dx48ePXq1SUlJSaT6cyZM2lpaUePHm1ra5s4ceLzzz9fV1dXUVExadKkxYsXKw/pONVkWZ42bdqsWbNu3Lghy3JVVdXnn38uy/Ivv/ySn59vMplkWd65c6efn1/n1pOZmRkZGXnp0qXKysr4+Pg5c+aYG2RVVdWgQYPs8wODCjFaj2zT+RAaGrpo0SKj0Xj79u0NGza4u7vfvHmz8whbWlpu376dk5Pj7+9/+/ZtZZutra0fffRRaWmpwWD45ptvwsPD29/mtznizl3oOX2CntO+BXrOXYg7ZhUWFk6bNs3b29vNze2BBx7YuHGj8gH4a9euJSYm6nS6wMDAtLS0+vp6Zf27pprBYEhPTx8xYoS7u3tYWNgrr7wiy/KJEyceeeQRT0/PIUOGTJgw4dtvv+2836amppdfftnb29vd3X3OnDlGo9HcCDdv3my3HxhUCNN6ZNvNh1OnTiUkJAwZMsTT0zM2Ntbc/zQ7duzoeM5Vq9XKstza2jp16lQfH59BgwaFhYWtWrWqsbGxz49M7xB3OqPnWI6e0/5wes5dLH9+Ne1b6QXlXUBLtgB7Zsnzy9wQm62eX+aV2Og5MMfy51fAjyoDAAB0RNwBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAATnbPkmfvUXVuGwmBtQA/MK5jA3YA5ndwAAgOAs+s0sAAAA+8fZHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Cy6qjLXr3QEvbsyE3PDEVj/ql3MK0dAz4E5lvQczu4AAADB9cFvZjnOdZmVVw+OVq8lHO1YOVq9tuJox9nR6rWEox0rR6vXEpzdAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOJvFnccee0yj0Wg0Gg8Pj0ceeSQvL6/XmxozZszBgwe7WaGlpSU9PX3o0KGenp5z586tra3t9b56zZr15uTkxMXFabXagICAXu/Fhqx5rJYvXx4VFeXm5hYSErJixYqmpqZe76vXrFlvZmbmyJEjXVxcfHx8ZsyYUVxc3Ot99TvWPM6KlpaW6OhojUZTUVHR6331mjXrTU1N1XSg1+t7vS+bsPLc+N
e//jVhwoTBgwf7+vquWLGi1/vqNWvW6+7u3nFuuLi43Llzp9e7s4Qtz+783//9X3Nz89WrV6dOnTpz5syamhqVdrR+/fq8vLzjx48XFxeXlZUtWbJEpR11z2r16nS6ZcuWrVu3TqXtW4HVjlV9fX12dvbVq1f1er1er1+7dq1KO+qe1eqdPn16fn5+TU3N8ePHnZyc5s+fr9KO7JPVjrMiKyvLx8dH1V10z5r1ZmRk1P3PrFmz1NuRSqx2rA4fPpyUlLRw4cKysrITJ05Mnz5dpR11z2r1VlZWtk+MxMTEmTNnuri4qLSv7tky7mg0GmdnZ29v71deeeX27du//PKLsvydd96JjIz08PAYMWLEW2+91b7+mDFj1q1b9/jjj99///3jx48/ffr0XRs0GAyPPfbY/Pnzm5ubOy7/8MMPV65cGRYW5ufn95e//OXzzz83GAxqV9eZ1ep9+umnU1JShg8frnZF6rHasdq5c2d8fLyPj09cXNwLL7xw9OhRtUvrktXqnTBhQlhYmIeHR3Bw8LBhw7y9vdUuza5Y7ThLknT27Nndu3dv3LhR1Yq6Z816Bw4c6P4/zs59cEU3K7PasVq9evXSpUsXLVrk7+8/fPjw+Ph4tUvrktXq1Wq1yqwwmUxffvnliy++qHZp5tjFZ3f0er2rq2tkZKRyMzg4+J///Gdtbe2BAwfee++9L774on3NL7/88sCBA2fOnElOTl66dGnHjZSVlU2aNGny5Ml79uwZOHBg+/KKioqqqqro6GjlZkxMTEtLy9mzZ9UvyyxV6xWMNY9VYWFhTEyMSoX0kBXqzcnJCQgI8PDwOH369Keffqp2RfZJ7ePc2tq6YMGCrVu3enh4WKGcX2WdeTV8+PDx48dv3ry5cxjqR1Q9ViaT6fvvv29tbb3vvvuGDBny5JNP/vTTT9apyxyr9djdu3eHhIQ8/vjj6tXyK2QLWLKFKVOmaLVaf39/JfodPny4y9VWrFiRlpam/B0ZGblz507l77Nnz7q6urYvX716dXBwcHZ2ductXLhwQZKkS5cutS9xcnL6+uuvezHmflFvu7179/r7+/dutApL6u1fx0qW5TVr1owcObKmpqZ3Y+5H9TY2Nt64cePbb7+Njo5euHBh78Zsefew/n6teZy3bNmSnJwsy7Lyorm8vLx3Y+4v9ebl5X333XcXL17ct29fUFBQRkZG78YsfM8pLy+XJGnkyJFnzpypr69ftmxZUFBQfX19L8bcL+rtKCIiYsuWLb0bsNwXPceWZ3cWLVpUVFT07bffRkVFffLJJ+3LDx48+Oijj4aEhISGhn744YfV1dXtd+l0OuUPV1fX27dvt7S0KDc//PDDoKCgLj+IoLy6MhqNys26urq2tjZPT0+ViuqGdeoVg5WP1YYNG3JzcwsKCmz1SQtr1uvq6hoYGBgfH79t27Zdu3Y1NjaqU5M9ss5xLi4u3rp16/bt29UspUesNq8SEhJiY2NHjRqVlJS0adOm3Nxc1WpSi3WOlbu7uyRJaWlpY8eO1Wq1GzdurKio+PHHH1UszAwr99iCgoLS0tLU1NS+r6THbBl3lK8OjRs37tNPP9Xr9YWFhZIklZeXp6SkrF27tqysTPlYsdyD3wTZtm2br6/vjBkzOvfugIAAPz+/oqIi5eaJEyecnZ2joqL6vJxfZZ16xWDNY7Vy5crc3NwjR46Ehob2cRk9Zqu5MWDAgAEDBvRBAf2EdY5zYWFhTU3N2LFjdTpdbGysJEljx47dtWuXGhV1zybzatCgQe3/EfYj1jlW7u7uo0aNav/5Jxv+9pyV50Z2dnZiYmJ7YLIJu/jsTnh4+IIFC1avXi1JUl1dnSRJDz74oEajuXHjRg8/W+Di4rJ//35PT89p06YpW+ho8eLFWVlZly9frqqqWr16dXJysm0/oal2va2trSaTSXn73GQy2epbf31C7WOVnp6+f//+vLw8nU5nMpls8kX0jlStt7m5OSsr6/z580aj8YcffsjIyHj22Wdt9S0J21L1OKekpJSUlBQVFRUVFSnf0T106NDs2bNVqKOnVK23ra1t165dZWVlRqPxyJEjq1atSk5OVqMK61C75/zpT3/asWPHhQsXTCZTZmbmsGHDxo8f3+dV9Jza9UqSVF1dfeDAgcWLF/ftyO+VXcQdSZLeeOONY8eOHT58OCIiYu3atZMmTZo0adKSJUsSEhJ6uIWBAwfq9frQ0NCpU6feunWr411r1qxJSEgYN25ceHh4cHDwBx98oEIF90bVenfu3Onq6jpv3rzKykpXV1fbfhXWcuodK4PBsH379osXL4aFhbm6urq6utrktN9d1KtXo9EcO3ZsypQpfn5+KSkp8fHxH3/8sTpF9APqHWc3N7fg//H395ckKTAwUKvVqlJGj6nac/R6fUxMjJ+f34IFC1JSUrZu3apCBdaj6rFatmzZ888//+ijj/r7+584ceKrr75yc3NToYh7oGq9kiTt3r07NDTUlh9SliRJkjQ9OVVl9sEO+QP01Kv2Y/sj6hV7v7ZCvdZ5bH9EvffKXs7uAAAAqIS4AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcPYbd1paWtLT04cOHerp6Tl37tza2lpza2ZmZo4cOdLFxcXHx2fGjBnFxcXWHGffamlpiY6O1mg0FRUV5tZxd3fXdODi4tKvLyTYQ5WVlSkpKTqdztvb+6mnnjp//nyXq+Xk5MTFxSkXDO35XXbC3AiXL18eFRXl5uYWEhKyYsWKbq6FaG4LqampHeeMXq9Xq4b+jJ5jbh16Dj3nXrdghz3HfuPO+vXr8/Lyjh8/XlxcrFzN2tya06dPz8/Pr6mpOX78uJOTU7/+JamsrKxfvSpgZWVl3f8kJibOnDnTES6Mm5aWZjAYLly4cP369cDAQHOXbdXpdMuWLVu3bt093WUnzI2wvr4+Ozv76tWrer1er9evXbv2XrcgSVJGRkb7tJk1a1afDlwQ9Bxz6Dn0nHvdgmSHPceS3xe1fAvd8PPz+/jjj5W/CwoKnJ2db9261f1Dmpqa0tLSnn76aZWGpGq9siz//PPP4eHh//nPf6Se/YRydXW1i4uLuR+ztZwl9fb5sQoPD//oo4+UvwsKCpycnFpaWsyt3M2vwVv+Q/Fd6sN6ux/hmjVr4uPj73UL8+fPf/311/tkeAq1/y3YZL/0nF9dn55jbmV6jv33HDs9u1NRUVFVVRUdHa3cjImJaWlpOXv2rLn1c3JyAgICPDw8Tp8+3cOf+bA3ra2tCxYs2Lp1q/IT7j2xe/fukJAQm1+Z2zqSkpL27t1bVVVVW1u7a9eu3//+9w7125btCgsLY2JievHAnJyc4cOHjx8/fvPmzcrvqaEjek5P0HNsPSgbEKbn2GncUX5mzMvLS7np4eHh5OTUzVvpycnJJ0+e/Pe//93Q0PDnP//ZSqPsU1u3bg0JCZk+fXrPH7Jz506b/+ia1bzxxhstLS3+/v5eXl4//vjju+++a+sR2cDatWsvX76cmZl5rw/8wx/+8MUXXxQUFKxateq9995buXKlG
sPr1+g5PUHPcTQi9Rw7jTvKqw2j0ajcrKura2tr8/T0lCRp9+7d7Z9+al/f1dU1MDAwPj5+27Ztu3bt6uZn6O1TcXHx1q1bt2/f3vmuLuuVJKmgoKC0tDQ1NdVKQ7QpWZanTp0aFhZ28+bN+vr6lJSUyZMnNzQ0mDs4QtqwYUNubm5BQUH7Jy16Xn5CQkJsbOyoUaOSkpI2bdqUm5ur/nj7GXpOO3qORM+RJEm4nmOncScgIMDPz6+oqEi5eeLECWdnZ+XXqlNTU+96M+8uAwYM6HenHAsLC2tqasaOHavT6WJjYyVJGjt27K5duyTz9WZnZycmJup0OtuM2Lr++9///vDDD+np6UOGDDMTiKgAAAKYSURBVNFqta+++uqVK1fOnDnzq5NBGCtXrszNzT1y5EhoaGj7wt6VP2jQoJaWFhXG2L/Rc+g5HdFzxOs5dhp3JElavHhxVlbW5cuXq6qqVq9enZyc7O3t3Xm15ubmrKys8+fPG43GH374ISMj49lnn+133xpISUkpKSkpKioqKio6ePCgJEmHDh2aPXu2ufWrq6sPHDjgOGeVdTpdaGjo+++/X1tbazKZ3n33XXd394iIiM5rtra2mkwm5X1ik8nU8euy3dxlJ8yNMD09ff/+/Xl5eTqdzmQydfOl0C630NbWtmvXrrKyMqPReOTIkVWrVpn7jomDo+fQc9rRcwTsOZZ8ztnyLXSjqanp5Zdf9vb2dnd3nzNnjtFo7HK15ubmGTNm+Pv7Dxo0aMSIEcuXLze3puVUrbfdL7/8Iv3atyQ2b948evRotUdiSb19fqxOnTqVkJAwZMgQT0/P2NhYc98N2bFjR8fprdVqe3KX5fqk3i5HeOvWrbv+zYaHh9/TFlpbW6dOnerj4zNo0KCwsLBVq1Y1NjZaOFTr/Fuw8n7pOd2sQ8+h5/R8C/bZczSyBWfklHfvLNlC/0K91nlsf0S9Yu/XVqjXOo/tj6j3Xtnvm1kAAAB9grgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwzpZvwhGupd2Ro9VrCUc7Vo5Wr6042nF2tHot4WjHytHqtQRndwAAgOAsuswgAACA/ePsDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAAT3/8Z/zKE559m+AAAAAElFTkSuQmCC>"
-/>
+![Binding to cores and block:block distribution](misc/hybrid_cores_block_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+!!! example "Binding to cores and block:block distribution"
 
-export OMP_NUM_THREADS=4
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=4
+    #SBATCH --cpus-per-task=4
 
-srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=block:block ./application
-```
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=block:block ./application
+    ```
 
 #### Distribution: cyclic:block
 
-The cyclic:block distribution will allocate the tasks of your job in
-alternation between the first node and the second node while filling the
-sockets linearly.
+The `cyclic:block` distribution will allocate the tasks of your job in alternation between the first
+node and the second node while filling the sockets linearly.
 
-\<img alt=""
-src="data:;base64,[... base64-encoded PNG data truncated ...]"
-/>
+![Binding to cores and cyclic:block distribution](misc/hybrid_cores_cyclic_block.png)
+{: align="center"}
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+!!! example "Binding to cores and cyclic:block distribution"
 
-export OMP_NUM_THREADS=4<br /><br />srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=cyclic:block ./application
-```
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=4
+    #SBATCH --cpus-per-task=4
+
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS --cpu_bind=cores --distribution=cyclic:block ./application
+    ```
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
index ea3343fe1a5d21a296207fc374aa181e3ccc0855..38d6686d7a655c1c5d7161d6607be9d6f55d8b5c 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
@@ -12,6 +12,15 @@ from the very beginning, you should be familiar with the concept of checkpointin
 Another motivation is to use checkpoint/restart to split long running jobs into several shorter
 ones. This might improve the overall job throughput, since shorter jobs can "fill holes" in the job
 queue.
+Here is an extreme example from the literature of how large computing resources can be wasted when
+checkpoints are missing:
+
+!!! cite "Adams, D. The Hitchhiker's Guide to the Galaxy"
+
+    Earth was a supercomputer constructed to find the question to the answer to Life, the Universe,
+    and Everything by a race of hyper-intelligent pan-dimensional beings. Unfortunately, 10 million years
+    later, and five minutes before the program had run to completion, the Earth was destroyed by
+    Vogons.
 
 If you wish to do checkpointing, your first step should always be to check if your application
 already has such capabilities built-in, as that is the most stable and safe way of doing it.
@@ -21,7 +30,7 @@ Abaqus, Amber, Gaussian, GROMACS, LAMMPS, NAMD, NWChem, Quantum Espresso, STAR-C
 
 In case your program does not natively support checkpointing, there are attempts at creating generic
 checkpoint/restart solutions that should work application-agnostic. One such project which we
-recommend is [Distributed MultiThreaded CheckPointing](http://dmtcp.sourceforge.net) (DMTCP).
+recommend is [Distributed Multi-Threaded Check-Pointing](http://dmtcp.sourceforge.net) (DMTCP).
 
 DMTCP is available on ZIH systems after having loaded the `dmtcp` module
 
@@ -47,8 +56,8 @@ checkpoint/restart bits transparently to your batch script. You just have to spe
 total runtime of your calculation and the interval in which you wish to do checkpoints. The latter
 (plus the time it takes to write the checkpoint) will then be the runtime of the individual jobs.
 This should be targeted at below 24 hours in order to be able to run on all
-[haswell64 partitions](../jobs_and_resources/system_taurus.md#run-time-limits). For increased
-fault-tolerance, it can be chosen even shorter.
+[partitions haswell64](../jobs_and_resources/partitions_and_limits.md#runtime-limits). For
+increased fault-tolerance, it can be chosen even shorter.
 
 To use it, first add a `dmtcp_launch` before your application call in your batch script. In the case
 of MPI applications, you have to add the parameters `--ib --rm` and put it between `srun` and your
@@ -85,7 +94,7 @@ about 2 days in total.
 
 !!! Hints
 
-    - If you see your first job running into the timelimit, that probably
+    - If you see your first job running into the time limit, that probably
     means the timeout for writing out checkpoint files does not suffice
     and should be increased. Our tests have shown that it takes
     approximately 5 minutes to write out the memory content of a fully
@@ -95,7 +104,7 @@ about 2 days in total.
     content is rather incompressible, it might be a good idea to disable
     the checkpoint file compression by setting: `export DMTCP_GZIP=0`
     - Note that all jobs the script deems necessary for your chosen
-    timelimit/interval values are submitted right when first calling the
+    time limit/interval values are submitted right when first calling the
     script. If your applications take considerably less time than what
     you specified, some of the individual jobs will be unnecessary. As
     soon as one job does not find a checkpoint to resume from, it will
@@ -115,7 +124,7 @@ What happens in your work directory?
 
 If you wish to restart manually from one of your checkpoints (e.g., if something went wrong in your
 later jobs or the jobs vanished from the queue for some reason), you have to call `dmtcp_sbatch`
-with the `-r, --resume` parameter, specifying a cpkt\_\* directory to resume from.  Then it will use
+with the `-r, --resume` parameter, specifying a `cpkt_` directory to resume from.  Then it will use
 the same parameters as in the initial run of this job chain. If you wish to adjust the time limit,
 for instance, because you realized that your original limit was too short, just use the `-t, --time`
 parameter again on resume.
@@ -126,7 +135,7 @@ If for some reason our automatic chain job script is not suitable for your use c
 just use DMTCP on its own. In the following we will give you step-by-step instructions on how to
 checkpoint your job manually:
 
-* Load the dmtcp module: `module load dmtcp`
+* Load the DMTCP module: `module load dmtcp`
 * DMTCP usually runs an additional process that
 manages the creation of checkpoints and such, the so-called `coordinator`. It must be started in
 your batch script before the actual start of your application. To help you with this process, we
@@ -138,9 +147,9 @@ first checkpoint has been created, which can be useful if you wish to implement
 chaining on your own.
 * In front of your program call, you have to add the wrapper
 script `dmtcp_launch`.  This will create a checkpoint automatically after 40 seconds and then
-terminate your application and with it the job. If the job runs into its timelimit (here: 60
+terminate your application and with it the job. If the job runs into its time limit (here: 60
 seconds), the time to write out the checkpoint was probably not long enough. If all went well, you
-should find cpkt\* files in your work directory together with a script called
+should find `cpkt` files in your work directory together with a script called
 `./dmtcp_restart_script.sh` that can be used to resume from the checkpoint.
 
 ???+ example
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b395644aa972113ac887c764c9a651f56826093
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -0,0 +1,127 @@
+# ZIH Systems
+
+The ZIH systems comprise the *High Performance Computing and Storage Complex* and its
+extension *High Performance Computing – Data Analytics*. In total, they offer scientists
+about 60,000 CPU cores and a peak performance of more than 1.5 quadrillion floating point
+operations per second. The architecture, specifically tailored to data-intensive computing, Big
+Data analytics, and artificial intelligence methods, with extensive capabilities for energy
+measurement and performance monitoring, provides ideal conditions to achieve the ambitious research
+goals of the users and of ZIH.
+
+## Login Nodes
+
+- Login-Nodes (`tauruslogin[3-6].hrsk.tu-dresden.de`)
+  - each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
+    @ 2.50GHz, MultiThreading Disabled, 64 GB RAM, 128 GB SSD local disk
+  - IPs: 141.30.73.\[102-105\]
+- Transfer-Nodes (`taurusexport3/4.hrsk.tu-dresden.de`, DNS Alias
+  `taurusexport.hrsk.tu-dresden.de`)
+  - 2 servers without interactive login, only available via file transfer protocols (`rsync`, `ftp`);
+    see the sketch below this list
+  - IPs: 141.30.73.82/83
+- Direct access to these nodes is granted via IP whitelisting (contact
+  hpcsupport@zih.tu-dresden.de) - otherwise use TU Dresden VPN.
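+
+A hypothetical file transfer to the transfer nodes named above could look like this (a sketch only;
+the source data and the target directory are placeholders and have to be adapted to your project):
+
+```console
+marie@local$ rsync -av ./my_dataset/ taurusexport.hrsk.tu-dresden.de:<target directory>/
+```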
+
+## AMD Rome CPUs + NVIDIA A100
+
+- 32 nodes, each with
+  - 8 x NVIDIA A100-SXM4
+  - 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, MultiThreading disabled
+  - 1 TB RAM
+  - 3.5 TB local storage on NVMe device at `/tmp`
+- Hostnames: `taurusi[8001-8034]`
+- Slurm partition `alpha`
+- Dedicated mostly for ScaDS-AI
+
+## Island 7 - AMD Rome CPUs
+
+- 192 nodes, each with
+  - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, MultiThreading
+    enabled,
+  - 512 GB RAM
+  - 200 GB `/tmp` on local SSD
+- Hostnames: `taurusi[7001-7192]`
+- Slurm partition `romeo`
+- More information under [Rome Nodes](rome_nodes.md)
+
+## Large SMP System HPE Superdome Flex
+
+- 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
+- 47 TB RAM
+- Currently configured as one single node
+  - Hostname: `taurussmp8`
+- Slurm partition `julia`
+- More information under [HPE SD Flex](sd_flex.md)
+
+## IBM Power9 Nodes for Machine Learning
+
+For machine learning, we have 32 IBM AC922 nodes installed with this configuration:
+
+- 2 x IBM Power9 CPU (2.80 GHz, 3.10 GHz boost, 22 cores)
+- 256 GB RAM DDR4 2666MHz
+- 6x NVIDIA VOLTA V100 with 32GB HBM2
+- NVLINK bandwidth 150 GB/s between GPUs and host
+- Slurm partition `ml`
+- Hostnames: `taurusml[1-32]`
+
+## Island 4 to 6 - Intel Haswell CPUs
+
+- 1456 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
+  @ 2.50GHz, MultiThreading disabled, 128 GB SSD local disk
+- Hostname: `taurusi4[001-232]`, `taurusi5[001-612]`,
+  `taurusi6[001-612]`
+- Varying amounts of main memory (selected automatically by the batch
+  system for you according to your job requirements)
+  - 1328 nodes with 2.67 GB RAM per core (64 GB total):
+    `taurusi[4001-4104,5001-5612,6001-6612]`
+  - 84 nodes with 5.34 GB RAM per core (128 GB total):
+    `taurusi[4105-4188]`
+  - 44 nodes with 10.67 GB RAM per core (256 GB total):
+    `taurusi[4189-4232]`
+- Slurm Partition `haswell`
+
+??? hint "Node topology"
+
+    ![Node topology](misc/i4000.png)
+    {: align=center}
+
+### Extension of Island 4 with Broadwell CPUs
+
+* 32 nodes, each with 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
+  (**14 cores**), MultiThreading disabled, 64 GB RAM, 256 GB SSD local disk
+* from the users' perspective: Broadwell is like Haswell
+* Hostname: `taurusi[4233-4264]`
+* Slurm partition `broadwell`
+
+## Island 2 Phase 2 - Intel Haswell CPUs + NVIDIA K80 GPUs
+
+* 64 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
+  @ 2.50GHz, MultiThreading Disabled, 64 GB RAM (2.67 GB per core),
+  128 GB SSD local disk, 4x NVIDIA Tesla K80 (12 GB GDDR RAM) GPUs
+* Hostname: `taurusi2[045-108]`
+* Slurm Partition `gpu`
+* Node topology, same as [island 4 - 6](#island-4-to-6-intel-haswell-cpus)
+
+## SMP Nodes - up to 2 TB RAM
+
+- 5 Nodes each with 4x Intel(R) Xeon(R) CPU E7-4850 v3 (14 cores) @
+  2.20GHz, MultiThreading Disabled, 2 TB RAM
+  - Hostname: `taurussmp[3-7]`
+  - Slurm partition `smp2`
+
+??? hint "Node topology"
+
+    ![Node topology](misc/smp2.png)
+    {: align=center}
+
+## Island 2 Phase 1 - Intel Sandybridge CPUs + NVIDIA K20x GPUs
+
+- 44 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2450 (8 cores) @
+  2.10GHz, MultiThreading Disabled, 48 GB RAM (3 GB per core), 128 GB
+  SSD local disk, 2x NVIDIA Tesla K20x (6 GB GDDR RAM) GPUs
+- Hostname: `taurusi2[001-044]`
+- Slurm partition `gpu1`
+
+??? hint "Node topology"
+
+    ![Node topology](misc/i2000.png)
+    {: align=center}
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
deleted file mode 100644
index ff28e9b69d95496f299b80b45179f3787ad996cb..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
+++ /dev/null
@@ -1,110 +0,0 @@
-# Central Components
-
--   Login-Nodes (`tauruslogin[3-6].hrsk.tu-dresden.de`)
-    -   each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 each with 12 cores
-        @ 2.50GHz, MultiThreading Disabled, 64 GB RAM, 128 GB SSD local
-        disk
-    -   IPs: 141.30.73.\[102-105\]
--   Transfer-Nodes (`taurusexport3/4.hrsk.tu-dresden.de`, DNS Alias
-    `taurusexport.hrsk.tu-dresden.de`)
-    -   2 Servers without interactive login, only available via file
-        transfer protocols (rsync, ftp)
-    -   IPs: 141.30.73.82/83
--   Direct access to these nodes is granted via IP whitelisting (contact
-    <hpcsupport@zih.tu-dresden.de>) - otherwise use TU Dresden VPN.
-
-## AMD Rome CPUs + NVIDIA A100
-
-- 32 nodes, each with
-  -   8 x NVIDIA A100-SXM4
-  -   2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, MultiThreading
-      disabled
-  -   1 TB RAM
-  -   3.5 TB /tmp local NVMe device
-- Hostnames: taurusi\[8001-8034\]
-- SLURM partition `alpha`
-- dedicated mostly for ScaDS-AI
-
-## Island 7 - AMD Rome CPUs
-
--   192 nodes, each with
-    -   2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, MultiThreading
-        enabled,
-    -   512 GB RAM
-    -   200 GB /tmp on local SSD local disk
--   Hostnames: taurusi\[7001-7192\]
--   SLURM partition `romeo`
--   more information under [RomeNodes](rome_nodes.md)
-
-## Large SMP System HPE Superdome Flex
-
--   32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
--   47 TB RAM
--   currently configured as one single node
-    -   Hostname: taurussmp8
--   SLURM partition `julia`
--   more information under [HPE SD Flex](sd_flex.md)
-
-## IBM Power9 Nodes for Machine Learning
-
-For machine learning, we have 32 IBM AC922 nodes installed with this
-configuration:
-
--   2 x IBM Power9 CPU (2.80 GHz, 3.10 GHz boost, 22 cores)
--   256 GB RAM DDR4 2666MHz
--   6x NVIDIA VOLTA V100 with 32GB HBM2
--   NVLINK bandwidth 150 GB/s between GPUs and host
--   SLURM partition `ml`
--   Hostnames: taurusml\[1-32\]
-
-## Island 4 to 6 - Intel Haswell CPUs
-
--   1456 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
-    @ 2.50GHz, MultiThreading disabled, 128 GB SSD local disk
--   Hostname: taurusi4\[001-232\], taurusi5\[001-612\],
-    taurusi6\[001-612\]
--   varying amounts of main memory (selected automatically by the batch
-    system for you according to your job requirements)
-    -   1328 nodes with 2.67 GB RAM per core (64 GB total):
-        taurusi\[4001-4104,5001-5612,6001-6612\]
-    -   84 nodes with 5.34 GB RAM per core (128 GB total):
-        taurusi\[4105-4188\]
-    -   44 nodes with 10.67 GB RAM per core (256 GB total):
-        taurusi\[4189-4232\]
--   SLURM Partition `haswell`
--   [Node topology] **todo** %ATTACHURL%/i4000.png
-
-### Extension of Island 4 with Broadwell CPUs
-
--   32 nodes, eachs witch 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
-    (**14 cores**) , MultiThreading disabled, 64 GB RAM, 256 GB SSD
-    local disk
--   from the users' perspective: Broadwell is like Haswell
--   Hostname: taurusi\[4233-4264\]
--   SLURM partition `broadwell`
-
-## Island 2 Phase 2 - Intel Haswell CPUs + NVIDIA K80 GPUs
-
--   64 nodes, each with 2x Intel(R) Xeon(R) CPU E5-E5-2680 v3 (12 cores)
-    @ 2.50GHz, MultiThreading Disabled, 64 GB RAM (2.67 GB per core),
-    128 GB SSD local disk, 4x NVIDIA Tesla K80 (12 GB GDDR RAM) GPUs
--   Hostname: taurusi2\[045-108\]
--   SLURM Partition `gpu`
--   [Node topology] **todo %ATTACHURL%/i4000.png** (without GPUs)
-
-## SMP Nodes - up to 2 TB RAM
-
--   5 Nodes each with 4x Intel(R) Xeon(R) CPU E7-4850 v3 (14 cores) @
-    2.20GHz, MultiThreading Disabled, 2 TB RAM
-    -   Hostname: `taurussmp[3-7]`
-    -   SLURM Partition `smp2`
-    -   [Node topology] **todo** %ATTACHURL%/smp2.png
-
-## Island 2 Phase 1 - Intel Sandybridge CPUs + NVIDIA K20x GPUs
-
--   44 nodes, each with 2x Intel(R) Xeon(R) CPU E5-2450 (8 cores) @
-    2.10GHz, MultiThreading Disabled, 48 GB RAM (3 GB per core), 128 GB
-    SSD local disk, 2x NVIDIA Tesla K20x (6 GB GDDR RAM) GPUs
--   Hostname: `taurusi2[001-044]`
--   SLURM Partition `gpu1`
--   [Node topology] **todo** %ATTACHURL%/i2000.png (without GPUs)
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md
deleted file mode 100644
index 911449758f01a2fce79f5179b5d81f51c79abe84..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Batch System
-
-Applications on an HPC system can not be run on the login node. They have to be submitted to compute
-nodes with dedicated resources for user jobs. Normally a job can be submitted with these data:
-
-* number of CPU cores,
-* requested CPU cores have to belong on one node (OpenMP programs) or can distributed (MPI),
-* memory per process,
-* maximum wall clock time (after reaching this limit the process is killed automatically),
-* files for redirection of output and error messages,
-* executable and command line parameters.
-
-*Comment:* Please keep in mind that for a large runtime a computation may not reach its end. Try to
-create shorter runs (4...8 hours) and use checkpointing. Here is an extreme example from literature
-for the waste of large computing resources due to missing checkpoints:
-
->Earth was a supercomputer constructed to find the question to the answer to the Life, the Universe,
->and Everything by a race of hyper-intelligent pan-dimensional beings. Unfortunately 10 million years
->later, and five minutes before the program had run to completion, the Earth was destroyed by
->Vogons.
-
-(Adams, D. The Hitchhikers Guide Through the Galaxy)
-
-## Slurm
-
-The HRSK-II systems are operated with the batch system [Slurm](https://slurm.schedmd.com). Just
-specify the resources you need in terms of cores, memory, and time and your job will be placed on
-the system.
-
-### Job Submission
-
-Job submission can be done with the command: `srun [options] <command>`
-
-However, using `srun` directly on the shell will be blocking and launch an interactive job. Apart
-from short test runs, it is recommended to launch your jobs into the background by using batch jobs.
-For that, you can conveniently put the parameters directly in a job file which you can submit using
-`sbatch [options] <job file>`
-
-Some options of srun/sbatch are:
-
-| Slurm Option | Description |
-|------------|-------|
-| `-n <N>` or `--ntasks <N>`         | set a number of tasks to N(default=1). This determines how many processes will be spawned by srun (for MPI jobs). |
-| `-N <N>` or `--nodes <N>`          | set number of nodes that will be part of a job, on each node there will be --ntasks-per-node processes started, if the option --ntasks-per-node is not given, 1 process per node will be started |
-| `--ntasks-per-node <N>`            | how many tasks per allocated node to start, as stated in the line before |
-| `-c <N>` or `--cpus-per-task <N>`  | this option is needed for multithreaded (e.g. OpenMP) jobs, it tells SLURM to allocate N cores per task allocated; typically N should be equal to the number of threads you program spawns, e.g. it should be set to the same number as OMP_NUM_THREADS |
-| `-p <name>` or `--partition <name>`| select the type of nodes where you want to execute your job, on Taurus we currently have haswell, smp, sandy, west, ml and gpu available |
-| `--mem-per-cpu <name>`             | specify the memory need per allocated CPU in MB |
-| `--time <HH:MM:SS>`                | specify the maximum runtime of your job, if you just put a single number in, it will be interpreted as minutes |
-| `--mail-user <your email>`         | tell the batch system your email address to get updates about the status of the jobs |
-| `--mail-type ALL`                  | specify for what type of events you want to get a mail; valid options beside ALL are: BEGIN, END, FAIL, REQUEUE |
-| `-J <name> or --job-name <name>`   | give your job a name which is shown in the queue, the name will also be included in job emails (but cut after 24 chars within emails) |
-| `--exclusive`                      | tell SLURM that only your job is allowed on the nodes allocated to this job; please be aware that you will be charged for all CPUs/cores on the node |
-| `-A <project>`                     | Charge resources used by this job to the specified project, useful if a user belongs to multiple projects. |
-| `-o <filename>` or `--output <filename>` | specify a file name that will be used to store all normal output (stdout), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stdout goes to "slurm-%j.out" |
-
-<!--NOTE: the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.-->
-<!---e <filename> or --error <filename>-->
-
-<!--specify a file name that will be used to store all error output (stderr), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stderr goes to "slurm-%j.out" as well-->
-
-<!--NOTE: the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.-->
-<!---a or --array 	submit an array job, see the extra section below-->
-<!---w <node1>,<node2>,... 	restrict job to run on specific nodes only-->
-<!---x <node1>,<node2>,... 	exclude specific nodes from job-->
diff --git a/Compendium_attachments/Slurm/hdfview_memory.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hdfview_memory.png
similarity index 100%
rename from Compendium_attachments/Slurm/hdfview_memory.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hdfview_memory.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid.png
new file mode 100644
index 0000000000000000000000000000000000000000..116e03dd0785492be3f896cda69959a025f5ac49
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_block_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_block_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..4c196df91b2fe410609a8e76505eca95f283ce29
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_block_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_cyclic_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_cyclic_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..dfccaf451553c710fcddd648ae9721866668f9e8
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/hybrid_cores_cyclic_block.png differ
diff --git a/Compendium_attachments/HardwareTaurus/i2000.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i2000.png
similarity index 100%
rename from Compendium_attachments/HardwareTaurus/i2000.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i2000.png
diff --git a/Compendium_attachments/HardwareTaurus/i4000.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i4000.png
similarity index 100%
rename from Compendium_attachments/HardwareTaurus/i4000.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/i4000.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi.png
new file mode 100644
index 0000000000000000000000000000000000000000..82087209059e535401724c493fff74d743da58e4
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_block_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_block_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..0c6e9bbfa0e7f0614ede7e89f292e2d5f1a74316
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_block_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..dab17e83ed4930b253818e15bc42ef1b1b2c9918
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_cyclic.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_cyclic.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b9361dd1f0a2b76b063ad64652844c425aacbdf
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_cyclic_cyclic.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_default.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_default.png
new file mode 100644
index 0000000000000000000000000000000000000000..82087209059e535401724c493fff74d743da58e4
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_default.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_block.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_block.png
new file mode 100644
index 0000000000000000000000000000000000000000..be12c78d1a85297cd60161a1808462941def94fb
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_block.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_cyclic.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_cyclic.png
new file mode 100644
index 0000000000000000000000000000000000000000..08f2a90100ed88175f7ef6fa3d867a70ad0880d7
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/mpi_socket_block_cyclic.png differ
diff --git a/Compendium_attachments/NvmeStorage/nvme.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/nvme.png
similarity index 100%
rename from Compendium_attachments/NvmeStorage/nvme.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/nvme.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/openmp.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/openmp.png
new file mode 100644
index 0000000000000000000000000000000000000000..0cf284368f10bdd8c4a3b4c97530151e0142aad6
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/openmp.png differ
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/part.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/part.png
new file mode 100644
index 0000000000000000000000000000000000000000..e2b5418f622d3fa32ba2c6ce44889e84e4d1cddd
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/part.png differ
diff --git a/Compendium_attachments/HardwareTaurus/smp2.png b/doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/smp2.png
similarity index 100%
rename from Compendium_attachments/HardwareTaurus/smp2.png
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/misc/smp2.png
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
index 40a0d6af3e6f62fe69a76fc01e806b63fa8dc9df..78b8175ccbba3fb0eee8be7b946ebe2bee31219b 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
@@ -1,6 +1,5 @@
 # NVMe Storage
 
-**TODO image nvme.png**
 90 NVMe storage nodes, each with
 
 -   8x Intel NVMe Datacenter SSD P4610, 3.2 TB
@@ -11,3 +10,6 @@
 -   64 GB RAM
 
 NVMe cards can saturate the HCAs
+
+![Configuration](misc/nvme.png)
+{: align=center}
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
index 67cc21a6cd4a4c68cbaec377151106bf63428b75..5240db14cb506d8719b9e46fe3feb89aede4a95f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
@@ -1,57 +1,53 @@
 # HPC Resources and Jobs
 
-When log in to ZIH systems, you are placed on a *login node* **TODO** link to login nodes section
-where you can [manage data life cycle](../data_lifecycle/overview.md),
-[setup experiments](../data_lifecycle/experiments.md), execute short tests and compile moderate
-projects. The login nodes cannot be used for real experiments and computations. Long and extensive
-computational work and experiments have to be encapsulated into so called **jobs** and scheduled to
-the compute nodes.
-
-<!--Login nodes which are using for login can not be used for your computations.-->
-<!--To run software, do calculations and experiments, or compile your code compute nodes have to be used.-->
-
-ZIH uses the batch system Slurm for resource management and job scheduling.
-<!--[HPC Introduction]**todo link** is a good resource to get started with it.-->
-
-??? note "Batch Job"
-
-    In order to allow the batch scheduler an efficient job placement it needs these
-    specifications:
-
-    * **requirements:** cores, memory per core, (nodes), additional resources (GPU),
-    * maximum run-time,
-    * HPC project (normally use primary group which gives id),
-    * who gets an email on which occasion,
-
-    The runtime environment (see [here](../software/overview.md)) as well as the executable and
-    certain command-line arguments have to be specified to run the computational work.
-
-??? note "Batch System"
-
-    The batch system is the central organ of every HPC system users interact with its compute
-    resources. The batch system finds an adequate compute system (partition/island) for your compute
-    jobs. It organizes the queueing and messaging, if all resources are in use. If resources are
-    available for your job, the batch system allocates and connects to these resources, transfers
-    run-time environment, and starts the job.
+ZIH operates a high performance computing (HPC) system with more than 60,000 cores, 720 GPUs, and a
+flexible storage hierarchy with about 16 PB total capacity. The HPC system provides an optimal
+research environment especially in the area of data analytics and machine learning as well as for
+processing extremely large data sets. Moreover, it is a perfect platform for highly scalable,
+data-intensive, and compute-intensive applications.
+
+With shared [login nodes](#login-nodes) and [filesystems](../data_lifecycle/file_systems.md) our
+HPC system enables users to easily switch between [the components](hardware_overview.md), each
+specialized for different application scenarios.
+
+When logging in to ZIH systems, you are placed on a login node where you can
+[manage data life cycle](../data_lifecycle/overview.md),
+[setup experiments](../data_lifecycle/experiments.md),
+execute short tests and compile moderate projects. The login nodes cannot be used for real
+experiments and computations. Long and extensive computational work and experiments have to be
+encapsulated into so-called **jobs** and scheduled to the compute nodes.
 
 Follow the page [Slurm](slurm.md) for comprehensive documentation using the batch system at
 ZIH systems. There is also a page with an extensive set of [Slurm examples](slurm_examples.md).
 
 ## Selection of Suitable Hardware
 
-### What do I need a CPU or GPU?
+### What do I need, a CPU or GPU?
+
+If an application is designed to run on GPUs, this is normally announced unmistakably, since the
+effort of adapting existing software to make use of a GPU can be overwhelming.
+And even if the software is listed in [NVIDIA's list of GPU-Accelerated Applications](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/gpu-applications-catalog.pdf),
+only certain parts of the computations may run on the GPU.
+
+To answer the question: The easiest way is to compare a typical computation
+on a normal node and on a GPU node. (Make sure to eliminate the influence of different
+CPU types and different numbers of cores.) If the execution time with the GPU is better
+by a significant factor, then the GPU might be the obvious choice.
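+
+A hypothetical comparison could look like this (a sketch only; `./my_benchmark` is a placeholder,
+and it is assumed that GPUs are requested via the generic `--gres=gpu:<N>` option):
+
+```console
+marie@login$ srun --partition=haswell --ntasks=1 --cpus-per-task=4 --time=00:30:00 ./my_benchmark
+marie@login$ srun --partition=gpu2 --ntasks=1 --cpus-per-task=4 --gres=gpu:1 --time=00:30:00 ./my_benchmark
+```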
+
+??? note "Difference in Architecture"
 
-The main difference between CPU and GPU architecture is that a CPU is designed to handle a wide
-range of tasks quickly, but are limited in the concurrency of tasks that can be running. While GPUs
-can process data much faster than a CPU due to massive parallelism (but the amount of data which
-a single GPU's core can handle is small), GPUs are not as versatile as CPUs.
+    The main difference between CPU and GPU architecture is that a CPU is designed to handle a wide
+    range of tasks quickly, but is limited in the number of tasks that can run concurrently.
+    GPUs can process data much faster than a CPU due to massive parallelism
+    (although the amount of data a single GPU core can handle is small),
+    but GPUs are not as versatile as CPUs.
 
 ### Available Hardware
 
 ZIH provides a broad variety of compute resources ranging from normal server CPUs of different
-manufactures, to large shared memory nodes, GPU-assisted nodes up to highly specialized resources for
+manufacturers, large shared memory nodes, GPU-assisted nodes up to highly specialized resources for
 [Machine Learning](../software/machine_learning.md) and AI.
-The page [Hardware Taurus](hardware_taurus.md) holds a comprehensive overview.
+The page [ZIH Systems](hardware_overview.md) holds a comprehensive overview.
 
 The desired hardware can be specified by the partition `-p, --partition` flag in Slurm.
 The majority of the basic tasks can be executed on the conventional nodes like a Haswell. Slurm will
@@ -60,19 +56,19 @@ automatically select a suitable partition depending on your memory and GPU requi
 ### Parallel Jobs
 
 **MPI jobs:** For MPI jobs, typically one core per task is allocated. Several nodes could be allocated
-if it is necessary. Slurm will automatically find suitable hardware. Normal compute nodes are
-perfect for this task.
+if it is necessary. The batch system [Slurm](slurm.md) will automatically find suitable hardware.
+Normal compute nodes are perfect for this task.
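+
+A minimal job file for such an MPI job could look like this (a sketch only; `./my_mpi_app` is a
+placeholder):
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=16
+#SBATCH --time=01:00:00
+
+srun ./my_mpi_app
+```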
 
 **OpenMP jobs:** SMP-parallel applications can only run **within a node**, so it is necessary to
-include the options `-N 1` and `-n 1`. Using `--cpus-per-task N` Slurm will start one task and you
-will have N CPUs. The maximum number of processors for an SMP-parallel program is 896 on Taurus
-([SMP]**todo link** island).
+include the [batch system](slurm.md) options `-N 1` and `-n 1`. Using `--cpus-per-task N`, Slurm will
+start one task and you will have `N` CPUs. The maximum number of processors for an SMP-parallel
+program is 896 on partition `julia`, see [partitions](partitions_and_limits.md).
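+
+A corresponding job file for an SMP-parallel application could look like this (a sketch only;
+`./my_openmp_app` is a placeholder):
+
+```bash
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=8
+#SBATCH --time=01:00:00
+
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+srun ./my_openmp_app
+```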
 
-**GPUs** partitions are best suited for **repetitive** and **highly-parallel** computing tasks. If
-you have a task with potential [data parallelism]**todo link** most likely that you need the GPUs.
-Beyond video rendering, GPUs excel in tasks such as machine learning, financial simulations and risk
-modeling. Use the gpu2 and ml partition only if you need GPUs! Otherwise using the x86 partitions
-(e.g Haswell) most likely would be more beneficial.
+Partitions with GPUs are best suited for **repetitive** and **highly-parallel** computing tasks. If
+you have a task with potential [data parallelism](../software/gpu_programming.md), you most likely
+need the GPUs. Beyond video rendering, GPUs excel in tasks such as machine learning, financial
+simulations and risk modeling. Use the partitions `gpu2` and `ml` only if you need GPUs! Otherwise,
+using the x86-based partitions most likely would be more beneficial.
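+
+For illustration, a single-GPU job on partition `ml` could be requested like this (a sketch only;
+`./my_gpu_app` is a placeholder and it is assumed that GPUs are requested via `--gres=gpu:<N>`):
+
+```console
+marie@login$ srun --partition=ml --ntasks=1 --cpus-per-task=4 --gres=gpu:1 --time=01:00:00 ./my_gpu_app
+```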
 
 **Interactive jobs:** Slurm can forward your X11 credentials to the first node (or even all) for a job
 with the `--x11` option. To use an interactive job, you have to specify the `-X` flag for the ssh login.
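+
+A hypothetical interactive session with X11 forwarding could be started like this (a sketch only;
+the login node is one of the nodes listed in [ZIH Systems](hardware_overview.md)):
+
+```console
+marie@local$ ssh -X marie@tauruslogin6.hrsk.tu-dresden.de
+marie@login$ srun --ntasks=1 --cpus-per-task=1 --time=01:00:00 --x11 --pty bash
+```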
@@ -91,5 +87,31 @@ projects. The quality of this work influence on the computations. However, pre-
 in many cases can be done completely or partially on a local system and then transferred to ZIH
 systems. Please use ZIH systems primarily for the computation-intensive tasks.
 
-<!--Useful links: [Batch Systems]**todo link**, [Hardware Taurus]**todo link**, [HPC-DA]**todo link**,-->
-<!--[Slurm]**todo link**-->
+## Exclusive Reservation of Hardware
+
+If, for special reasons such as benchmarking or a project or paper deadline, you need parts of our
+machines exclusively, we offer the opportunity to request and reserve these parts for your
+project.
+
+Please send your request **7 working days** before the reservation should start (as that's our
+maximum time limit for jobs and it is therefore not guaranteed that resources are available on
+shorter notice) with the following information to the
+[HPC support](mailto:hpcsupport@zih.tu-dresden.de?subject=Request%20for%20a%20exclusive%20reservation%20of%20hardware&body=Dear%20HPC%20support%2C%0A%0AI%20have%20the%20following%20request%20for%20a%20exclusive%20reservation%20of%20hardware%3A%0A%0AProject%3A%0AReservation%20owner%3A%0ASystem%3A%0AHardware%20requirements%3A%0ATime%20window%3A%20%3C%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%20-%20%5Byear%5D%3Amonth%3Aday%3Ahour%3Aminute%3E%0AReason%3A):
+
+- `Project:` *Which project will be credited for the reservation?*
+- `Reservation owner:` *Who should be able to run jobs on the
+  reservation? I.e., name of an individual user or a group of users
+  within the specified project.*
+- `System:` *Which machine should be used?*
+- `Hardware requirements:` *How many nodes and cores do you need? Do
+  you have special requirements, e.g., minimum on main memory,
+  equipped with a graphic card, special placement within the network
+  topology?*
+- `Time window:` *Begin and end of the reservation in the form
+  `year-month-dayThour:minute:second`, e.g., 2020-05-21T09:00:00*
+- `Reason:` *Reason for the reservation.*
+
+!!! hint
+
+    Please note that your project CPU hour budget will be charged for the reserved hardware even if
+    you don't use it.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
new file mode 100644
index 0000000000000000000000000000000000000000..edf5bae8582cff37ba5dca68d70c70a35438f341
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
@@ -0,0 +1,78 @@
+# Partitions, Memory and Run Time Limits
+
+There is no such thing as a free lunch at ZIH systems. Since compute nodes are operated in
+multi-user mode by default, jobs of several users can run at the same time on the very same node,
+sharing resources like memory (but not CPU). On the other hand, a higher throughput can be achieved
+by smaller jobs. Thus, restrictions w.r.t. [memory](#memory-limits) and
+[runtime limits](#runtime-limits) have to be respected when submitting jobs.
+
+## Runtime Limits
+
+!!! note "Runtime limits are enforced."
+
+    This means that a job will be canceled as soon as it exceeds its requested limit. Currently,
+    the maximum run time is 7 days.
+
+Shorter jobs come with multiple advantages:
+
+- lower risk of loss of computing time,
+- shorter waiting time for scheduling,
+- higher job fluctuation; thus, jobs with high priorities may start faster.
+
+To bring down the percentage of long-running jobs, we restrict the number of cores used by jobs
+longer than 2 days to approximately 50% and by jobs longer than 24 hours to 75% of the total number
+of cores. (These numbers are subject to change.) As a best practice, we advise a run time of about
+8 hours.
+
+!!! hint "Please always try to make a good estimation of your needed time limit."
+
+    For this, you can use a command line like the following to compare the requested time limit
+    with the elapsed time of your completed jobs that started after a given date:
+
+    ```console
+    marie@login$ sacct -X -S 2021-01-01 -E now --format=start,JobID,jobname,elapsed,timelimit -s COMPLETED
+    ```
+
+Instead of running one long job, you should split it up into a chain job. Even applications that are
+not capable of checkpoint/restart can be adapted. Please refer to the section
+[Checkpoint/Restart](../jobs_and_resources/checkpoint_restart.md) for further documentation.
+
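+As a minimal sketch (assuming a hypothetical job file `myjob.sh` whose application writes
+checkpoints and resumes from them), a chain of dependent jobs could be submitted like this:
+
+```console
+marie@login$ jobid=$(sbatch --parsable myjob.sh)            # first part of the chain
+marie@login$ sbatch --dependency=afterok:${jobid} myjob.sh  # starts after successful completion
+```
+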
+![Partitions](misc/part.png)
+{: align="center"}
+
+## Memory Limits
+
+!!! note "Memory limits are enforced."
+
+    This means that jobs which exceed their per-node memory limit will be killed automatically by
+    the batch system.
+
+Memory requirements for your job can be specified via the `sbatch/srun` parameters
+`--mem-per-cpu=<MB>` or `--mem=<MB>` (which is "memory per node"). The **default limit** is quite
+low at **300 MB** per CPU.
+
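+For illustration, a hypothetical interactive request for more than the default memory could look
+like this (the actual values depend on your application):
+
+```console
+marie@login$ srun --ntasks=1 --mem-per-cpu=2000 --time=00:30:00 --pty bash
+```
+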
+ZIH systems comprise different sets of nodes with different amounts of installed memory, which
+affects where your job may be run. To achieve the shortest possible waiting time for your jobs, you
+should be aware of the limits shown in the following table.
+
+??? hint "Partitions and memory limits"
+
+    | Partition          | Nodes                                    | # Nodes | Cores per Node  | MB per Core | MB per Node | GPUs per Node     |
+    |:-------------------|:-----------------------------------------|:--------|:----------------|:------------|:------------|:------------------|
+    | `haswell64`        | `taurusi[4001-4104,5001-5612,6001-6612]` | `1328`  | `24`            | `2541`       | `61000`    | `-`               |
+    | `haswell128`       | `taurusi[4105-4188]`                     | `84`    | `24`            | `5250`       | `126000`   | `-`               |
+    | `haswell256`       | `taurusi[4189-4232]`                     | `44`    | `24`            | `10583`      | `254000`   | `-`               |
+    | `broadwell`        | `taurusi[4233-4264]`                     | `32`    | `28`            | `2214`       | `62000`    | `-`               |
+    | `smp2`             | `taurussmp[3-7]`                         | `5`     | `56`            | `36500`      | `2044000`  | `-`               |
+    | `gpu2`             | `taurusi[2045-2106]`                     | `62`    | `24`            | `2583`       | `62000`    | `4 (2 dual GPUs)` |
+    | `gpu2-interactive` | `taurusi[2045-2108]`                     | `64`    | `24`            | `2583`       | `62000`    | `4 (2 dual GPUs)` |
+    | `hpdlf`            | `taurusa[3-16]`                          | `14`    | `12`            | `7916`       | `95000`    | `3`               |
+    | `ml`               | `taurusml[1-32]`                         | `32`    | `44 (HT: 176)`  | `1443*`      | `254000`   | `6`               |
+    | `romeo`            | `taurusi[7001-7192]`                     | `192`   | `128 (HT: 256)` | `1972*`      | `505000`   | `-`               |
+    | `julia`            | `taurussmp8`                             | `1`     | `896`           | `27343*`     | `49000000` | `-`               |
+
+!!! note
+
+    The ML nodes have 4-way SMT, so for every physical core allocated (e.g., with
+    `SLURM_HINT=nomultithread`), you will always get 4*1443 MB because the memory of the other
+    threads is allocated implicitly, too.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
index a6cdfba8bd47659bc3a14473cad74c10b73089d0..57ab511938f3eb515b9e38ca831e91cede692418 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
@@ -2,50 +2,48 @@
 
 ## Hardware
 
-- Slurm partiton: romeo
-- Module architecture: rome
-- 192 nodes taurusi[7001-7192], each:
-    - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, MultiThreading
+- Slurm partition: `romeo`
+- Module architecture: `rome`
+- 192 nodes `taurusi[7001-7192]`, each:
+    - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, Simultaneous Multithreading (SMT)
     - 512 GB RAM
-    - 200 GB SSD disk mounted on /tmp
+    - 200 GB SSD disk mounted on `/tmp`
 
 ## Usage
 
-There is a total of 128 physical cores in each
-node. SMT is also active, so in total, 256 logical cores are available
-per node.
+There is a total of 128 physical cores in each node. SMT is also active, so in total, 256 logical
+cores are available per node.
 
 !!! note
-    Multithreading is disabled per default in a job. To make use of it
-    include the Slurm parameter `--hint=multithread` in your job script
-    or command line, or set
-    the environment variable `SLURM_HINT=multithread` before job submission.
 
-Each node brings 512 GB of main memory, so you can request roughly
-1972MB per logical core (using --mem-per-cpu). Note that you will always
-get the memory for the logical core sibling too, even if you do not
-intend to use SMT.
+    Multithreading is disabled per default in a job. To make use of it include the Slurm parameter
+    `--hint=multithread` in your job script or command line, or set the environment variable
+    `SLURM_HINT=multithread` before job submission.
+
+Each node brings 512 GB of main memory, so you can request roughly 1972 MB per logical core (using
+`--mem-per-cpu`). Note that you will always get the memory for the logical core sibling too, even if
+you do not intend to use SMT.
 
 !!! note
-    If you are running a job here with only ONE process (maybe
-    multiple cores), please explicitly set the option `-n 1` !
 
-Be aware that software built with Intel compilers and `-x*` optimization
-flags will not run on those AMD processors! That's why most older
-modules built with intel toolchains are not available on **romeo**.
+    If you are running a job here with only ONE process (maybe multiple cores), please explicitly
+    set the option `-n 1`!
+
+Be aware that software built with Intel compilers and `-x*` optimization flags will not run on those
+AMD processors! That's why most older modules built with Intel toolchains are not available on
+partition `romeo`.
 
-We provide the script: `ml_arch_avail` that you can use to check if a
-certain module is available on rome architecture.
+We provide the script `ml_arch_avail` that can be used to check if a certain module is available on
+`rome` architecture.
 
 ## Example, running CP2K on Rome
 
 First, check what CP2K modules are available in general:
 `module spider CP2K` or `module avail CP2K`.
 
-You will see that there are several different CP2K versions avail, built
-with different toolchains. Now let's assume you have to decided you want
-to run CP2K version 6 at least, so to check if those modules are built
-for rome, use:
+You will see that there are several different CP2K versions available, built with different
+toolchains. Now let's assume you have decided to run at least CP2K version 6. To check if those
+modules are built for the `rome` architecture, use:
 
 ```console
 marie@login$ ml_arch_avail CP2K/6
@@ -55,13 +53,11 @@ CP2K/6.1-intel-2018a: sandy, haswell
 CP2K/6.1-intel-2018a-spglib: haswell
 ```
 
-There you will see that only the modules built with **foss** toolchain
-are available on architecture "rome", not the ones built with **intel**.
-So you can load e.g. `ml CP2K/6.1-foss-2019a`.
+There you will see that only the modules built with toolchain `foss` are available on architecture
+`rome`, not the ones built with `intel`. So you can load, e.g., `ml CP2K/6.1-foss-2019a`.
 
-Then, when writing your batch script, you have to specify the **romeo**
-partition. Also, if e.g. you wanted to use an entire ROME node (no SMT)
-and fill it with MPI ranks, it could look like this:
+Then, when writing your batch script, you have to specify the partition `romeo`. Also, if you
+wanted, e.g., to use an entire Rome node (no SMT) and fill it with MPI ranks, it could look like
+this:
 
 ```bash
 #!/bin/bash
@@ -73,27 +69,26 @@ and fill it with MPI ranks, it could look like this:
 srun cp2k.popt input.inp
 ```
 
-## Using the Intel toolchain on Rome
+## Using the Intel Toolchain on Rome
 
-Currently, we have only newer toolchains starting at `intel/2019b`
-installed for the Rome nodes. Even though they have AMD CPUs, you can
-still use the Intel compilers on there and they don't even create
-bad-performing code. When using the MKL up to version 2019, though,
-you should set the following environment variable to make sure that AVX2
-is used:
+Currently, we have only newer toolchains starting at `intel/2019b` installed for the Rome nodes.
+Even though they have AMD CPUs, you can still use the Intel compilers there, and they do not even
+produce badly performing code. When using the Intel Math Kernel Library (MKL) up to version 2019,
+though, you should set the following environment variable to make sure that AVX2 is used:
 
 ```bash
 export MKL_DEBUG_CPU_TYPE=5
 ```
 
-Without it, the MKL does a CPUID check and disables AVX2/FMA on
-non-Intel CPUs, leading to much worse performance.
+Without it, the MKL does a CPUID check and disables AVX2/FMA on non-Intel CPUs, leading to much
+worse performance.
+
 !!! note
-    In version 2020, Intel has removed this environment variable and added separate Zen
-    codepaths to the library. However, they are still incomplete and do not
-    cover every BLAS function. Also, the Intel AVX2 codepaths still seem to
-    provide somewhat better performance, so a new workaround would be to
-    overwrite the `mkl_serv_intel_cpu_true` symbol with a custom function:
+
+    In version 2020, Intel has removed this environment variable and added separate Zen codepaths to
+    the library. However, they are still incomplete and do not cover every BLAS function. Also, the
+    Intel AVX2 codepaths still seem to provide somewhat better performance, so a new workaround
+    would be to overwrite the `mkl_serv_intel_cpu_true` symbol with a custom function:
 
 ```c
 int mkl_serv_intel_cpu_true() {
@@ -108,13 +103,11 @@ marie@login$ gcc -shared -fPIC -o libfakeintel.so fakeintel.c
 marie@login$ export LD_PRELOAD=libfakeintel.so
 ```
 
-As for compiler optimization flags, `-xHOST` does not seem to produce
-best-performing code in every case on Rome. You might want to try
-`-mavx2 -fma` instead.
+As for compiler optimization flags, `-xHOST` does not seem to produce best-performing code in every
+case on Rome. You might want to try `-mavx2 -fma` instead.
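+
+For example, such a compile line could look like the following sketch, assuming the classic Intel
+C compiler `icc` from a loaded `intel` toolchain and a hypothetical source file `my_app.c`:
+
+```console
+marie@login$ icc -O2 -mavx2 -fma -o my_app my_app.c
+```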
 
 ### Intel MPI
 
-We have seen only half the theoretical peak bandwidth via Infiniband
-between two nodes, whereas OpenMPI got close to the peak bandwidth, so
-you might want to avoid using Intel MPI on romeo if your application
-heavily relies on MPI communication until this issue is resolved.
+We have seen only half the theoretical peak bandwidth via InfiniBand between two nodes, whereas
+OpenMPI got close to the peak bandwidth. Hence, you might want to avoid using Intel MPI on partition
+`romeo` if your application heavily relies on MPI communication, until this issue is resolved.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
index 04624da4e55fe3a32e3d41842622b38b3e176315..c09260cf8d814a6a6835f981a25d1e8700c71df2 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
@@ -1,24 +1,23 @@
-# Large shared-memory node - HPE Superdome Flex
+# Large Shared-Memory Node - HPE Superdome Flex
 
--   Hostname: taurussmp8
--   Access to all shared file systems
--   Slurm partition `julia`
--   32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
--   48 TB RAM (usable: 47 TB - one TB is used for cache coherence
-    protocols)
--   370 TB of fast NVME storage available at `/nvme/<projectname>`
+- Hostname: `taurussmp8`
+- Access to all shared filesystems
+- Slurm partition `julia`
+- 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
+- 48 TB RAM (usable: 47 TB - one TB is used for cache coherence protocols)
+- 370 TB of fast NVME storage available at `/nvme/<projectname>`
 
-## Local temporary NVMe storage
+## Local Temporary NVMe Storage
 
 There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
-of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of 100
-GB per project on this NVMe storage.This is
+of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
+100 GB per project on this NVMe storage.
 
 With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
 speed up their computations, a project's quota can be increased or dedicated volumes of up to the
 full capacity can be set up.
 
-## Hints for usage
+## Hints for Usage
 
 - granularity should be a socket (28 cores)
 - can be used for OpenMP applications with large memory demands
@@ -35,5 +34,5 @@ full capacity can be set up.
   this unique system (large shared memory + NVMe storage) can speed up
   their computations, we will gladly increase this limit, for selected
   projects.
-- Test users might have to clean-up their /nvme storage within 4 weeks
+- Test users might have to clean-up their `/nvme` storage within 4 weeks
   to make room for large projects.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
index 0c4d3d92a25de40aa7ec887feeb08086081a5af3..d7c3530fad85643c4f814a02c6e3250df427af38 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
@@ -1,589 +1,405 @@
-# Slurm
+# Batch System Slurm
 
-The HRSK-II systems are operated with the batch system Slurm. Just specify the resources you need
-in terms of cores, memory, and time and your job will be placed on the system.
+When logging in to ZIH systems, you are placed on a login node. There you can manage your
+[data life cycle](../data_lifecycle/overview.md),
+[set up experiments](../data_lifecycle/experiments.md), and
+edit and prepare jobs. The login nodes are not suited for computational work! From the login nodes,
+you can interact with the batch system, e.g., submit and monitor your jobs.
 
-## Job Submission
+??? note "Batch System"
 
-Job submission can be done with the command: `srun [options] <command>`
-
-However, using srun directly on the shell will be blocking and launch an interactive job. Apart from
-short test runs, it is recommended to launch your jobs into the background by using batch jobs. For
-that, you can conveniently put the parameters directly in a job file which you can submit using
-`sbatch [options] <job file>`
-
-Some options of `srun/sbatch` are:
-
-| slurm option                           | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-|:---------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| -n \<N> or --ntasks \<N>               | set a number of tasks to N(default=1). This determines how many processes will be spawned by srun (for MPI jobs).                                                                                                                                                                                                                                                                                                                                          |
-| -N \<N> or --nodes \<N>                | set number of nodes that will be part of a job, on each node there will be --ntasks-per-node processes started, if the option --ntasks-per-node is not given, 1 process per node will be started                                                                                                                                                                                                                                                           |
-| --ntasks-per-node \<N>                 | how many tasks per allocated node to start, as stated in the line before                                                                                                                                                                                                                                                                                                                                                                                   |
-| -c \<N> or --cpus-per-task \<N>        | this option is needed for multithreaded (e.g. OpenMP) jobs, it tells Slurm to allocate N cores per task allocated; typically N should be equal to the number of threads you program spawns, e.g. it should be set to the same number as OMP_NUM_THREADS                                                                                                                                                                                                    |
-| -p \<name> or --partition \<name>      | select the type of nodes where you want to execute your job, on Taurus we currently have haswell, `smp`, `sandy`, `west`, ml and `gpu` available                                                                                                                                                                                                                                                                                                           |
-| --mem-per-cpu \<name>                  | specify the memory need per allocated CPU in MB                                                                                                                                                                                                                                                                                                                                                                                                            |
-| --time \<HH:MM:SS>                     | specify the maximum runtime of your job, if you just put a single number in, it will be interpreted as minutes                                                                                                                                                                                                                                                                                                                                             |
-| --mail-user \<your email>              | tell the batch system your email address to get updates about the status of the jobs                                                                                                                                                                                                                                                                                                                                                                       |
-| --mail-type ALL                        | specify for what type of events you want to get a mail; valid options beside ALL are: BEGIN, END, FAIL, REQUEUE                                                                                                                                                                                                                                                                                                                                            |
-| -J \<name> or --job-name \<name>       | give your job a name which is shown in the queue, the name will also be included in job emails (but cut after 24 chars within emails)                                                                                                                                                                                                                                                                                                                      |
-| --no-requeue                           | At node failure, jobs are requeued automatically per default. Use this flag to disable requeueing.                                                                                                                                                                                                                                                                                                                                                         |
-| --exclusive                            | tell Slurm that only your job is allowed on the nodes allocated to this job; please be aware that you will be charged for all CPUs/cores on the node                                                                                                                                                                                                                                                                                                       |
-| -A \<project>                          | Charge resources used by this job to the specified project, useful if a user belongs to multiple projects.                                                                                                                                                                                                                                                                                                                                                 |
-| -o \<filename> or --output \<filename> | \<p>specify a file name that will be used to store all normal output (stdout), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stdout goes to "slurm-%j.out"\</p> \<p>%RED%NOTE:<span class="twiki-macro ENDCOLOR"></span> the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.\</p>        |
-| -e \<filename> or --error \<filename>  | \<p>specify a file name that will be used to store all error output (stderr), you can use %j (job id) and %N (name of first node) to automatically adopt the file name to the job, per default stderr goes to "slurm-%j.out" as well\</p> \<p>%RED%NOTE:<span class="twiki-macro ENDCOLOR"></span> the target path of this parameter must be writeable on the compute nodes, i.e. it may not point to a read-only mounted file system like /projects.\</p> |
-| -a or --array                          | submit an array job, see the extra section below                                                                                                                                                                                                                                                                                                                                                                                                           |
-| -w \<node1>,\<node2>,...               | restrict job to run on specific nodes only                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| -x \<node1>,\<node2>,...               | exclude specific nodes from job                                                                                                                                                                                                                                                                                                                                                                                                                            |
-
-The following example job file shows how you can make use of sbatch
-
-```Bash
-#!/bin/bash
-#SBATCH --time=01:00:00
-#SBATCH --output=simulation-m-%j.out
-#SBATCH --error=simulation-m-%j.err
-#SBATCH --ntasks=512
-#SBATCH -A myproject
-
-echo Starting Program
-```
+    The batch system is the central organ of every HPC system; users interact with its compute
+    resources via the batch system. It finds an adequate compute system (partition) for your compute
+    jobs, and organizes queueing and messaging if all resources are in use. If resources are
+    available for your job, the batch system allocates and connects to these resources, transfers
+    the runtime environment, and starts the job.
 
-During runtime, the environment variable SLURM_JOB_ID will be set to the id of your job.
+??? note "Batch Job"
 
-You can also use our [Slurm Batch File Generator]**todo** Slurmgenerator, which could help you create
-basic Slurm job scripts.
+    On HPC systems, computational work and resource requirements are encapsulated into so-called
+    jobs. In order to allow the batch system an efficient job placement, it needs these
+    specifications:
 
-Detailed information on [memory limits on Taurus]**todo**
+    * requirements: number of nodes and cores, memory per core, additional resources (GPU)
+    * maximum run-time
+    * HPC project for accounting
+    * who gets an email on which occasion
 
-### Interactive Jobs
+    Moreover, the [runtime environment](../software/overview.md) as well as the executable and
+    certain command-line arguments have to be specified to run the computational work.
 
-Interactive activities like editing, compiling etc. are normally limited to the login nodes. For
-longer interactive sessions you can allocate cores on the compute node with the command "salloc". It
-takes the same options like `sbatch` to specify the required resources.
+ZIH uses the batch system Slurm for resource management and job scheduling.
+Just specify the resources you need in terms
+of cores, memory, and time, and Slurm will place your job on the system.
 
-The difference to LSF is, that `salloc` returns a new shell on the node, where you submitted the
-job. You need to use the command `srun` in front of the following commands to have these commands
-executed on the allocated resources. If you allocate more than one task, please be aware that srun
-will run the command on each allocated task!
+This page provides a brief overview of
 
-An example of an interactive session looks like:
+* [Slurm options](#options) to specify resource requirements,
+* how to submit [interactive](#interactive-jobs) and [batch jobs](#batch-jobs),
+* how to [write job files](#job-files),
+* how to [manage and control your jobs](#manage-and-control-jobs).
 
-```Shell Session
-tauruslogin3 /home/mark; srun --pty -n 1 -c 4 --time=1:00:00 --mem-per-cpu=1700 bash<br />srun: job 13598400 queued and waiting for resources<br />srun: job 13598400 has been allocated resources
-taurusi1262 /home/mark;   # start interactive work with e.g. 4 cores.
-```
+If you are already familiar with Slurm, you might be more interested in our collection of
+[job examples](slurm_examples.md).
+There are also plenty of external resources regarding Slurm. We recommend these links for detailed
+information:
 
-**Note:** A dedicated partition `interactive` is reserved for short jobs (< 8h) with not more than
-one job per user. Please check the availability of nodes there with `sinfo -p interactive` .
+- [slurm.schedmd.com](https://slurm.schedmd.com/) provides the official documentation comprising
+   manual pages, tutorials, examples, etc.
+- [Comparison with other batch systems](https://www.schedmd.com/slurmdocs/rosetta.html)
 
-### Interactive X11/GUI Jobs
+## Job Submission
 
-Slurm will forward your X11 credentials to the first (or even all) node
-for a job with the (undocumented) --x11 option. For example, an
-interactive session for 1 hour with Matlab using eight cores can be
-started with:
+There are three basic Slurm commands for job submission and execution:
 
-```Shell Session
-module load matlab
-srun --ntasks=1 --cpus-per-task=8 --time=1:00:00 --pty --x11=first matlab
-```
+1. `srun`: Submit a job for execution or initiate job steps in real time.
+1. `sbatch`: Submit a batch script to Slurm for later execution.
+1. `salloc`: Obtain a Slurm job allocation (a set of nodes), execute a command, and then release the
+   allocation when the command is finished.
 
-**Note:** If you are getting the error:
+Using `srun` directly on the shell will be blocking and launch an
+[interactive job](#interactive-jobs). Apart from short test runs, it is recommended to submit your
+jobs to Slurm for later execution by using [batch jobs](#batch-jobs). For that, you can conveniently
+put the parameters directly in a [job file](#job-files) which you can submit using `sbatch [options]
+<job file>`.
 
-```Bash
-srun: error: x11: unable to connect node taurusiXXXX
-```
+During runtime, the environment variable `SLURM_JOB_ID` will be set to the id of your job. This
+unique id allows you to [manage and control](#manage-and-control-jobs) your jobs.
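+
+For example, using a hypothetical job id, you can query the status of a job and cancel it like
+this:
+
+```console
+marie@login$ squeue --job 13598400
+marie@login$ scancel 13598400
+```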
 
-that probably means you still have an old host key for the target node in your `\~/.ssh/known_hosts`
-file (e.g. from pre-SCS5). This can be solved either by removing the entry from your known_hosts or
-by simply deleting the known_hosts file altogether if you don't have important other entries in it.
-
-### Requesting an Nvidia K20X / K80 / A100
-
-Slurm will allocate one or many GPUs for your job if requested. Please note that GPUs are only
-available in certain partitions, like `gpu2`, `gpu3` or `gpu2-interactive`. The option
-for sbatch/srun in this case is `--gres=gpu:[NUM_PER_NODE]` (where `NUM_PER_NODE` can be `1`, 2 or
-4, meaning that one, two or four of the GPUs per node will be used for the job). A sample job file
-could look like this
-
-```Bash
-#!/bin/bash
-#SBATCH -A Project1            # account CPU time to Project1
-#SBATCH --nodes=2              # request 2 nodes<br />#SBATCH --mincpus=1            # allocate one task per node...<br />#SBATCH --ntasks=2             # ...which means 2 tasks in total (see note below)
-#SBATCH --cpus-per-task=6      # use 6 threads per task
-#SBATCH --gres=gpu:1           # use 1 GPU per node (i.e. use one GPU per task)
-#SBATCH --time=01:00:00        # run for 1 hour
-srun ./your/cuda/application   # start you application (probably requires MPI to use both nodes)
-```
+## Options
 
-Please be aware that the partitions `gpu`, `gpu1` and `gpu2` can only be used for non-interactive
-jobs which are submitted by `sbatch`.  Interactive jobs (`salloc`, `srun`) will have to use the
-partition `gpu-interactive`. Slurm will automatically select the right partition if the partition
-parameter (-p) is omitted.
+The following table holds the most important options for `srun/sbatch/salloc` to specify resource
+requirements and control communication.
 
-**Note:** Due to an unresolved issue concerning the Slurm job scheduling behavior, it is currently
-not practical to use `--ntasks-per-node` together with GPU jobs.  If you want to use multiple nodes,
-please use the parameters `--ntasks` and `--mincpus` instead. The values of mincpus \* nodes has to
-equal ntasks in this case.
+??? tip "Options Table"
 
-### Limitations of GPU job allocations
+    | Slurm Option               | Description |
+    |:---------------------------|:------------|
+    | `-n, --ntasks=<N>`         | number of (MPI) tasks (default: 1) |
+    | `-N, --nodes=<N>`          | number of nodes; there will be `--ntasks-per-node` processes started on each node |
+    | `--ntasks-per-node=<N>`    | number of tasks per allocated node to start (default: 1) |
+    | `-c, --cpus-per-task=<N>`  | number of CPUs per task; needed for multithreaded (e.g. OpenMP) jobs; typically `N` should be equal to `OMP_NUM_THREADS` |
+    | `-p, --partition=<name>`   | type of nodes where you want to execute your job (refer to [partitions](partitions_and_limits.md)) |
+    | `--mem-per-cpu=<size>`     | memory need per allocated CPU in MB |
+    | `-t, --time=<HH:MM:SS>`    | maximum runtime of the job |
+    | `--mail-user=<your email>` | get updates about the status of the jobs |
+    | `--mail-type=ALL`          | for what type of events you want to get a mail; valid options: `ALL`, `BEGIN`, `END`, `FAIL`, `REQUEUE` |
+    | `-J, --job-name=<name>`    | name of the job shown in the queue and in mails (cut after 24 chars) |
+    | `--no-requeue`             | disable requeueing of the job in case of node failure (default: enabled) |
+    | `--exclusive`              | exclusive usage of compute nodes; you will be charged for all CPUs/cores on the node |
+    | `-A, --account=<project>`  | charge resources used by this job to the specified project |
+    | `-o, --output=<filename>`  | file to save all normal output (stdout) (default: `slurm-%j.out`) |
+    | `-e, --error=<filename>`   | file to save all error output (stderr)  (default: `slurm-%j.out`) |
+    | `-a, --array=<arg>`        | submit an array job ([examples](slurm_examples.md#array-jobs)) |
+    | `-w <node1>,<node2>,...`   | restrict job to run on specific nodes only |
+    | `-x <node1>,<node2>,...`   | exclude specific nodes from job |
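+
+For illustration, several of these options combined on the command line could look like the
+following sketch (hypothetical values and a hypothetical job file `job.sh`):
+
+```console
+marie@login$ sbatch --ntasks=4 --mem-per-cpu=1700 --time=02:00:00 --job-name=fancyExp job.sh
+```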
 
-The number of cores per node that are currently allowed to be allocated for GPU jobs is limited
-depending on how many GPUs are being requested.  On the K80 nodes, you may only request up to 6
-cores per requested GPU (8 per on the K20 nodes). This is because we do not wish that GPUs remain
-unusable due to all cores on a node being used by a single job which does not, at the same time,
-request all GPUs.
+!!! note "Output and Error Files"
 
-E.g., if you specify `--gres=gpu:2`, your total number of cores per node (meaning: ntasks \*
-cpus-per-task) may not exceed 12 (on the K80 nodes)
+    When redirecting stdout and stderr into files using `--output=<filename>` and
+    `--error=<filename>`, make sure the target path is writeable on the
+    compute nodes, i.e., it may not point to a read-only mounted
+    [filesystem](../data_lifecycle/overview.md) like `/projects`.
 
-Note that this also has implications for the use of the --exclusive parameter. Since this sets the
-number of allocated cores to 24 (or 16 on the K20X nodes), you also **must** request all four GPUs
-by specifying --gres=gpu:4, otherwise your job will not start. In the case of --exclusive, it won't
-be denied on submission, because this is evaluated in a later scheduling step. Jobs that directly
-request too many cores per GPU will be denied with the error message:
+!!! note "No free lunch"
 
-```Shell Session
-Batch job submission failed: Requested node configuration is not available
-```
+    Runtime and memory limits are enforced. Please refer to the section on [partitions and
+    limits](partitions_and_limits.md) for a detailed overview.
 
-### Parallel Jobs
+### Host List
 
-For submitting parallel jobs, a few rules have to be understood and followed. In general, they
-depend on the type of parallelization and architecture.
-
-#### OpenMP Jobs
-
-An SMP-parallel job can only run within a node, so it is necessary to include the options `-N 1` and
-`-n 1`. The maximum number of processors for an SMP-parallel program is 488 on Venus and 56 on
-taurus (smp island). Using --cpus-per-task N Slurm will start one task and you will have N CPUs
-available for your job. An example job file would look like:
-
-```Bash
-#!/bin/bash
-#SBATCH -J Science1
-#SBATCH --nodes=1
-#SBATCH --tasks-per-node=1
-#SBATCH --cpus-per-task=8
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=08:00:00
+If you want to place your job onto specific nodes, there are two options for doing this. Either use
+`-p, --partition=<name>` to specify a host group, aka [partition](partitions_and_limits.md), that
+fits your needs. Or use `-w, --nodelist=<host1,host2,..>` with a list of hosts that will work for
+you, as shown in the sketch below.
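+
+A sketch with hypothetical node names and a hypothetical job file `job.sh` (the nodes must belong
+to the selected partition):
+
+```console
+marie@login$ sbatch --partition=romeo --nodelist=taurusi7001,taurusi7002 job.sh
+```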
 
-export OMP_NUM_THREADS=8
-./path/to/binary
-```
+## Interactive Jobs
 
-#### MPI Jobs
+Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
+the login nodes. For longer interactive sessions, you can allocate cores on a compute node with the
+command `salloc`. It takes the same options as `sbatch` to specify the required resources.
 
-For MPI jobs one typically allocates one core per task that has to be started. **Please note:**
-There are different MPI libraries on Taurus and Venus, so you have to compile the binaries
-specifically for their target.
+`salloc` returns a new shell on the node, where you submitted the job. You need to use the command
+`srun` in front of the following commands to have these commands executed on the allocated
+resources. If you allocate more than one task, please be aware that `srun` will run the command on
+each allocated task!
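+
+A minimal sketch of this workflow (the requested resources are only examples):
+
+```console
+marie@login$ salloc --ntasks=1 --cpus-per-task=4 --time=1:00:00 --mem-per-cpu=1700
+marie@login$ srun hostname
+```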
 
-```Bash
-#!/bin/bash
-#SBATCH -J Science1
-#SBATCH --ntasks=864
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=08:00:00
+The syntax for submitting a job is
 
-srun ./path/to/binary
+```console
+marie@login$ srun [options] <command>
 ```
 
-#### Multiple Programs Running Simultaneously in a Job
-
-In this short example, our goal is to run four instances of a program concurrently in a **single**
-batch script. Of course we could also start a batch script four times with sbatch but this is not
-what we want to do here. Please have a look at [Running Multiple GPU Applications Simultaneously in
-a Batch Job] todo Compendium.RunningNxGpuAppsInOneJob in case you intend to run GPU programs
-simultaneously in a **single** job.
-
-```Bash
-#!/bin/bash
-#SBATCH -J PseudoParallelJobs
-#SBATCH --ntasks=4
-#SBATCH --cpus-per-task=1
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=01:00:00 
+An example of an interactive session looks like:
 
-# The following sleep command was reported to fix warnings/errors with srun by users (feel free to uncomment).
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+```console
+marie@login$ srun --pty -n 1 -c 4 --time=1:00:00 --mem-per-cpu=1700 bash
+srun: job 13598400 queued and waiting for resources
+srun: job 13598400 has been allocated resources
+marie@compute$ # Now, you can start interactive work with e.g. 4 cores
+```
 
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+!!! note "Partition `interactive`"
 
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+    A dedicated partition `interactive` is reserved for short jobs (< 8h) with not more than one job
+    per user. Please check the availability of nodes there with `sinfo -p interactive`.
 
-#sleep 5
-srun --exclusive --ntasks=1 ./path/to/binary &
+### Interactive X11/GUI Jobs
 
-echo "Waiting for parallel job steps to complete..."
-wait
-echo "All parallel job steps completed!"
-```
+Slurm will forward your X11 credentials to the first (or even all) node for a job with the
+(undocumented) `--x11` option. For example, an interactive session for one hour with Matlab using
+eight cores can be started with:
 
-### Exclusive Jobs for Benchmarking
-
-Jobs on taurus run, by default, in shared-mode, meaning that multiple jobs can run on the same
-compute nodes. Sometimes, this behaviour is not desired (e.g. for benchmarking purposes), in which
-case it can be turned off by specifying the Slurm parameter: `--exclusive` .
-
-Setting `--exclusive` **only** makes sure that there will be **no other jobs running on your nodes**.
-It does not, however, mean that you automatically get access to all the resources which the node
-might provide without explicitly requesting them, e.g. you still have to request a GPU via the
-generic resources parameter (gres) to run on the GPU partitions, or you still have to request all
-cores of a node if you need them. CPU cores can either to be used for a task (`--ntasks`) or for
-multi-threading within the same task (--cpus-per-task). Since those two options are semantically
-different (e.g., the former will influence how many MPI processes will be spawned by 'srun' whereas
-the latter does not), Slurm cannot determine automatically which of the two you might want to use.
-Since we use cgroups for separation of jobs, your job is not allowed to use more resources than
-requested.*
-
-If you just want to use all available cores in a node, you have to
-specify how Slurm should organize them, like with \<span>"-p haswell -c
-24\</span>" or "\<span>-p haswell --ntasks-per-node=24". \</span>
-
-Here is a short example to ensure that a benchmark is not spoiled by
-other jobs, even if it doesn't use up all resources in the nodes:
-
-```Bash
-#!/bin/bash
-#SBATCH -J Benchmark
-#SBATCH -p haswell
-#SBATCH --nodes=2
-#SBATCH --ntasks-per-node=2
-#SBATCH --cpus-per-task=8
-#SBATCH --exclusive    # ensure that nobody spoils my measurement on 2 x 2 x 8 cores
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=00:10:00
-
-srun ./my_benchmark
+```console
+marie@login$ module load matlab
+marie@login$ srun --ntasks=1 --cpus-per-task=8 --time=1:00:00 --pty --x11=first matlab
 ```
 
-### Array Jobs
-
-Array jobs can be used to create a sequence of jobs that share the same executable and resource
-requirements, but have different input files, to be submitted, controlled, and monitored as a single
-unit. The arguments `-a` or `--array` take an additional parameter that specify the array indices.
-Within the job you can read the environment variables `SLURM_ARRAY_JOB_ID`, which will be set to the
-first job ID of the array, and `SLURM_ARRAY_TASK_ID`, which will be set individually for each step.
-
-Within an array job, you can use %a and %A in addition to %j and %N
-(described above) to make the output file name specific to the job. %A
-will be replaced by the value of SLURM_ARRAY_JOB_ID and %a will be
-replaced by the value of SLURM_ARRAY_TASK_ID.
-
-Here is an example of how an array job can look like:
-
-```Bash
-#!/bin/bash
-#SBATCH -J Science1
-#SBATCH --array 0-9
-#SBATCH -o arraytest-%A_%a.out
-#SBATCH -e arraytest-%A_%a.err
-#SBATCH --ntasks=864
-#SBATCH --mail-type=end
-#SBATCH --mail-user=your.name@tu-dresden.de
-#SBATCH --time=08:00:00
-
-echo "Hi, I am step $SLURM_ARRAY_TASK_ID in this array job $SLURM_ARRAY_JOB_ID"
-```
+!!! hint "X11 error"
 
-**Note:** If you submit a large number of jobs doing heavy I/O in the Lustre file systems you should
-limit the number of your simultaneously running job with a second parameter like:
+    If you are getting the error:
 
-```Bash
-#SBATCH --array=1-100000%100
-```
+    ```console
+    srun: error: x11: unable to connect node taurusiXXXX
+    ```
 
-For further details please read the Slurm documentation at
-(https://slurm.schedmd.com/sbatch.html)
-
-### Chain Jobs
-
-You can use chain jobs to create dependencies between jobs. This is often the case if a job relies
-on the result of one or more preceding jobs. Chain jobs can also be used if the runtime limit of the
-batch queues is not sufficient for your job. Slurm has an option `-d` or "--dependency" that allows
-to specify that a job is only allowed to start if another job finished.
-
-Here is an example of how a chain job can look like, the example submits 4 jobs (described in a job
-file) that will be executed one after each other with different CPU numbers:
-
-```Bash
-#!/bin/bash
-TASK_NUMBERS="1 2 4 8"
-DEPENDENCY=""
-JOB_FILE="myjob.slurm"
-
-for TASKS in $TASK_NUMBERS ; do
-    JOB_CMD="sbatch --ntasks=$TASKS"
-    if [ -n "$DEPENDENCY" ] ; then
-        JOB_CMD="$JOB_CMD --dependency afterany:$DEPENDENCY"
-    fi
-    JOB_CMD="$JOB_CMD $JOB_FILE"
-    echo -n "Running command: $JOB_CMD  "
-    OUT=`$JOB_CMD`
-    echo "Result: $OUT"
-    DEPENDENCY=`echo $OUT | awk '{print $4}'`
-done
-```
+    that probably means you still have an old host key for the target node in your
+    `~/.ssh/known_hosts` file (e.g. from pre-SCS5). This can be solved either by removing the entry
+    from your `known_hosts` file or by simply deleting the `known_hosts` file altogether if it does
+    not contain other important entries.
 
-### Binding and Distribution of Tasks
+## Batch Jobs
 
-The Slurm provides several binding strategies to place and bind the tasks and/or threads of your job
-to cores, sockets and nodes. Note: Keep in mind that the distribution method has a direct impact on
-the execution time of your application. The manipulation of the distribution can either speed up or
-slow down your application. More detailed information about the binding can be found
-[here](binding_and_distribution_of_tasks.md).
+Working interactively using `srun` and `salloc` is a good starting point for testing and compiling.
+But as soon as you leave the testing stage, we highly recommend using batch jobs.
+Batch jobs are encapsulated within [job files](#job-files) and submitted to the batch system using
+`sbatch` for later execution. A job file is basically a script holding the resource requirements,
+environment settings and the commands for executing the application. Using batch jobs and job files
+has multiple advantages:
 
-The default allocation of the tasks/threads for OpenMP, MPI and Hybrid (MPI and OpenMP) are as
-follows.
+* You can reproduce your experiments and work, because all steps are saved in a file.
+* You can easily share your settings and experimental setup with colleagues.
+* You can submit your job file to the scheduling system for later execution. In the meantime, you
+  can grab a coffee and proceed with other work (e.g., start writing a paper).
 
-#### OpenMP
+!!! hint "The syntax for submitting a job file to Slurm is"
 
-The illustration below shows the default binding of a pure OpenMP-job on 1 node with 16 cpus on
-which 16 threads are allocated.
+    ```console
+    marie@login$ sbatch [options] <job_file>
+    ```
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=1
-#SBATCH --tasks-per-node=1
-#SBATCH --cpus-per-task=16
+### Job Files
 
-export OMP_NUM_THREADS=16
+Job files have to be written with the following structure.
 
-srun --ntasks 1 --cpus-per-task $OMP_NUM_THREADS ./application
-```
+```bash
+#!/bin/bash                           # Batch script starts with shebang line
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAX4AAADeCAIAAAC10/zxAAAABmJLR0QA/wD/AP+gvaeTAAASvElEQVR4nO3de1BU5ePH8XMIBN0FVllusoouCuZ0UzMV7WtTDqV2GRU0spRm1GAqtG28zaBhNmU62jg2WWkXGWegNLVmqnFGQhsv/WEaXQxLaFEQdpfBXW4ul+X8/jgTQ1z8KQd4luX9+mv3Oc8+5zl7nv3wnLNnObKiKBIA9C8/0R0AMBgRPQAEIHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACOCv5cWyLPdWPwAMOFr+szuzHgACaJr1qLinBTDYaD/iYdYDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABCA6Bm8HnnkEVmWz54921YSFRV17Nix22/hl19+0ev1t18/JycnMTFRp9NFRUXdQUfhi4ieQS0sLGzt2rX9tjqj0bhmzZrs7Ox+WyO8FtEzqK1YsaK4uPirr77qvKiioiIlJSUiIsJkMr3yyisNDQ1q+bVr1x5//HGDwXDPPfecOXOmrX5NTU1GRsaoUaPCw8OfffbZqqqqzm3Omzdv8eLFo0aN6qPNwQBC9Axqer0+Ozt748aNzc3NHRYtWrQoICCguLj4/PnzFy5csFgsanlKSorJZKqsrPzuu+8+/PDDtvpLly612WwXL168evVqaGhoWlpav20FBiRFA+0tQKDZs2dv3bq1ubl5woQJe/bsURQlMjLy6NGjiqIUFRVJkmS329Wa+fn5QUFBHo+nqKhIluXq6mq1PCcnR6fTKYpSUlIiy3JbfZfLJcuy0+nscr25ubmRkZF9vXXoU9o/+/7CMg/ewd/ff9u2bStXrly2bFlbYVlZmU6nCw8PV5+azWa3211VVVVWVhYWFjZ8+HC1fPz48eoDq9Uqy/LUqVPbWggNDS0vLw8NDe2v7cAAQ/RAeuaZZ3bu3Llt27a2EpPJVF9f73A41PSxWq2BgYFGozEmJsbpdDY2NgYGBkqSVFlZqdYfPXq0LMuFhYVkDW4T53ogSZK0Y8eO3bt319bWqk/j4+OnT59usVjq6upsNltWVtby5cv9/PwmTJgwadKk9957T5KkxsbG3bt3q/Xj4uKSkpJWrFhRUVEhSZLD4Th8+HDntXg8HrfbrZ5XcrvdjY2N/bR58D5EDyRJkqZNmzZ//vy2r7FkWT58+HBDQ8PYsWMnTZp033337dq1S1106NCh/Pz8yZMnP/roo48++mhbC7m5uSNHjkxMTAwODp4+ffrp06c7r2Xfvn1Dhw5dtmyZzWYbOnRoWFhYP2wavJPcdsaoJy+WZUmStLQAYCDS/tln1gNAAKIHgABEDwABiB4AAhA9AAQgegAIQPQAEIDoASAA0QNAAKIHgABEDwABiB4AAhA9AAQgegAIQPQAEIDoASAA0QNAAKIHgABEDwABiB4AAhA9AAQgegAIQPQAEIDoASAA0QNAAH/tTaj3IQSA28esB4AAmu65DgA9w6wHgABEDwABiB4AAhA9AAQgegAIQPQAEEDTJYVcTDgY9OzyC8bGYKDl0hxmPQAE6IUfUnBRoq/SPnNhbPgq7WODWQ8AAYgeAAIQPQAEIHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABDA96Pn0qVLTz31lNFoHDZs2IQJE9avX9+DRiZMmHDs2LHbrPzAAw/k5eV1uSgnJycxMVGn00VFRfWgG+hdXjU2XnvttYkTJw4bNmz06NHr1q1ramrqQWcGEB+PntbW1ieeeGLkyJG//fZbVVVVXl6e2WwW2B+j0bhmzZrs7GyBfYDK28ZGXV3dRx99dO3atby8vLy8vDfeeENgZ/qDooH2FvratWvXJEm6dOlS50XXr19PTk4ODw+PiYl5+eWX6+vr1fIbN25kZGSMHj06ODh40qRJRUVFiqIkJCQcPXpUXTp79uxly5Y1NTW5XK709HSTyWQ0GpcsWeJwOBRFeeWVVwICAoxGY2xs7LJly7rsVW5ubmRkZF9tc+/Rsn8ZGz0bG6rNmzc//PDDvb/NvUf7/vXxWc/IkSPj4+PT09O/+OKLq1evtl+0aNGigICA4uLi8+fPX7hwwWKxqOWpqamlpaXnzp1zOp0HDhwIDg5ue0lpaenMmTNnzZp14MCBgICApUuX2my2ixcvXr16NTQ0NC0tTZKkPXv2TJw4cc+ePVar9cCBA/24rbgz3jw2Tp8+PWXKlN7fZq8iNvn6gc1m27Bhw+TJk/39/ceNG5ebm6soSlFRkSRJdrtdrZOfnx8UFOTxeIqLiyVJKi8v79BIQkLCpk2bTCbTRx99pJaUlJTIstzWgsvlkmXZ6XQqinL//fera+kOsx4v4YVjQ1GUzZs3jx07tqqqqhe3tNf1QnqIXX1/qq2t3blzp5+f36+//nrixAmdTte26J9//pEkyWaz5efnDxs2rPNrExISIiMjp02b5na71ZIffvjBz88vth2DwfDHH38oRI/m1/Y/7xkbW7ZsMZvNVqu1V7ev92nfvz5+wNWeXq+3WCxBQUG//vqryWSqr693OBzqIqvVGhgYqB6ENzQ0VFRUdH757t27w8PDn3766YaGBkmSRo8eLctyYWGh9V83btyYOHGiJEl+foPoXfUNXjI2NmzYcPDgwVOnTsXGxvbBVnoXH/+QVFZWrl279uLFi/X19dXV1e+8805zc/PUqVPj4+OnT59usVjq6upsNltWVtby5cv9/Pzi4uKSkpJWrVpVUVGhKMrvv//eNtQCAwOPHDkSEhIyd+7c2tpateaKFSvUCg6H4/Dhw2rNqKioy5cvd9kfj8fjdrubm5slSXK73Y2Njf3yNqAL3jY2MjMzjxw5cvz4caPR6Ha7ff7LdR8/4HK5XCtXrhw/fvzQoUMNBsPMmTO//fZbdVFZWdnChQuNRmN0dHRGRkZdXZ1aXl1dvXLlypiYmODg4MmTJ1++fFlp9y1GS0vLCy+88NBDD1VXVzudzszMzDFjxuj1erPZvHr1arWFkydPjh8/3mAwLFq0qEN/9u7d2/7Nbz+x90Ja9i9j447Gxo0bNzp8MOPi4vrvvbhz2vevrGi4XYl6QwwtLcCbadm/jA3fpn3/+vgBFwDvRPQAEIDoASAA0QNAAKIHgABEDwABiB4AAhA9AAQgegAIQPQAEIDoASAA0QNAAKIHgABEDwAB/LU3of58HuiMsYHuMOsBIICmfxUGAD3DrAeAAEQPAAGIHgACED0ABCB6AAhA9AAQgOgBIICmq5m5VnUw0HILQPg2bgEIYIDphd9wcT20r9I+c2Fs+CrtY4NZDwABiB4AAhA9AAQgegAIQPQAEIDoASAA0QNAAKIHgAA+Gz1nzpyZP3/+iBEjdDrd
vffem5WVVV9f3w/rbWlpyczMHDFiREhIyNKlS2tqarqsptfr5XYCAwMbGxv7oXuDlqjxYLPZFi9ebDQaDQbD448/fvny5S6r5eTkJCYm6nS6qKio9uVpaWntx0leXl4/9Ll/+Gb0fPPNN4899tj9999/7tw5u91+8OBBu91eWFh4O69VFKW5ubnHq96yZcvx48fPnz9/5cqV0tLS9PT0LqvZbLbafy1cuHDBggWBgYE9XiluTeB4yMjIcDqdf/31V3l5eXR0dEpKSpfVjEbjmjVrsrOzOy+yWCxtQyU5ObnHPfE6igbaW+gLHo/HZDJZLJYO5a2trYqiXL9+PTk5OTw8PCYm5uWXX66vr1eXJiQkZGVlzZo1Kz4+vqCgwOVypaenm0wmo9G4ZMkSh8OhVtu1a1dsbGxoaGh0dPTWrVs7rz0iIuLTTz9VHxcUFPj7+9+4ceMWvXU4HIGBgT/88IPGre4LWvav94wNseMhLi5u//796uOCggI/P7+WlpbuupqbmxsZGdm+ZPny5evXr+/ppvehXkgPsavvC+pfs4sXL3a5dMaMGampqTU1NRUVFTNmzHjppZfU8oSEhHvuuaeqqkp9+uSTTy5YsMDhcDQ0NKxatWr+/PmKoly+fFmv1//999+Kojidzp9//rlD4xUVFe1XrR5tnTlz5ha93bFjx/jx4zVsbh/yjegROB4URVm3bt1jjz1ms9lcLtfzzz+/cOHCW3S1y+iJjo42mUxTpkx59913m5qa7vwN6BNETxdOnDghSZLdbu+8qKioqP2i/Pz8oKAgj8ejKEpCQsL777+vlpeUlMiy3FbN5XLJsux0OouLi4cOHfrll1/W1NR0ueq//vpLkqSSkpK2Ej8/v++///4WvY2Pj9+xY8edb2V/8I3oETge1MqzZ89W342777776tWrt+hq5+g5fvz42bNn//7778OHD8fExHSeu4miff/64Lme8PBwSZLKy8s7LyorK9PpdGoFSZLMZrPb7a6qqlKfjhw5Un1gtVplWZ46deqYMWPGjBlz3333hYaGlpeXm83mnJycDz74ICoq6n//+9+pU6c6tB8cHCxJksvlUp/W1ta2traGhIR8/vnnbWcK29cvKCiwWq1paWm9te3oTOB4UBRlzpw5ZrO5urq6rq5u8eLFs2bNqq+v7248dJaUlDRjxoxx48YtWrTo3XffPXjwoJa3wruITb6+oB7bv/766x3KW1tbO/yVKygoCAwMbPsrd/ToUbX8ypUrd911l9Pp7G4VDQ0Nb7/99vDhw9XzBe1FRER89tln6uOTJ0/e+lzPkiVLnn322TvbvH6kZf96z9gQOB4cDofU6QD8p59+6q6dzrOe9r788ssRI0bcalP7US+kh9jV95Gvv/46KCho06ZNxcXFbrf7999/z8jIOHPmTGtr6/Tp059//vna2trKysqZM2euWrVKfUn7oaYoyty5c5OTk69fv64oit1uP3TokKIof/75Z35+vtvtVhRl3759ERERnaMnKysrISGhpKTEZrM9/PDDqamp3XXSbrcPGTLEO08wq3wjehSh4yE2NnblypUul+vmzZtvvvmmXq+vrq7u3MOWlpabN2/m5ORERkbevHlTbdPj8ezfv99qtTqdzpMnT8bFxbWdihKO6OnW6dOn586dazAYhg0bdu+9977zzjvqlxdlZWULFy40Go3R0dEZGRl1dXVq/Q5Dzel0ZmZmjhkzRq/Xm83m1atXK4py4cKFhx56KCQkZPjw4dOmTfvxxx87r7epqenVV181GAx6vT41NdXlcnXXw+3bt3vtCWaVz0SPIm48FBYWJiUlDR8+PCQkZMaMGd39pdm7d2/7YxGdTqcoisfjmTNnTlhY2JAhQ8xm88aNGxsaGnr9nekZ7ftX0z3X1SNVLS3Am2nZv4wN36Z9//rgaWYA3o/oASAA0QNAAKIHgABEDwABiB4AAhA9AAQgegAIQPQAEKAX7rmu/e7L8FWMDXSHWQ8AATT9hgsAeoZZDwABiB4AAhA9AAQgegAIQPQAEIDoASAA0QNAAE1XM3OtKjCY8b+ZAQwwvfAbLq6HBgYb7Uc8zHoACED0ABCA6AEgANGDjlpaWjIzM0eMGBESErJ06dKampouq+Xk5CQmJup0uqioqA6L0tLS5Hby8vL6vtcYYIgedLRly5bjx4+fP3/+ypUrpaWl6enpXVYzGo1r1qzJzs7ucqnFYqn9V3Jych92FwMT0YOOPv744w0bNpjN5oiIiLfeeuvQoUNOp7NztXnz5i1evHjUqFFdNhIQEKD/l79/L3yRCh9D9OA/Kisr7Xb7pEmT1KdTpkxpaWm5dOnSnbaTk5MzatSoBx98cPv27c3Nzb3dTQx4/DnCf9TW1kqSFBoaqj4NDg728/Pr7nRPd5577rmXXnopPDy8sLBw9erVNptt586dvd9XDGRED/4jODhYkiSXy6U+ra2tbW1tDQkJ+fzzz1988UW18P+9iDQpKUl9MG7cOLfbbbFYiB50wAEX/iMqKioiIuKXX35Rn164cMHf33/ixIlpaWnKv+6owSFDhrS0tPRBTzGwET3oaNWqVdu2bfvnn3/sdvumTZtSUlIMBkPnah6Px+12q+dx3G53Y2OjWt7a2vrJJ5+Ulpa6XK5Tp05t3LgxJSWlXzcAA4KigfYW4IWamppeffVVg8Gg1+tTU1NdLleX1fbu3dt+IOl0OrXc4/HMmTMnLCxsyJAhZrN548aNDQ0N/dh99Aftn31NN8NRf0KmpQUAA5H2zz4HXAAEIHoACED0ABCA6AEgQC9cUsh/aAZwp5j1ABBA05frANAzzHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABCA6AEgANEDQACiB4AARA8AAYgeAAIQPQAEIHoACED0ABCA6AEgwP8BhqBe/aVBoe8AAAAASUVORK5CYII="
-/>
+#SBATCH --ntasks=24                   # All #SBATCH lines have to follow uninterrupted
+#SBATCH --time=01:00:00               # after the shebang line
+#SBATCH --account=<KTR>               # Comments start with # and do not count as interruptions
+#SBATCH --job-name=fancyExp
+#SBATCH --output=simulation-%j.out
+#SBATCH --error=simulation-%j.err
 
-#### MPI
+module purge                          # Set up environment, e.g., clean modules environment
+module load <modules>                 # and load necessary modules
 
-The illustration below shows the default binding of a pure MPI-job. In
-which 32 global ranks are distributed onto 2 nodes with 16 cores each.
-Each rank has 1 core assigned to it.
+srun ./application [options]          # Execute parallel application with srun
+```
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=16
-#SBATCH --cpus-per-task=1
+The following two examples show the basic resource specifications for a pure OpenMP application and
+a pure MPI application, respectively. Within the section [Job Examples](slurm_examples.md) we
+provide a comprehensive collection of job examples.
 
-srun --ntasks 32 ./application
-```
+??? example "Job file OpenMP"
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAw4AAADeCAIAAAAb9sCoAAAABmJLR0QA/wD/AP+gvaeTAAAfBklEQVR4nO3dfXBU1f348bshJEA2ISGbB0gIZAMJxqciIhCktGKxaqs14UEGC9gBJVUjxIo4EwFlpiqMOgydWipazTBNVATbGevQMQQYUMdSEEUNYGIID8kmMewmm2TzeH9/3On+9pvN2T27N9nsJu/XX+Tu/dx77uee8+GTu8tiUFVVAQAAQH/ChnoAAAAAwYtWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQIhWCQAAQChcT7DBYBiocQAIOaqqDvUQfEC9AkYyPfWKp0oAAABCup4qaULrN0sA+oXuExrqFTDS6K9XPFUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUCAAAQolUauX72s58ZDIZPP/3UuSU5OfnDDz+UP8KXX35pNBrl9y8uLs7JyYmKikpOTvZhoABGvMDXq40bN2ZnZ48bNy4tLW3Tpk2dnZ0+DBfDC63SiBYfH//0008H7HQmk2nDhg3btm0L2BkBDBsBrld2u33Pnj2XLl0qLS0tLS3dunVrwE6NYEOrNKKtXbu2srLygw8+cH+ptrZ26dKliYmJqampjz/+eFtbm7b90qVLd911V2xs7A033HDixAnn/s3Nzfn5+ZMnT05ISHjwwQcbGxvdj3nPPfcsW7Zs8uTJg3Q5AIaxANerN954Y8GCBfHx8Tk5OQ8//LBrOEYaWqURzWg0btu27dlnn+3q6urzUl5e3ujRoysrK0+ePHnq1KnCwkJt+9KlS1NTU+vq6v71r3/95S9/ce6/cuVKi8Vy+vTpmpqa8ePHr1mzJmBXAWAkGMJ6dfz48VmzZg3o1SCkqDroPwKG0MKFC7dv397V1TVjxozdu3erqpqUlHTw4EFVVSsqKhRFqa+v1/YsKysbM2ZMT09PRUWFwWBoamrSthcXF0dFRamqWlVVZTAYnPvbbDaDwWC1Wvs9b0lJSVJS0mBfHQZVKK79UBwznIaqXqmqumXLlvT09MbGxkG9QAwe/Ws/PNCtGYJMeHj4Sy+9tG7dulWrVjk3Xr58OSoqKiEhQfvRbDY7HI7GxsbLly/Hx8fHxcVp26dPn679obq62mAwzJ4923mE8ePHX7lyZfz48YG6DgDDX+Dr1QsvvLBv377y8vL4+PjBuioEPVolKPfff/8rr7zy0ksvObekpqa2trY2NDRo1ae6ujoyMtJkMqWkpFit1o6OjsjISEVR6urqtP3T0tIMBsOZM2fojQAMqkDWq82bNx84cODo0aOpqamDdkEIAXxWCYqiKDt37ty1a1dLS4v2Y2Zm5ty5cwsLC+12u8ViKSoqWr16dVhY2IwZM2bOnPnaa68pitLR0bFr1y5t/4yMjMWLF69du7a2tlZRlIaGhv3797ufpaenx+FwaJ8zcDgcHR0dAbo8AMNIYOpVQUHBgQMHDh06ZDKZHA4HXxYwktEqQVEUZc6cOffee6/zn40YDIb9+/e3tbWlp6fPnDnzpptuevXVV7WX3n///bKysltuueWOO+644447nEcoKSmZNGlSTk5OdHT03Llzjx8/7n6WN954Y+zYsatWrbJYLGPHjuWBNgA/BKBeWa3W3bt3X7hwwWw2jx07duzYsdnZ2YG5OgQhg/MTT/4EGwyKoug5AoBQFIprPxTHDEA//Wufp0oAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABCtEoAAABC4UM9AAAInKqqqqEeAoAQY1BV1f9gg0FRFD1HABCKQnHta2MGMDLpqVcD8FSJAgQg+JnN5qEeAoCQNABPlQCMTKH1VAkA/KOrVQIAABje+BdwAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrq+gpLvVRoJ/Ps6CebGSBBaXzXCnBwJqFcQ0VOveKoEAAAgNAD/sUlo/WYJefp/02JuDFeh+1s4c3K4ol5BRP/c4KkSAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACAEK0SAACA0PBvlb799ttf//rXJpNp3LhxM2bMeOaZZ/w4yIwZMz788EPJnX/yk5+Ulpb2+1JxcXFOTk5UVFRycrIfw8DACqq5sXHjxuzs7HHjxqWlpW3atKmzs9OPwSDUBdWcpF4FlaCaGyOtXg3zVqm3t/eXv/zlpEmTvv7668bGxtLSUrPZPITjMZlMGzZs2LZt2xCOAZpgmxt2u33Pnj2XLl0qLS0tLS3dunXrEA4GQyLY5iT1KngE29wYcfVK1UH/EQbbpUuXFEX59ttv3V+6evXqkiVLEhISUlJSHnvssdbWVm37tWvX8vPz09LSoqOjZ86cWVFRoapqVlbWwYMHtVcXLly4atWqzs5Om822fv361NRUk8m0fPnyhoYGVVUff/zx0aNHm0ymKVOmrFq1qt9RlZSUJCUlDdY1Dxw995e54d/c0GzZsmXBggUDf80DJ/jvr7vgH3NwzknqVTAIzrmhGQn1apg/VZo0aVJmZub69evffffdmpoa15fy8vJGjx5dWVl58uTJU6dOFRYWattXrFhx8eLFzz77zGq1vvPOO9HR0c6Qixcvzp8///bbb3/nnXdGjx69cuVKi8Vy+vTpmpqa8ePHr1mzRlGU3bt3Z2dn7969u7q6+p133gngtcI3wTw3jh8/PmvWrIG/ZgS3YJ6TGFrBPDdGRL0a2k4tACwWy+bNm2+55Zbw8PBp06aVlJSoqlpRUaEoSn19vbZPWVnZmDFjenp6KisrFUW5cuVKn4NkZWU999xzqampe/bs0bZUVVUZDAbnEWw2m8FgsFqtqqrefPPN2llE+C0tSATh3FBVdcuWLenp6Y2NjQN4pQMuJO5vHyEx5iCck9SrIBGEc0MdMfVq+LdKTi0tLa+88kpYWNhXX331ySefREVFOV/64YcfFEWxWCxlZWXjxo1zj83KykpKSpozZ47D4dC2HD58OCwsbIqL2NjYb775RqX06I4NvOCZG88//7zZbK6urh7Q6xt4oXV/NaE15uCZk9SrYBM8c2Pk1Kth/gacK6PRWFhYOGbMmK+++io1NbW1tbWh
oUF7qbq6OjIyUntTtq2trba21j18165dCQkJ9913X1tbm6IoaWlpBoPhzJkz1f9z7dq17OxsRVHCwkZQVoeHIJkbmzdv3rdv39GjR6dMmTIIV4lQEiRzEkEoSObGiKpXw3yR1NXVPf3006dPn25tbW1qanrxxRe7urpmz56dmZk5d+7cwsJCu91usViKiopWr14dFhaWkZGxePHiRx55pLa2VlXVs2fPOqdaZGTkgQMHYmJi7r777paWFm3PtWvXajs0NDTs379f2zM5OfncuXP9jqenp8fhcHR1dSmK4nA4Ojo6ApIG9CPY5kZBQcGBAwcOHTpkMpkcDsew/8e3cBdsc5J6FTyCbW6MuHo1tA+1BpvNZlu3bt306dPHjh0bGxs7f/78jz76SHvp8uXLubm5JpNp4sSJ+fn5drtd297U1LRu3bqUlJTo6Ohbbrnl3Llzqsu/Guju7v7tb3972223NTU1Wa3WgoKCqVOnGo1Gs9n85JNPakc4cuTI9OnTY2Nj8/Ly+ozn9ddfd02+64PTIKTn/jI3fJob165d67MwMzIyApcL3wX//XUX/GMOqjmpUq+CSVDNjRFYrwzOo/jBYDBop/f7CAhmeu4vc2N4C8X7G4pjhjzqFUT0399h/gYcAACAHrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQrRKAAAAQuH6D2EwGPQfBMMScwPBhjkJEeYGRHiqBAAAIGRQVXWoxwAAABCkeKoEAAAgRKsEAAAgRKsEAAAgRKsEAAAgRKsEAAAgRKsEAAAgRKsEAAAgpOvbuvlu05HAv2/eYm6MBKH1rWzMyZGAegURPfWKp0oAAABCA/B/wIXWb5aQp/83LebGcBW6v4UzJ4cr6hVE9M8NnioBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAI0SoBAAAIDdtW6cSJE/fee++ECROioqJuvPHGoqKi1tbWAJy3u7u7oKBgwoQJMTExK1eubG5u7nc3o9FocBEZGdnR0RGA4Y1YQzUfLBbLsmXLTCZTbGzsXXfdde7cuX53Ky4uzsnJiYqKSk5Odt2+Zs0a13lSWloagDEj8KhXcEW9CjbDs1X65z//uWjRoptvvvmzzz6rr6/ft29ffX39mTNnZGJVVe3q6vL71M8///yhQ4dOnjz5/fffX7x4cf369f3uZrFYWv4nNzf3gQceiIyM9Puk8GwI50N+fr7Vaj1//vyVK1cmTpy4dOnSfnczmUwbNmzYtm2b+0uFhYXOqbJkyRK/R4KgRb2CK+pVMFJ10H+EwdDT05OamlpYWNhne29vr6qqV69eXbJkSUJCQkpKymOPPdba2qq9mpWVVVRUdPvtt2dmZpaXl9tstvXr16empppMpuXLlzc0NGi7vfrqq1OmTBk/fvzEiRO3b9/ufvbExMS33npL+3N5eXl4ePi1a9c8jLahoSEyMvLw4cM6r3ow6Lm/wTM3hnY+ZGRk7N27V/tzeXl5WFhYd3e3aKglJSVJSUmuW1avXv3MM8/4e+mDKHjur7zgHDP1aqBQr6hXIgPQ7Qzt6QeD1n2fPn2631fnzZu3YsWK5ubm2traefPmPfroo9r2rKysG264obGxUfvxV7/61QMPPNDQ0NDW1vbII4/ce++9qqqeO3fOaDReuHBBVVWr1frf//63z8Fra2tdT609zT5x4oSH0e7cuXP69Ok6LncQDY/SM4TzQVXVTZs2LVq0yGKx2Gy2hx56KDc318NQ+y09EydOTE1NnTVr1ssvv9zZ2el7AgZF8NxfecE5ZurVQKFeUa9EaJX68cknnyiKUl9f7/5SRUWF60tlZWVjxozp6elRVTUrK+tPf/qTtr2qqspgMDh3s9lsBoPBarVWVlaOHTv2vffea25u7vfU58+fVxSlqqrKuSUsLOzjjz/2MNrMzMydO3f6fpWBMDxKzxDOB23nhQsXatm47rrrampqPAzVvfQcOnTo008/vXDhwv79+1NSUtx/1xwqwXN/5QXnmKlXA4V6pW2nXrnTf3+H4WeVEhISFEW5cuWK+0uXL1+OiorSdlAUxWw2OxyOxsZG7cdJkyZpf6iurjYYDLNnz546derUqVNvuumm8ePHX7lyxWw2FxcX//nPf05OTv7pT3969OjRPsePjo5WFMVms2k/trS09Pb2xsTEvP32285PurnuX15eXl1dvWbNmoG6drgbwvmgquqdd95pNpubmprsdvuyZctuv/321tZW0Xxwt3jx4nnz5k2bNi0vL+/ll1/et2+fnlQgCFGv4Ip6FaSGtlMbDNp7vU899VSf7b29vX268vLy8sjISGdXfvDgQW37999/P2rUKKvVKjpFW1vbH//4x7i4OO39Y1eJiYl/+9vftD8fOXLE83v/y5cvf/DBB327vADSc3+DZ24M4XxoaGhQ3N7g+Pzzz0XHcf8tzdV77703YcIET5caQMFzf+UF55ipVwOFeqVtp165G4BuZ2hPP0j+8Y9/jBkz5rnnnqusrHQ4HGfPns3Pzz9x4kRvb+/cuXMfeuihlpaWurq6+fPnP/LII1qI61RTVfXuu+9esmTJ1atXVVWtr69///33VVX97rvvysrKHA6HqqpvvPFGYmKie+kpKirKysqqqqqyWCwLFixYsWKFaJD19fURERHB+QFJzfAoPeqQzocpU6asW7fOZrO1t7e/8MILRqOxqanJfYTd3d3t7e3FxcVJSUnt7e3aMXt6evbu3VtdXW21Wo8cOZKRkeH8aMKQC6r7Kylox0y9GhDUK+cRqFd90CoJHT9+/O67746NjR03btyNN9744osvav9Y4PLly7m5uSaTaeLEifn5+Xa7Xdu/z1SzWq0FBQVTp041Go1ms/nJJ59UVfXUqVO33XZbTExMXFzcnDlzjh075n7ezs7OJ554IjY21mg0rlixwmaziUa4Y8eOoP2ApGbYlB516ObDmTNnFi9eHBcXFxMTM2/ePNHfNK+//rrrs96oqChVVXt6eu688874+PiIiAiz2fzss8+2tbUNeGb8E2z3V0Ywj5l6pR/1yhlOvepD//01OI/iB+2dSz1HQDDTc3+ZG8NbKN7fUBwz5FGvIKL//g7Dj3UDAAAMFFolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAIVolAAAAoXD9h/D6vw1jxGJuINgwJyHC3IAIT5UAAACEdP0fcAAAAMMbT5UAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEdH1bN99tOhL4981bzI2RILS+lY05ORJQryCip17xVAkAAEBoAP4POD1dPLHBH6tHKF4vsfKxoSgU80ysfKweoXi9xMrH6sFTJQAAACFaJQAAACFaJQAAAKFBaZW6u7sLCgomTJgQExOzcuXK5uZm+diNGzdmZ2ePGzcuLS1t06ZNnZ2dfpx95syZBoOhrq7Op8B///v
fc+bMGTNmTEJCwqZNm+QDLRbLsmXLTCZTbGzsXXfdde7cOc/7FxcX5+TkREVFJScn9xm517yJYmXyJop1nt2/vPnE8xg8KyoqSk9Pj4yMjI+Pv++++77//nv52DVr1hhclJaWyscajUbX2MjIyI6ODsnYy5cv5+XlxcfHT5gw4fe//73XQFF+ZPIm2kcmb6JYPXkLZh7y6bUOiGJl6oBoncqsfVGszNr3vI/nte8h1muuRLEyuRLNWz1/v8gQ3V+ZOiCKlakDolzJrH1RrMzaF8XKrH1RrEyuRLEyuRJdl56/X7xQdRAdoaioKDMzs7Ky0mKxzJ8/f8WKFfKxa9euPXbsWGNj44kTJyZPnrx582b5WM327dsXLVqkKEptba18bFlZmdFo/Otf/1pXV1dTU3Ps2DH52AceeOAXv/jFjz/+aLfbV69efeONN3qO/eijj959990dO3YkJSW57iPKm0ysKG8ysRr3vOmZIaJYz2PwHPv5559XVlY2NzdXVVXdf//9OTk58rGrV68uLCxs+Z+uri75WLvd7gzMzc1dvny5fOxtt9324IMP2my2q1evzp0798knn/QcK8qPaLtMrChvMrGivOmvHoEnc72iOiATK6oDrrGidSqz9kWxMmvfc131vPZFsTK5EsXK5Eo0b2Vy5SuZ+yuqAzKxojogkyuZtS+KlVn7oliZtS+KlcmVKFYmV6LrksmVfwalVUpMTHzrrbe0P5eXl4eHh1+7dk0y1tWWLVsWLFggf15VVb/55puMjIwvvvhC8bFVysnJeeaZZzyPRxSbkZGxd+9e7c/l5eVhYWHd3d1eY0tKSvrcTlHeZGJdueZNMrbfvA1U6XHnefxez9vZ2Zmfn3/PPffIx65evdrv++vU0NAQGRl5+PBhydgrV64oilJRUaH9ePDgQaPR2NHR4TVWlB/37T7NjT55k4kV5U1/6Qk8mesV1QGZWFEdEOXKdZ3Kr333WNF2yVif1r5rrHyu3GN9ylWfeetrrmT4tI761AGvsR7qgPz9lVn7olhVYu27x/q69vs9r9dc9Yn1NVf9/l0gnyt5A/8GXF1dXX19/cyZM7UfZ82a1d3d/e233/pxqOPHj8+aNUt+/56ent/97nevvfZadHS0TydyOByff/55T0/PddddFxcXt2jRoq+++ko+PC8vr6SkpL6+vrm5+c033/zNb34zatQonwaghGbeAq+4uDg5OTk6Ovrrr7/++9//7mvs5MmTb7311h07dnR1dflx9rfffjstLe3nP/+55P7OJepkt9t9et9woAxt3kJFgOuAc536sfZFa1xm7bvu4+vad8b6kSvX80rmyn3eDmCd9FsA6oCvNdxDrE9r3z1Wfu33O2bJXDlj5XOlp6b5Q0+f1e8Rzp8/ryhKVVXV/2/HwsI+/vhjmVhXW7ZsSU9Pb2xslDyvqqo7d+5cunSpqqrfffed4stTpdraWkVR0tPTz549a7fbN2zYkJKSYrfbJc9rs9kWLlyovXrdddfV1NTInLdP5+shb15jXfXJm0ysKG96ZojnWL+fKrW1tV29evXYsWMzZ85cu3atfOyhQ4c+/fTTCxcu7N+/PyUlpbCw0Ncxq6qamZm5c+dOn8Z86623Oh8mz5s3T1GUzz77zGvsgD9V6jdvMrGivOmvHoHn9Xo91AGZXInqQL+5cl2nPq19VVwbva599318WvuusT7lyv28krlyn7e+5kqSTzW2Tx2QiRXVAfn7K/mkxD1Wcu27x/q09kVz0muu3GMlc+Xh74LBeKo08K2StoROnz6t/ah95u7EiRMysU7PP/+82Wyurq6WP++FCxcmTZpUV1en+t4qtbS0KIqyY8cO7cf29vZRo0YdPXpUJra3t3f27NkPP/xwU1OT3W7funVrWlqaTJvVb5nuN2/yy9g9b15jPeRtYEuPzPjlz3vs2DGDwdDa2upH7L59+xITE3097+HDhyMiIhoaGnwa88WLF3Nzc5OSktLT07du3aooyvnz573GDtIbcOr/zZuvsa550196As/r9XqoA15jPdQB99g+69SntS+qjTJrv88+Pq39PrE+5apPrE+50jjnrU+5kie/FtzrgEysqA7I31+Zte/5703Pa99zrOe1L4qVyZV7rHyu3K9LExpvwCUnJycmJn755Zfaj6dOnQoPD8/OzpY/wubNm/ft23f06NEpU6bIRx0/fryxsfH66683mUxaK3r99de/+eabMrFGo3HatGnOL/T06Zs9f/zxx//85z8FBQVxcXFRUVFPPfVUTU3N2bNn5Y+gCcW8Da1Ro0b58UanoigRERHd3d2+Ru3Zsyc3N9dkMvkUlZaW9sEHH9TV1VVVVaWmpqakpEybNs3XUw+sAOcthASmDrivU/m1L1rjMmvffR/5te8eK58r91j/aqY2b/XXSZ0GtQ74V8PlY0Vr32ush7XvIdZrrvqN9aNm+l3TfKCnzxIdoaioKCsrq6qqymKxLFiwwKd/AffEE09Mnz69qqqqvb29vb3d/TOwotjW1tZL/3PkyBFFUU6dOiX/Jtqrr75qNpvPnTvX3t7+hz/8YfLkyfJPLKZMmbJu3Tqbzdbe3v7CCy8YjcampiYPsd3d3e3t7cXFxUlJSe3t7Q6HQ9suyptMrChvXmM95E3PDBHFisbvNbazs/PFF1+sqKiwWq1ffPHFrbfempeXJxnb09Ozd+/e6upqq9V65MiRjIyMRx99VH7MqqrW19dHRET0+4Fuz7EnT5784YcfGhsbDxw4kJCQ8Pbbb3uOFeVHtN1rrIe8eY31kDf91SPwZPIsqgMysaI64BorWqcya18UK7P2+91Hcu2Lji+TK1Gs11x5mLcyuRqMuaEK6oBMrKgOyORKZu33Gyu59vuNlVz7Hv6+9porUazXXHm4Lplc+WdQWqXOzs4nnngiNjbWaDSuWLHCZrNJxl67dk35vzIyMuTP6+TrG3Cqqvb29m7ZsiUpKSkmJuaOO+74+uuv5WPPnDmzePHiuLi4mJiYefPmef0XUq+//rrrNUZFRWnbRXnzGushbzLnFeVNz/QSxXodgyi2q6vrvvvuS0pKioiImDp16saNG+XnVU9Pz5133hkfHx8REWE2m5999tm2tjb5MauqumPHjunTp/f7kufYXbt2JSYmjh49Ojs7u7i42GusKD+i7V5jPeTNa6yHvOmZG0NFJs+iOiATK6oDzlgP69Tr2hfFyqx9mboqWvseYr3mykOs11x5mLcyddJXMvdXFdQBmVhRHZDJlde1L4qVWfuiWJm173leec6Vh1ivufJwXTJ10j8G51H8ELr/bR6xxBI7VLFDJRRzRSyxxA5trIb/2AQAAECIVgkAAECIVgkAAECIVgkAAEBoAD7WjeFNz8foMLyF4se6MbxRryDCx7oBAAAGha6nSgAAAMMbT5UAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJUAAACEaJ
UAAACEaJUAAACE/h82xQH7rLtt0wAAAABJRU5ErkJggg=="
-/>
+    ```bash
+    #!/bin/bash
 
-#### Hybrid (MPI and OpenMP)
+    #SBATCH --nodes=1
+    #SBATCH --tasks-per-node=1
+    #SBATCH --cpus-per-task=64
+    #SBATCH --time=01:00:00
+    #SBATCH --account=<account>
 
-In the illustration below the default binding of a Hybrid-job is shown.
-In which 8 global ranks are distributed onto 2 nodes with 16 cores each.
-Each rank has 4 cores assigned to it.
+    module purge
+    module load <modules>
 
-```Bash
-#!/bin/bash
-#SBATCH --nodes=2
-#SBATCH --tasks-per-node=4
-#SBATCH --cpus-per-task=4
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    srun ./path/to/openmpi_application
+    ```
 
-export OMP_NUM_THREADS=4
+    * Submission: `marie@login$ sbatch batch_script.sh`
+    * Run with fewer CPUs: `marie@login$ sbatch -c 14 batch_script.sh`
 
-srun --ntasks 8 --cpus-per-task $OMP_NUM_THREADS ./application
-```
+??? example "Job file MPI"
 
-\<img alt=""
-src="data:;base64,iVBORw0KGgoAAAANSUhEUgAAAvoAAADyCAIAAACzsfbGAAAABmJLR0QA/wD/AP+gvaeTAAAgAElEQVR4nO3de1iUdf7/8XsQA+SoDgdhZHA4CaUlpijmYdXooJvrsbxqzXa1dCsPbFlt5qF2O2xbXV52bdtlV25c7iVrhrVXWVaEupJ2gjxUYAIDgjgcZJCDIIf7+8f9a36zjCAwM/eMn3k+/oJ77rnf9z3z5u1r7hnn1siyLAEAAIjLy9U7AAAA4FzEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCctz131mg0jtoPANccWZZVrsjMATyZPTOHszsAAEBwdp3dUaj/Cg+Aa7n2LAszB/A09s8czu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9zxXDNmzNBoNF9++aVlSURExPvvv9/3LXz//fcBAQF9Xz8zMzMtLc3f3z8iIqIfOwpACOrPnPXr1ycnJw8ZMiQ6OnrDhg2XL1/ux+5CLMQdjzZ8+PDHH39ctXJarXbdunVbtmxRrSIAt6LyzGlqanrzzTfPnj2blZWVlZW1efNm1UrD3RB3PNqKFSuKi4vfe+8925uqqqoWL14cFham0+keeeSRlpYWZfnZs2dvu+22kJCQG264IS8vz7L+xYsXV69ePXLkyNDQ0Hvuuae2ttZ2m3feeeeSJUtGjhzppMMB4OZUnjk7duyYOnXq8OHD09LSHnjgAeu7w9MQdzxaQEDAli1bnnrqqfb29m43LVy4cPDgwcXFxd9++21+fn5GRoayfPHixTqd7vz58/v37//HP/5hWf/ee+81mUwFBQXl5eXBwcHLly9X7SgAXCtcOHOOHDkyfvx4hx4NrimyHezfAlxo+vTpzz33XHt7++jRo7dv3y7Lcnh4+L59+2RZLiwslCSpurpaWTMnJ8fX17ezs7OwsFCj0Vy4cEFZnpmZ6e/vL8tySUmJRqOxrN/Q0KDRaMxm8xXr7t69Ozw83NlHB6dy1d8+M+ea5qqZI8vypk2bRo0aVVtb69QDhPPY/7fvrXa8gpvx9vZ+8cUXV65cuWzZMsvCiooKf3//0NBQ5VeDwdDa2lpbW1tRUTF8+PChQ4cqy+Pj45UfjEajRqOZMGGCZQvBwcGVlZXBwcFqHQeAa4P6M+fZZ5/dtWtXbm7u8OHDnXVUcHvEHUjz5s175ZVXXnzxRcsSnU7X3NxcU1OjTB+j0ejj46PVaqOiosxmc1tbm4+PjyRJ58+fV9aPjo7WaDTHjx8n3wC4KjVnzpNPPpmdnX3o0CGdTue0A8I1gM/uQJIk6eWXX962bVtjY6Pya0JCwqRJkzIyMpqamkwm08aNG++//34vL6/Ro0ePGzfutddekySpra1t27ZtyvqxsbHp6ekrVqyoqqqSJKmmpmbv3r22VTo7O1tbW5X37FtbW9va2lQ6PABuRp2Zs2bNmuzs7AMHDmi12tbWVv4juicj7kCSJCk1NXXOnDmW/wqh0Wj27t3b0tIyatSocePGjR079tVXX1Vuevfdd3NyclJSUmbOnDlz5kzLFnbv3h0ZGZmWlhYYGDhp0qQjR47YVtmxY4efn9+yZctMJpOfnx8nlgGPpcLMMZvN27dv//nnnw0Gg5+fn5+fX3JysjpHBzeksXwCaCB31mgkSbJnCwCuRa7622fmAJ7J/r99zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBedu/CY1GY/9GAKCPmDkA+ouzOwAAQHAaWZZdvQ8AAABOxNkdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADB2fU1g3zZlycY2FcV0BueQP2vsaCvPAEzBz2xZ+ZwdgcAAAjOAReR4IsKRWX/qyV6Q1SufSVNX4mKmYOe2N8bnN0BAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjx486PP/7461//WqvVDhkyZPTo0U888cQANjJ69Oj333+/jyvfdNNNWVlZV7wpMzMzLS3N398/IiJiALsBx3Kr3li/fn1ycvKQIUOio6M3bNhw+fLlAewM3IFb9RUzx624VW942swRPO50dXXdfvvtkZGRJ0+erK2tzcrKMhgMLtwfrVa7bt26LVu2uHAfoHC33mhqanrzzTfPnj2blZWVlZW1efNmF+4MBszd+oqZ4z7crTc8bubIdrB/C8529uxZSZJ+/PFH25vOnTu3aNGi0NDQqKiohx9+uLm5WVleX1+/evXq6OjowMDAcePGFRYWyrKcmJi4b98+5dbp06cvW7bs8uXLDQ0Nq1at0ul0Wq327rvvrqmpkWX5kUceGTx4sFar1ev1y5Ytu+Je7d69Ozw83FnH7Dj2PL/0xsB6Q7Fp06apU6c6/pgdx1XPL33FzHHGfdXhnr2h8ISZI/jZncjIyISEhFWrVv373/8uLy+3vmnhwoWDBw8uLi7+9ttv8/PzMzIylOVLly4tKys7evSo2Wx+5513AgMDLXcpKyubMmXKLbfc8s477wwePPjee+81mUwFBQXl5eXBwcHLly+XJGn79u3Jycnbt283Go3vvPOOiseK/nHn3jhy5Mj48eMdf8xwPnfuK7iWO/eGR8wc16YtFZhMpieffDIlJcXb2zsuLm737t2yLBcWFkqSVF1drayTk5Pj6+vb2dlZXFwsSVJlZWW3jSQmJj7zzDM6ne7NN99UlpSUlGg0GssWGhoaNBqN2WyWZfnGG29UqvSEV1puwg17Q5blTZs2jRo1qra21oFH6nCuen7pK2aOM+6rGjfsDdljZo74cceisbHxlVde8fLyOnHixOeff+7v72+5qbS0VJIkk8mUk5MzZMgQ2/smJiaGh4enpqa2trYqS7744gsvLy+9lZCQkB9++EFm9Nh9X/W5T29s3brVYDAYjUaHHp/jEXf6wn36ipnjbtynNzxn5gj+Zpa1gICAjIwMX1/fEydO6HS65ubmmpoa5Saj0ejj46O8wdnS0lJVVWV7923btoWGht51110tLS2SJEVHR2s0muPHjxt/UV9fn5ycLEmSl5cHPapicJPeePLJJ3ft2nXo0CG9
Xu+Eo4Ta3KSv4IbcpDc8auYI/kdy/vz5xx9/vKCgoLm5+cKFCy+88EJ7e/uECRMSEhImTZqUkZHR1NRkMpk2btx4//33e3l5xcbGpqenP/jgg1VVVbIsnzp1ytJqPj4+2dnZQUFBd9xxR2Njo7LmihUrlBVqamr27t2rrBkREVFUVHTF/ens7GxtbW1vb5ckqbW1ta2tTZWHAVfgbr2xZs2a7OzsAwcOaLXa1tZW4f9TqKjcra+YOe7D3XrD42aOa08uOVtDQ8PKlSvj4+P9/PxCQkKmTJny0UcfKTdVVFQsWLBAq9WOGDFi9erVTU1NyvILFy6sXLkyKioqMDAwJSWlqKhItvokfEdHx29/+9uJEydeuHDBbDavWbMmJiYmICDAYDCsXbtW2cLBgwfj4+NDQkIWLlzYbX/eeOMN6wff+gSmG7Ln+aU3+tUb9fX13f4wY2Nj1Xss+s9Vzy99xcxxxn3V4Va94YEzR2PZygBoNBql/IC3AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK/ibWQAAAMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAILztn8TymXZAVv0BpyBvkJP6A30hLM7AABAcBpZll29DwAAAE7E2R0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAMERdwAAgODs+lZlvr/SEwzsm5noDU+g/rd20VeegJmDntgzczi7AwAABOeAa2bxvcyisv/VEr0hKte+kqavRMXMQU/s7w3O7gAAAMERdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAghM27uTl5c2ZM2fYsGH+/v5jxozZuHFjc3OzCnU7OjrWrFkzbNiwoKCge++99+LFi1dcLSAgQGPFx8enra1Nhd3zWK7qB5PJtGTJEq1WGxIScttttxUVFV1xtczMzLS0NH9//4iICOvly5cvt+6TrKwsFfYZA8PMgTVmjrsRM+785z//mTVr1o033nj06NHq6updu3ZVV1cfP368L/eVZbm9vX3Apbdu3XrgwIFvv/32zJkzZWVlq1atuuJqJpOp8RcLFiyYP3++j4/PgIuidy7sh9WrV5vN5tOnT1dWVo4YMWLx4sVXXE2r1a5bt27Lli22N2VkZFhaZdGiRQPeEzgVMwfWmDnuSLaD/Vtwhs7OTp1Ol5GR0W15V1eXLMvnzp1btGhRaGhoVFTUww8/3NzcrNyamJi4cePGW265JSEhITc3t6GhYdWqVTqdTqvV3n333TU1Ncpqr776ql6vDw4OHjFixHPPPWdbPSws7O2331Z+zs3N9fb2rq+v72Vva2pqfHx8vvjiCzuP2hnseX7dpzdc2w+xsbFvvfWW8nNubq6Xl1dHR0dPu7p79+7w8HDrJffff/8TTzwx0EN3Ilc9v+7TV9aYOY7CzGHm9MQBicW15Z1BSdAFBQVXvHXy5MlLly69ePFiVVXV5MmTH3roIWV5YmLiDTfcUFtbq/w6d+7c+fPn19TUtLS0PPjgg3PmzJFluaioKCAg4Oeff5Zl2Ww2f/fdd902XlVVZV1aOaucl5fXy96+/PLL8fHxdhyuE4kxelzYD7Isb9iwYdasWSaTqaGh4b777luwYEEvu3rF0TNixAidTjd+/PiXXnrp8uXL/X8AnIK4Y42Z4yjMHGZOT4g7V/D5559LklRdXW17U2FhofVNOTk5vr6+nZ2dsiwnJia+/vrryvKSkhKNRmNZraGhQaPRmM3m4uJiPz+/PXv2XLx48YqlT58+LUlSSUmJZYmXl9fHH3/cy94mJCS8/PLL/T9KNYgxelzYD8rK06dPVx6NpKSk8vLyXnbVdvQcOHDgyy+//Pnnn/fu3RsVFWX7etFViDvWmDmOwsxRljNzbNn//Ar42Z3Q0FBJkiorK21vqqio8Pf3V1aQJMlgMLS2ttbW1iq/RkZGKj8YjUaNRjNhwoSYmJiYmJixY8cGBwdXVlYaDIbMzMy///3vERER06ZNO3ToULftBwYGSpLU0NCg/NrY2NjV1RUUFPTPf/7T8skv6/Vzc3ONRuPy5csddeyw5cJ+kGV59uzZBoPhwoULTU1NS5YsueWWW5qbm3vqB1vp6emTJ0+Oi4tbuHDhSy+9tGvXLnseCjgJMwfWmDluyrVpyxmU903/+Mc/dlve1dXVLVnn5ub6+PhYkvW+ffuU5WfOnBk0aJDZbO6pREtLy/PPPz906FDlvVhrYWFhO3fuVH4+ePBg7++j33333ffcc0//Dk9F9jy/7tMbLuyHmpoayeaNhmPHjvW0HdtXWtb27NkzbNiw3g5VRa56ft2nr6wxcxyFmaMsZ+bYckBicW15J/nggw98fX2feeaZ4uLi1tbWU6dOrV69Oi8vr6ura9KkSffdd19jY+P58+enTJny4IMPKnexbjVZlu+4445FixadO3dOluXq6up3331XluWffvopJyentbVVluUdO3aEhYXZjp6NGzcmJiaWlJSYTKapU6cuXbq0p52srq6+7rrr3PMDgwoxRo/s0n7Q6/UrV65saGi4dOnSs88+GxAQcOHCBds97OjouHTpUmZmZnh4+KVLl5RtdnZ2vvXWW0aj0Ww2Hzx4MDY21vI2v8sRd7ph5jgEM8eyBWZON8SdHh05cuSOO+4ICQkZMmTImDFjXnjhBeUD8BUVFQsWLNBqtSNGjFi9enVTU5OyfrdWM5vNa9asiYmJCQgIMBgMa9eulWU5Pz9/4sSJQUFBQ4cOTU1NPXz4sG3dy5cvP/rooyEhIQEBAUuXLm1oaOhpD//617+67QcGFcKMHtl1/XD8+PH09PShQ4cGBQVNnjy5p39p3njjDetzrv7+/rIsd3Z2zp49e/jw4dddd53BYHjqqadaWloc/sgMDHHHFjPHfswcy92ZOd3Y//xqLFsZAOVdQHu2AHdmz/NLb4jNVc8vfSU2Zg56Yv/zK+BHlQEAAKwRdwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwXnbv4mrXmEVHovegDPQV+gJvYGecHYHAAAIzq5rZgEAALg/zu4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARn17cq8/2VnmBg38xEb3gC9b+1i77yBMwc9MSemcPZHQAAIDgHXDPLVa/wqKtOXXt42mPlaXVdxdMeZ0+raw9Pe6w8ra49OLsDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACA44g4AABCcw+KO2Wz29vaOiYnR6/V/+MMf+v6f8o1G4+zZs3u69cMPPzQYDDExMZmZmWrWnT9/fkhIyKJFi3pawRl1S0tLZ86cGRUVlZSU9Mknn6hWt6WlJSUlRafT6fX6bdu29XGDfUdv2F9X1N6wh5OeX0mSWlpa9Hr9unXr1Kzr7++v0+l0Ot3ixYvVrHv27NmZM2eGhYUlJSW1tra
qU7egoED3C29v77y8vD5us4/oDYfUFa03ZDtYb6G+vj4qKkqW5dbW1gkTJnz88cd93EhpaemsWbOueFN7e7vBYDAajTU1NdHR0Q0NDerUlWU5Nzc3Ozt74cKF1gudXbe4uPjo0aOyLJ86dSo8PLyzs1Oduh0dHefPn5dlua6uLjIyUvm5W93+ojccW1ek3rCHCs+vLMsbN25cvHjx2rVr1ayr1+ttF6pQd/bs2Tt27JBluby8vL29XbW6ipqamhEjRnR0dNjW7S96w+F1hekNhePfzPLx8Zk4ceKZM2ckSWpra5s1a1ZKSsq4ceMOHTokSZLRaExNTX3ooYduvfXWRx991PqOeXl5kydPrqmpsSz5+uuvExIS9Hq9VqudMWNGTk6OOnUlSZoxY0ZgYKDKx2swGCZNmiRJ0vXXXy9JUnNzszp1Bw0aFB4eLklSR0dHQECAn59fXw58AOgNesMZHPv8lpSU/Pjjj3feeafKdV1yvKWlpUajccWKFZIkjRw50tu7t+/Zd8bxvvfee3fdddegQYMG9lBcFb1Bb/x/9mQl6y1YUt7FixfHjh2bm5sry3JnZ2d9fb0sy1VVVWlpabIsl5aWBgcH19TUyLI8bdq0kpISJeXl5eWlpqaaTCbr7b/77ru///3vlZ//9Kc/bd++XZ26is8++6wvr+AdXleW5U8//XTKlClq1m1oaIiOjh40aNAbb7xxxbr9RW84o64sRG/YQ4XjXbhwYWFh4c6dO6/6Ct6xdQMCAgwGw/jx4z/55BPV6n766aczZsyYP3/+TTfdtHnzZjWPVzFz5sycnJwr1u0vesOxdUXqjf+3Bbvu/L+HPWjQIL1ef9111y1btkxZ2NXV9fTTT6elpU2fPj04OFiW5dLS0mnTpim3rly5Mjc3t7S0VK/XjxkzxnKe3KKP/6Q5vK7iqv+kOaluWVlZUlLSTz/9pHJd5V6jRo0qLy+3rdtf9Aa94QzOPt5PPvlk/fr1siz3/k+aMx5no9Eoy3J+fn5kZGRdXZ06dT/++GNfX9/CwsJLly5NnTrV8maEOn1lMpkiIyMt71bI7j1z6A3Vjld2dG8oHPlmVkREhNFoLCsr++qrr3744QdJkvbv319cXHzo0KGDBw/6+voqqw0ePFj5wcvLq6OjQ5KksLAwPz+/EydOdNtgZGTkuXPnlJ8rKysjIyPVqeuq45UkyWw233XXXdu3bx89erSadRUxMTGpqamnTp3q/4NxFfQGveEMDj/eY8eO7dmzJyYm5rHHHnv77befffZZdepKkqTX6yVJGjduXHJy8unTp9WpGxUVlZiYmJiY6Ovre+utt548eVK145Uk6b333ps3b56T3smiN+iNbhz/2Z2IiIgtW7Zs3bpVkqT6+nqDweDt7f3111+bTKae7hIUFPTBBx889thj33zzjfXyiRMnFhUVlZeX19XV5ebm9v6BeQfW7RcH1r18+fKCBQvWr18/a9YsNetWVVUp0aGiouLYsWPJyclXrT4w9Aa94QwOPN7NmzdXVFQYjca//e1vv/vd7zZt2qRO3bq6ugsXLkiSVFRUdOrUqdjYWHXq3nDDDV1dXRUVFZ2dnf/973+TkpLUqavYs2fPkiVLeqloP3qD3rBwyvfuLF68+MSJE4WFhfPmzfv666+XLl36r3/9Kzo6upe7REREZGdnP/DAA0VFRZaF3t7er7322owZM1JSUrZu3RoUFKROXUmSbrvttqVLl+7fv1+n0xUUFKhT9/PPPz98+PDTTz+t/B88o9GoTt26urrZs2dHRUXNmjXrz3/+s/JKwknoDXrDGRz4/Lqk7tmzZ1NTU6Oion7zm9+8/vrroaGh6tTVaDTbtm1LT09PSkq6/vrr586dq05dSZJMJtPp06enTZvWe0X70Rv0hkJjeUtsIHfWaCRJsmcL1BW17rW4z9SlLnWv3brX4j5TV826fKsyAAAQHHEHAAAIjrgDAAAEp17c6eUKR1e9CNGAlfZ8pSEVLgbU09VVrnoBFHv0dJUTZ1+kxh5/+ctf4uPj4+Li1q9fb3tTQkJCQkLCvn377Kxi22b96skBd2m3O/bSk7Yr29OlV9zhnnrSdmWndqk6mDkWzJxumDk9rSzyzLHnS3v6vgXbKxw1NDR0dXUpt17xIkQOqWt7pSFL3Z4uBuSQugrrq6tYH+8VL4DiqLrdrnJiXVfR7UIkjqo74PuePXtWr9e3tLS0t7enpKR88803ln3+7rvvbrrppkuXLtXV1SmT1J663dqsvz3Ze5f2vW4vPWm78lW7tO91FT31pO3KvXep/dNjYJg5vWPm9GVNZo5nzhyVzu7YXuFo7NixlZWVyq19vwhRf9leachS19kXA+p2dRXr43WeUpurnNjWdfZFavorICDA19e3ra1NuQTd8OHDLftcWFiYmprq6+s7bNiwkSNHHj582J5C3dqsvz054C7tdsdeetJ2ZXu61HaHe+lJ5/0Nugozh5nTE2aOZ84cleLOuXPnoqKilJ91Ol1lZWVWVtZVvz/AgT777LO4uLjAwEDruhcvXtTr9ZGRkevXr7/qF7f014YNG55//nnLr9Z16+rqYmNjb7755gMHDji26JkzZ3Q63YIFC8aNG7dly5ZudRUqfLVXv4SEhGRkZERHR0dGRs6bN2/UqFGWfR4zZsyRI0caGxvPnz+fn5/v2Nntnj1py4Fd2ktP2nJel6rDPZ9fZo47YOZ45szp7RqnTqWETXWUl5evXbs2Ozu7W92goKCysjKj0Thz5sw5c+aMHDnSURUPHDgQHR2dmJh49OhRZYl13VOnTun1+oKCgrlz5548eXLYsGGOqtvZ2Xns2LHvv/9er9enp6dPmjTp9ttvt16hurq6sLBw+vTpjqpov/Ly8ldffbWkpMTX1/dXv/rV3LlzLY/VmDFjVq1aNX369IiIiLS0tN4vyWs/d+hJW47q0t570pbzutRV3OH5Zea4A2aOZ84clc7u9PEKR85w1SsNOeNiQL1fXaUvF0AZmKte5cSpF6kZmIKCgptvvlmr1QYEBMycOfOrr76yvvWRRx7Jz8/fv39/fX19XFycA+u6c0/asr9L+3jFHwvndak63Pn5Zea4FjOnL8SbOSrFHdsrHG3evNlsNju7ru2Vhix1nXoxINurq1jq9usCKP1le5WTbo+zu51VliQpPj7+m2++aWpqamtrO3z4cEJCgvU+l5WVSZL04Ycfms3m1NRUB9Z1w5605cAu7aUnbTm1S9Xhhs8vM8dNMHM8dObY8znnfm3hgw8+GDVqVHR09M6dO2VZHjlyZGNjo3JTenq6Vqv18/OLiorKz893YN2PPvpo0KBBUb8oLS211D158mRSUlJkZGRCQsKuXbv6srUBPGI7d+5UPpFuqVtQUBAXFxcZGTl69Oi9e/c6vO4XX3yRlJQUHx+/bt06+X8f5/Pnz0dGRnZ2dvZxU/Z0SL/u+/zzz8fFxcXGxmZkZMj/u88TJ04MCwu7+eabT506ZWdd2zbrV0/23qV9r9tLT9qufNUu7dfxKmx70nblq3ap/dNjYJg5V8XM6Qtmjg
fOHPXijrWioqJHH32UuqLWtee+nvZYeVpdO11zx0tdderac19Pe6w8ra4FlwilrlPqXov7TF3qUvfarXst7jN11azLRSQAAIDgiDsAAEBwxB0AACA44g4AABAccQcAAAiOuAMAAARH3AEAAIIj7gAAAME54GsGITZ7vvILYnPVV41BbMwc9ISvGQQAAOiRXWd3AAAA3B9ndwAAgOCIOwAAQHDEHQAAIDjiDgAAEBxxBwAACI64AwAABEfcAQAAgiPuAAAAwRF3AACA4Ig7AABAcMQdAAAgOOIOAAAQHHEHAAAIjrgDAAAER9wBAACCI+4AAADBEXcAAIDgiDsAAEBwxB0AACC4/wNeW27o5DoAAAACSURBVCEI/r8gawAAAABJRU5ErkJggg=="
-/>
+    ```bash
+    #!/bin/bash
 
-### Node Features for Selective Job Submission
+    #SBATCH --ntasks=64
+    #SBATCH --time=01:00:00
+    #SBATCH --account=<account>
 
-The nodes in our HPC system are becoming more diverse in multiple aspects: hardware, mounted
-storage, software. The system administrators can describe the set of properties and it is up to the
-user to specify her/his requirements. These features should be thought of as changing over time
-(e.g. a file system get stuck on a certain node).
+    module purge
+    module load <modules>
 
-A feature can be used with the Slurm option `--constrain` or `-C` like
-`srun -C fs_lustre_scratch2 ...` with `srun` or `sbatch`. Combinations like
-`--constraint="fs_beegfs_global0`are allowed. For a detailed description of the possible
-constraints, please refer to the Slurm documentation (<https://slurm.schedmd.com/srun.html>).
+    srun ./path/to/mpi_application
+    ```
 
-**Remark:** A feature is checked only for scheduling. Running jobs are not affected by changing
-features.
+    * Submission: `marie@login$ sbatch batch_script.sh`
+    * Run with fewer MPI tasks: `marie@login$ sbatch --ntasks 14 batch_script.sh`
 
-### Available features on Taurus
+## Manage and Control Jobs
 
-| Feature | Description                                                              |
-|:--------|:-------------------------------------------------------------------------|
-| DA      | subset of Haswell nodes with a high bandwidth to NVMe storage (island 6) |
+### Job and Slurm Monitoring
 
-#### File system features
+On the command line, use `squeue` to watch the scheduling queue. This command will tell the reason
+why a job is not running (job status in the last column of the output). More information about job
+parameters can also be determined with `scontrol -d show job <jobid>`. The following table holds
+detailed descriptions of the possible job states:
+
+??? tip "Reason Table"
+
+    | Reason             | Long Description  |
+    |:-------------------|:------------------|
+    | `Dependency`         | This job is waiting for a dependent job to complete. |
+    | `None`               | No reason is set for this job. |
+    | `PartitionDown`      | The partition required by this job is in a down state. |
+    | `PartitionNodeLimit` | The number of nodes required by this job is outside of its partition's current limits. Can also indicate that required nodes are down or drained. |
+    | `PartitionTimeLimit` | The job's time limit exceeds its partition's current time limit. |
+    | `Priority`           | One or higher priority jobs exist for this partition. |
+    | `Resources`          | The job is waiting for resources to become available. |
+    | `NodeDown`           | A node required by the job is down. |
+    | `BadConstraints`     | The job's constraints cannot be satisfied. |
+    | `SystemFailure`      | Failure of the Slurm system, a filesystem, the network, etc. |
+    | `JobLaunchFailure`   | The job could not be launched. This may be due to a filesystem problem, invalid program name, etc. |
+    | `NonZeroExitCode`    | The job terminated with a non-zero exit code. |
+    | `TimeLimit`          | The job exhausted its time limit. |
+    | `InactiveLimit`      | The job reached the system inactive limit. |
 
-A feature `fs_*` is active if a certain file system is mounted and available on a node. Access to
-these file systems are tested every few minutes on each node and the Slurm features set accordingly.
+In addition, the `sinfo` command gives you a quick status overview.
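+
+For instance, a quick look at your own jobs and at one specific job could look like this (the job
+ID is a placeholder):
+
+```console
+marie@login$ squeue --user=$USER
+marie@login$ scontrol -d show job <jobid>
+```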
 
-| Feature            | Description                                                          |
-|:-------------------|:---------------------------------------------------------------------|
-| fs_lustre_scratch2 | `/scratch` mounted read-write (the OS mount point is `/lustre/scratch2)` |
-| fs_lustre_ssd      | `/lustre/ssd` mounted read-write                                       |
-| fs_warm_archive_ws | `/warm_archive/ws` mounted read-only                                   |
-| fs_beegfs_global0  | `/beegfs/global0` mounted read-write                                   |
+For detailed information on why your submitted job has not started yet, you can use the command
 
-For certain projects, specific file systems are provided. For those,
-additional features are available, like `fs_beegfs_<projectname>`.
+```console
+marie@login$ whypending <jobid>
+```
 
-## Editing Jobs
+### Editing Jobs
 
 Jobs that have not yet started can be altered. Using `scontrol update timelimit=4:00:00
-jobid=<jobid>` is is for example possible to modify the maximum runtime. scontrol understands many
-different options, please take a look at the man page for more details.
+jobid=<jobid>` it is for example possible to modify the maximum runtime. `scontrol` understands many
+different options, please take a look at the [man page](https://slurm.schedmd.com/scontrol.html) for
+more details.
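+
+For instance, extending the time limit of a pending job on the command line could look like this
+(job ID and new limit are illustrative):
+
+```console
+marie@login$ scontrol update timelimit=04:00:00 jobid=<jobid>
+```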
 
-## Job and Slurm Monitoring
+### Canceling Jobs
 
-On the command line, use `squeue` to watch the scheduling queue. This command will tell the reason,
-why a job is not running (job status in the last column of the output). More information about job
-parameters can also be determined with `scontrol -d show job <jobid>` Here are detailed descriptions
-of the possible job status:
-
-| Reason             | Long description                                                                                                                                 |
-|:-------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|
-| Dependency         | This job is waiting for a dependent job to complete.                                                                                             |
-| None               | No reason is set for this job.                                                                                                                   |
-| PartitionDown      | The partition required by this job is in a DOWN state.                                                                                           |
-| PartitionNodeLimit | The number of nodes required by this job is outside of its partitions current limits. Can also indicate that required nodes are DOWN or DRAINED. |
-| PartitionTimeLimit | The jobs time limit exceeds its partitions current time limit.                                                                                   |
-| Priority           | One or higher priority jobs exist for this partition.                                                                                            |
-| Resources          | The job is waiting for resources to become available.                                                                                            |
-| NodeDown           | A node required by the job is down.                                                                                                              |
-| BadConstraints     | The jobs constraints can not be satisfied.                                                                                                       |
-| SystemFailure      | Failure of the Slurm system, a file system, the network, etc.                                                                                    |
-| JobLaunchFailure   | The job could not be launched. This may be due to a file system problem, invalid program name, etc.                                              |
-| NonZeroExitCode    | The job terminated with a non-zero exit code.                                                                                                    |
-| TimeLimit          | The job exhausted its time limit.                                                                                                                |
-| InactiveLimit      | The job reached the system InactiveLimit.                                                                                                        |
+The command `scancel <jobid>` kills a single job and removes it from the queue. By using `scancel -u
+<username>` you can send a canceling signal to all of your jobs at once.
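+
+For instance, canceling one specific job and then all of your remaining jobs could look like this
+(the job ID is a placeholder):
+
+```console
+marie@login$ scancel <jobid>
+marie@login$ scancel --user=$USER
+```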
 
-In addition, the `sinfo` command gives you a quick status overview.
+### Accounting
+
+The Slurm command `sacct` provides job statistics like memory usage, CPU time, energy usage etc.
 
-For detailed information on why your submitted job has not started yet, you can use: `whypending
-<jobid>`.
+!!! hint "Learn from old jobs"
 
-## Accounting
+    We highly encourage you to use `sacct` to learn from your previous jobs in order to better
+    estimate the requirements, e.g., runtime, for future jobs.
 
-The Slurm command `sacct` provides job statistics like memory usage, CPU
-time, energy usage etc. Examples:
+`sacct` outputs the following fields by default.
 
-```Shell Session
+```console
 # show all own jobs contained in the accounting database
-sacct
-# show specific job
-sacct -j &lt;JOBID&gt;
-# specify fields
-sacct -j &lt;JOBID&gt; -o JobName,MaxRSS,MaxVMSize,CPUTime,ConsumedEnergy
-# show all fields
-sacct -j &lt;JOBID&gt; -o ALL
+marie@login$ sacct
+       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
+------------ ---------- ---------- ---------- ---------- ---------- --------
+[...]
 ```
 
-Read the manpage (`man sacct`) for information on the provided fields.
+We'd like to point your attention to the following options to gain insight into your jobs.
 
-Note that sacct by default only shows data of the last day. If you want
-to look further into the past without specifying an explicit job id, you
-need to provide a startdate via the **-S** or **--starttime** parameter,
-e.g
+??? example "Show specific job"
 
-```Shell Session
-# show all jobs since the beginning of year 2020:
-sacct -S 2020-01-01
-```
+    ```console
+    marie@login$ sacct -j <JOBID>
+    ```
 
-## Killing jobs
+??? example "Show all fields for a specific job"
 
-The command `scancel <jobid>` kills a single job and removes it from the queue. By using `scancel -u
-<username>` you are able to kill all of your jobs at once.
+    ```console
+    marie@login$ sacct -j <JOBID> -o All
+    ```
 
-## Host List
+??? example "Show specific fields"
 
-If you want to place your job onto specific nodes, there are two options for doing this. Either use
-`-p` to specify a host group that fits your needs. Or, use `-w` or (`--nodelist`) with a name node
-nodes that will work for you.
+    ```console
+    marie@login$ sacct -j <JOBID> -o JobName,MaxRSS,MaxVMSize,CPUTime,ConsumedEnergy
+    ```
 
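+??? example "Compare requested time limit with elapsed time"
+
+    To better estimate future requests, you can, for instance, compare the requested time limit
+    with the elapsed time of your completed jobs (the start date is illustrative):
+
+    ```console
+    marie@login$ sacct -X -S 2021-01-01 --format=Start,JobID,JobName,Elapsed,Timelimit -s COMPLETED
+    ```
+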
-## Job Profiling
+The manual page (`man sacct`) and the [online reference](https://slurm.schedmd.com/sacct.html)
+provide comprehensive documentation regarding available fields and formats.
 
-\<a href="%ATTACHURL%/hdfview_memory.png"> \<img alt="" height="272"
-src="%ATTACHURL%/hdfview_memory.png" style="float: right; margin-left:
-10px;" title="hdfview" width="324" /> \</a>
+!!! hint "Time span"
 
-Slurm offers the option to gather profiling data from every task/node of the job. Following data can
-be gathered:
+    By default, `sacct` only shows data of the last day. If you want to look further into the past
+    without specifying an explicit job id, you need to provide a start date via the `-S` option.
+    A certain end date is also possible via `-E`.
 
-- Task data, such as CPU frequency, CPU utilization, memory
-  consumption (RSS and VMSize), I/O
-- Energy consumption of the nodes
-- Infiniband data (currently deactivated)
-- Lustre filesystem data (currently deactivated)
+??? example "Show all jobs since the beginning of year 2021"
 
-The data is sampled at a fixed rate (i.e. every 5 seconds) and is stored in a HDF5 file.
+    ```console
+    marie@login$ sacct -S 2021-01-01 [-E now]
+    ```
 
-**CAUTION**: Please be aware that the profiling data may be quiet large, depending on job size,
-runtime, and sampling rate. Always remove the local profiles from
-`/lustre/scratch2/profiling/${USER}`, either by running sh5util as shown above or by simply removing
-those files.
+## Jobs at Reservations
 
-Usage examples:
+How to ask for a reservation is described in the section
+[reservations](overview.md#exclusive-reservation-of-hardware).
+After we have agreed on your requirements, we will send you an e-mail with your reservation name.
+Then you can see more information about your reservation with the following command:
 
-```Shell Session
-# create energy and task profiling data (--acctg-freq is the sampling rate in seconds)
-srun --profile=All --acctg-freq=5,energy=5 -n 32 ./a.out
-# create task profiling data only
-srun --profile=All --acctg-freq=5 -n 32 ./a.out
+```console
+marie@login$ scontrol show res=<reservation name>
+# e.g. scontrol show res=hpcsupport_123
+```
 
-# merge the node local files in /lustre/scratch2/profiling/${USER} to single file
-# (without -o option output file defaults to job_&lt;JOBID&gt;.h5)
-sh5util -j &lt;JOBID&gt; -o profile.h5
-# in jobscripts or in interactive sessions (via salloc):
-sh5util -j ${SLURM_JOBID} -o profile.h5
+If you want to use your reservation, you have to add the parameter
+`--reservation=<reservation name>` either in your sbatch script or to your `srun` or `salloc` command.
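+
+A minimal sketch of a job file using a reservation could look like this (reservation name, account,
+and binary path are placeholders):
+
+??? example "Job file using a reservation"
+
+    ```bash
+    #!/bin/bash
+
+    #SBATCH --reservation=<reservation_name>
+    #SBATCH --ntasks=4
+    #SBATCH --time=01:00:00
+    #SBATCH --account=<account>
+
+    srun ./path/to/binary
+    ```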
 
-# view data:
-module load HDFView
-hdfview.sh profile.h5
-```
+## Node Features for Selective Job Submission
 
-More information about profiling with Slurm:
+The nodes in our HPC system are becoming more diverse in multiple aspects: hardware, mounted
+storage, software. The system administrators can describe the set of properties and it is up to the
+user to specify her/his requirements. These features should be thought of as changing over time
+(e.g., a filesystem gets stuck on a certain node).
 
-- [Slurm Profiling](http://slurm.schedmd.com/hdf5_profile_user_guide.html)
-- [sh5util](http://slurm.schedmd.com/sh5util.html)
+A feature can be used with the Slurm option `--constraint` or `-C`, e.g.,
+`srun -C fs_lustre_scratch2 ...` with `srun` or `sbatch`. Combinations like
+`--constraint="fs_beegfs_global0"` are allowed. For a detailed description of the possible
+constraints, please refer to the [Slurm documentation](https://slurm.schedmd.com/srun.html).
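+
+For instance, requesting an interactive shell on a node where both `/scratch` and `/lustre/ssd`
+are available could look like this (task count and time limit are illustrative):
+
+```console
+marie@login$ srun --constraint="fs_lustre_scratch2&fs_lustre_ssd" --ntasks=1 --time=00:10:00 --pty bash
+```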
 
-## Reservations
+!!! hint
 
-If you want to run jobs, which specifications are out of our job limitations, you could
-[ask for a reservation](mailto:hpcsupport@zih.tu-dresden.de). Please add the following information
-to your request mail:
+      A feature is checked only for scheduling. Running jobs are not affected by changing features.
 
-- start time (please note, that the start time have to be later than
-  the day of the request plus 7 days, better more, because the longest
-  jobs run 7 days)
-- duration or end time
-- account
-- node count or cpu count
-- partition
+### Available Features
 
-After we agreed with your requirements, we will send you an e-mail with your reservation name. Then
-you could see more information about your reservation with the following command:
+| Feature | Description                                                              |
+|:--------|:-------------------------------------------------------------------------|
+| DA      | subset of Haswell nodes with a high bandwidth to NVMe storage (island 6) |
 
-```Shell Session
-scontrol show res=<reservation name>
-# e.g. scontrol show res=hpcsupport_123
-```
+#### Filesystem Features
 
-If you want to use your reservation, you have to add the parameter `--reservation=<reservation
-name>` either in your sbatch script or to your `srun` or `salloc` command.
+A feature `fs_*` is active if a certain filesystem is mounted and available on a node. Access to
+these filesystems are tested every few minutes on each node and the Slurm features set accordingly.
 
-## Slurm External Links
+| Feature            | Description                                                          |
+|:-------------------|:---------------------------------------------------------------------|
+| `fs_lustre_scratch2` | `/scratch` mounted read-write (mount point is `/lustre/scratch2`)    |
+| `fs_lustre_ssd`      | `/lustre/ssd` mounted read-write                                   |
+| `fs_warm_archive_ws` | `/warm_archive/ws` mounted read-only                               |
+| `fs_beegfs_global0`  | `/beegfs/global0` mounted read-write                               |
 
-- Manpages, tutorials, examples, etc: (http://slurm.schedmd.com/)
-- Comparison with other batch systems: (http://www.schedmd.com/slurmdocs/rosetta.html)
+For certain projects, specific filesystems are provided. For those,
+additional features are available, like `fs_beegfs_<projectname>`.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
index 187bd7cf82651718fb0b188edfa0c95f33621b20..396657db06766eaab6f8694ca4bed4f8014cf7f4 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
@@ -1,5 +1,358 @@
-# SlurmExamples
+# Job Examples
 
-## Array-Job with Afterok-Dependency and DataMover Usage
+## Parallel Jobs
 
-TODO
+For submitting parallel jobs, a few rules have to be understood and followed. In general, they
+depend on the type of parallelization and architecture.
+
+### OpenMP Jobs
+
+An SMP-parallel job can only run within a node, so it is necessary to include the options `-N 1` and
+`-n 1`. The maximum number of processors for an SMP-parallel program is 896 and 56 on the
+partitions `taurussmp8` and `smp2`, respectively. Please refer to the
+[partitions section](partitions_and_limits.md#memory-limits) for up-to-date information. Using the
+option `--cpus-per-task=<N>` Slurm will start one task and you will have `N` CPUs available for your
+job.  An example job file would look like:
+
+!!! example "Job file for OpenMP application"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --nodes=1
+    #SBATCH --tasks-per-node=1
+    #SBATCH --cpus-per-task=8
+    #SBATCH --time=08:00:00
+    #SBATCH -J Science1
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    ./path/to/binary
+    ```
+
+### MPI Jobs
+
+For MPI-parallel jobs one typically allocates one core per task that has to be started.
+
+!!! warning "MPI libraries"
+
+    There are different MPI libraries on ZIH systems for the different microarchitectures. Thus,
+    you have to compile the binaries specifically for the target architecture and partition. Please
+    refer to the sections [building software](../software/building_software.md) and
+    [module environments](../software/runtime_environment.md#module-environments) for detailed
+    information.
+
+!!! example "Job file for MPI application"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --ntasks=864
+    #SBATCH --time=08:00:00
+    #SBATCH -J Science1
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    srun ./path/to/binary
+    ```
+
+### Multiple Programs Running Simultaneously in a Job
+
+In this short example, our goal is to run four instances of a program concurrently in a **single**
+batch script. Of course we could also start a batch script four times with `sbatch` but this is not
+what we want to do here. Please have a look at
+[the subsection below](#running-multiple-gpu-applications-simultaneously-in-a-batch-job)
+in case you intend to run GPU programs simultaneously in a **single** job.
+
+!!! example " "
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --ntasks=4
+    #SBATCH --cpus-per-task=1
+    #SBATCH --time=01:00:00
+    #SBATCH -J PseudoParallelJobs
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    # The following sleep command was reported to fix warnings/errors with srun by users (feel free to uncomment).
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    #sleep 5
+    srun --exclusive --ntasks=1 ./path/to/binary &
+
+    echo "Waiting for parallel job steps to complete..."
+    wait
+    echo "All parallel job steps completed!"
+    ```
+
+## Requesting GPUs
+
+Slurm will allocate one or many GPUs for your job if requested. Please note that GPUs are only
+available in certain partitions, like `gpu2`, `gpu3` or `gpu2-interactive`. The option
+for `sbatch/srun` in this case is `--gres=gpu:[NUM_PER_NODE]` (where `NUM_PER_NODE` can be `1`, `2` or
+`4`, meaning that one, two or four of the GPUs per node will be used for the job).
+
+!!! example "Job file to request a GPU"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --nodes=2              # request 2 nodes
+    #SBATCH --mincpus=1            # allocate one task per node...
+    #SBATCH --ntasks=2             # ...which means 2 tasks in total (see note below)
+    #SBATCH --cpus-per-task=6      # use 6 threads per task
+    #SBATCH --gres=gpu:1           # use 1 GPU per node (i.e. use one GPU per task)
+    #SBATCH --time=01:00:00        # run for 1 hour
+    #SBATCH -A Project1            # account CPU time to Project1
+
+    srun ./your/cuda/application   # start you application (probably requires MPI to use both nodes)
+    ```
+
+Please be aware that the partitions `gpu`, `gpu1` and `gpu2` can only be used for non-interactive
+jobs which are submitted by `sbatch`.  Interactive jobs (`salloc`, `srun`) will have to use the
+partition `gpu-interactive`. Slurm will automatically select the right partition if the partition
+parameter `-p, --partition` is omitted.
+
+!!! note
+
+    Due to an unresolved issue concerning the Slurm job scheduling behavior, it is currently not
+    practical to use `--ntasks-per-node` together with GPU jobs. If you want to use multiple nodes,
+    please use the parameters `--ntasks` and `--mincpus` instead. The product `mincpus`*`nodes`
+    has to equal `ntasks` in this case.
+
+### Limitations of GPU Job Allocations
+
+The number of cores per node that are currently allowed to be allocated for GPU jobs is limited
+depending on how many GPUs are being requested. On the K80 nodes, you may only request up to 6
+cores per requested GPU (8 per GPU on the K20 nodes). This is because we do not want GPUs to remain
+unusable just because a single job occupies all cores of a node without, at the same time,
+requesting all GPUs.
+
+E.g., if you specify `--gres=gpu:2`, your total number of cores per node (meaning:
+`ntasks`*`cpus-per-task`) may not exceed 12 on the K80 nodes.
+
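+A conforming request could, for instance, look like the following sketch (assuming the K80 nodes
+are in partition `gpu2`; the application path is a placeholder):
+
+??? example "GPU job within the core-per-GPU limit"
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --nodes=1
+    #SBATCH --ntasks=2
+    #SBATCH --cpus-per-task=6     # 2 tasks x 6 cores = 12 cores, i.e., 6 cores per requested GPU
+    #SBATCH --gres=gpu:2          # 2 GPUs per node
+    #SBATCH --time=01:00:00
+    #SBATCH --partition=gpu2
+
+    srun ./your/cuda/application
+    ```
+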
+Note that this also has implications for the use of the `--exclusive` parameter. Since this sets the
+number of allocated cores to 24 (or 16 on the K20X nodes), you also **must** request all four GPUs
+by specifying `--gres=gpu:4`, otherwise your job will not start. In the case of `--exclusive`, it won't
+be denied on submission, because this is evaluated in a later scheduling step. Jobs that directly
+request too many cores per GPU will be denied with the error message:
+
+```console
+Batch job submission failed: Requested node configuration is not available
+```
+
+### Running Multiple GPU Applications Simultaneously in a Batch Job
+
+Our starting point is a (serial) program that needs a single GPU and four CPU cores to perform its
+task (e.g. TensorFlow). The following batch script shows how to run such a job on the partition `ml`.
+
+!!! example
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --ntasks=1
+    #SBATCH --cpus-per-task=4
+    #SBATCH --gres=gpu:1
+    #SBATCH --gpus-per-task=1
+    #SBATCH --time=01:00:00
+    #SBATCH --mem-per-cpu=1443
+    #SBATCH --partition=ml
+
+    srun some-gpu-application
+    ```
+
+When `srun` is used within a submission script, it inherits parameters from `sbatch`, including
+`--ntasks=1`, `--cpus-per-task=4`, etc. So we actually implicitly run the following:
+
+```bash
+srun --ntasks=1 --cpus-per-task=4 ... --partition=ml some-gpu-application
+```
+
+Now, our goal is to run four instances of this program concurrently in a single batch script. Of
+course we could also start the above script multiple times with `sbatch`, but this is not what we want
+to do here.
+
+#### Solution
+
+In order to run multiple programs concurrently in a single batch script/allocation we have to do
+three things:
+
+1. Allocate enough resources to accommodate multiple instances of our program. This can be achieved
+   with an appropriate batch script header (see below).
+1. Start job steps with `srun` as background processes. This is achieved by adding an ampersand at
+   the end of the `srun` command.
+1. Make sure that each background process gets its private resources. We need to set the resource
+   fraction needed for a single run in the corresponding srun command. The total aggregated
+   resources of all job steps must fit in the allocation specified in the batch script header.
+   Additionally, the option `--exclusive` is needed to make sure that each job step is provided with
+   its private set of CPU and GPU resources.  The following example shows how four independent
+   instances of the same program can be run concurrently from a single batch script. Each instance
+   (task) is equipped with 4 CPUs (cores) and one GPU.
+
+!!! example "Job file simultaneously executing four independent instances of the same program"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --ntasks=4
+    #SBATCH --cpus-per-task=4
+    #SBATCH --gres=gpu:4
+    #SBATCH --gpus-per-task=1
+    #SBATCH --time=01:00:00
+    #SBATCH --mem-per-cpu=1443
+    #SBATCH --partition=ml
+
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+    srun --exclusive --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --gpus-per-task=1 --mem-per-cpu=1443 some-gpu-application &
+
+    echo "Waiting for all job steps to complete..."
+    wait
+    echo "All jobs completed!"
+    ```
+
+In practice it is possible to leave out resource options in `srun` that do not differ from the ones
+inherited from the surrounding `sbatch` context. The following line would be sufficient to do the
+job in this example:
+
+```bash
+srun --exclusive --gres=gpu:1 --ntasks=1 some-gpu-application &
+```
+
+Yet, it adds some extra safety to leave them in, enabling the Slurm batch system to complain if not
+enough resources in total were specified in the header of the batch script.
+
+## Exclusive Jobs for Benchmarking
+
+Jobs on ZIH systems run, by default, in shared mode, meaning that multiple jobs (from different
+users) can run at the same time on the same compute node. Sometimes, this behavior is not desired
+(e.g., for benchmarking purposes). The Slurm parameter `--exclusive` requests exclusive usage of
+resources.
+
+Setting `--exclusive` **only** makes sure that there will be **no other jobs running on your nodes**.
+It does not, however, mean that you automatically get access to all the resources which the node
+might provide without explicitly requesting them, e.g. you still have to request a GPU via the
+generic resources parameter (`gres`) to run on the partitions with GPU, or you still have to
+request all cores of a node if you need them. CPU cores can either be used for a task
+(`--ntasks`) or for multi-threading within the same task (`--cpus-per-task`). Since those two
+options are semantically different (e.g., the former will influence how many MPI processes will be
+spawned by `srun` whereas the latter does not), Slurm cannot determine automatically which of the
+two you might want to use. Since we use cgroups for separation of jobs, your job is not allowed to
+use more resources than requested.
+
+If you just want to use all available cores in a node, you have to specify how Slurm should organize
+them, like with `-p haswell -c 24` or `-p haswell --ntasks-per-node=24`.
+
+Here is a short example to ensure that a benchmark is not spoiled by other jobs, even if it doesn't
+use up all resources in the nodes:
+
+!!! example "Exclusive resources"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH -p haswell
+    #SBATCH --nodes=2
+    #SBATCH --ntasks-per-node=2
+    #SBATCH --cpus-per-task=8
+    #SBATCH --exclusive    # ensure that nobody spoils my measurement on 2 x 2 x 8 cores
+    #SBATCH --time=00:10:00
+    #SBATCH -J Benchmark
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    srun ./my_benchmark
+    ```
+
+## Array Jobs
+
+Array jobs can be used to create a sequence of jobs that share the same executable and resource
+requirements, but have different input files, to be submitted, controlled, and monitored as a single
+unit. The option is `-a, --array=<indexes>` where the parameter `indexes` specifies the array
+indices. The following specifications are possible:
+
+* comma separated list, e.g., `--array=0,1,2,17`,
+* range based, e.g., `--array=0-42`,
+* step based, e.g., `--array=0-15:4`,
+* mix of comma separated and range base, e.g., `--array=0,1,2,16-42`.
+
+A maximum number of simultaneously running tasks from the job array may be specified using the `%`
+separator. The specification `--array=0-23%8` limits the number of simultaneously running tasks from
+this job array to 8.
+
+Within the job you can read the environment variables `SLURM_ARRAY_JOB_ID` and
+`SLURM_ARRAY_TASK_ID`, which are set to the job ID of the array and to the individual index of each
+task, respectively.
+
+Within an array job, you can use `%a` and `%A` in addition to `%j` and `%N` to make the output file
+name specific to the job:
+
+* `%A` will be replaced by the value of `SLURM_ARRAY_JOB_ID`
+* `%a` will be replaced by the value of `SLURM_ARRAY_TASK_ID`
+
+!!! example "Job file using job arrays"
+
+    ```Bash
+    #!/bin/bash
+    #SBATCH --array 0-9
+    #SBATCH -o arraytest-%A_%a.out
+    #SBATCH -e arraytest-%A_%a.err
+    #SBATCH --ntasks=864
+    #SBATCH --time=08:00:00
+    #SBATCH -J Science1
+    #SBATCH --mail-type=end
+    #SBATCH --mail-user=your.name@tu-dresden.de
+
+    echo "Hi, I am step $SLURM_ARRAY_TASK_ID in this array job $SLURM_ARRAY_JOB_ID"
+    ```
+
+!!! note
+
+    If you submit a large number of jobs doing heavy I/O in the Lustre filesystems, you should
+    limit the number of your simultaneously running jobs with a second parameter like:
+
+    ```Bash
+    #SBATCH --array=1-100000%100
+    ```
+
+Please read the [Slurm documentation on sbatch](https://slurm.schedmd.com/sbatch.html) for further
+details.
+
+## Chain Jobs
+
+You can use chain jobs to create dependencies between jobs. This is often the case if a job relies
+on the result of one or more preceding jobs. Chain jobs can also be used if the runtime limit of the
+batch queues is not sufficient for your job. Slurm has an option
+`-d, --dependency=<dependency_list>` that allows you to specify that a job is only allowed to start
+if another job has finished.
+
+Here is an example of a chain job. It submits 4 jobs (described in a job file) that will be
+executed one after another with different numbers of CPUs:
+
+!!! example "Script to submit jobs with dependencies"
+
+    ```Bash
+    #!/bin/bash
+    TASK_NUMBERS="1 2 4 8"
+    DEPENDENCY=""
+    JOB_FILE="myjob.slurm"
+
+    for TASKS in $TASK_NUMBERS ; do
+        JOB_CMD="sbatch --ntasks=$TASKS"
+        if [ -n "$DEPENDENCY" ] ; then
+            JOB_CMD="$JOB_CMD --dependency afterany:$DEPENDENCY"
+        fi
+        JOB_CMD="$JOB_CMD $JOB_FILE"
+        echo -n "Running command: $JOB_CMD  "
+        OUT=`$JOB_CMD`
+        echo "Result: $OUT"
+        DEPENDENCY=`echo $OUT | awk '{print $4}'`
+    done
+    ```
+
+## Array-Job with Afterok-Dependency and Datamover Usage
+
+This is a *todo*
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
new file mode 100644
index 0000000000000000000000000000000000000000..273a87710602b62feb97c342335b4c44f30ad09e
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
@@ -0,0 +1,62 @@
+# Job Profiling
+
+Slurm offers the option to gather profiling data from every task/node of the job. Analyzing this
+data allows for a better understanding of your jobs in terms of elapsed time, runtime and I/O
+behavior, and much more.
+
+The following data can be gathered:
+
+* Task data, such as CPU frequency, CPU utilization, memory consumption (RSS and VMSize), I/O
+* Energy consumption of the nodes
+* Infiniband data (currently deactivated)
+* Lustre filesystem data (currently deactivated)
+
+The data is sampled at a fixed rate (i.e., every 5 seconds) and is stored in an HDF5 file.
+
+!!! note "Data hygiene"
+
+    Please be aware that the profiling data may be quite large, depending on job size, runtime, and
+    sampling rate. Always remove the local profiles from `/lustre/scratch2/profiling/${USER}`,
+    either by merging them with `sh5util` as shown below or by simply removing those files.
+
+## Examples
+
+The following examples of `srun` profiling command lines are meant to replace the current `srun`
+line within your job file.
+
+??? example "Create profiling data"
+
+    (`--acctg-freq` is the sampling rate in seconds)
+
+    ```console
+    # Energy and task profiling
+    srun --profile=All --acctg-freq=5,energy=5 -n 32 ./a.out
+    # Task profiling data only
+    srun --profile=All --acctg-freq=5 -n 32 ./a.out
+    ```
+
+??? example "Merge the node local files"
+
+    ... in `/lustre/scratch2/profiling/${USER}` to single file.
+
+    ```console
+    # (without -o option output file defaults to job_$JOBID.h5)
+    sh5util -j <JOBID> -o profile.h5
+    # in jobscripts or in interactive sessions (via salloc):
+    sh5util -j ${SLURM_JOBID} -o profile.h5
+    ```
+
+??? example "View data"
+
+    ```console
+    marie@login$ module load HDFView
+    marie@login$ hdfview.sh profile.h5
+    ```
+
+![HDFView Memory](misc/hdfview_memory.png)
+{: align="center"}
+
+More information about profiling with Slurm:
+
+- [Slurm Profiling](http://slurm.schedmd.com/hdf5_profile_user_guide.html)
+- [`sh5util`](http://slurm.schedmd.com/sh5util.html)
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
deleted file mode 100644
index 3625bf4503d4b41d73fc7a9de6c02dabc3d3feec..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
+++ /dev/null
@@ -1,210 +0,0 @@
-# Taurus
-
-## Information about the Hardware
-
-Detailed information on the current HPC hardware can be found
-[here.](../jobs_and_resources/hardware_taurus.md)
-
-## Applying for Access to the System
-
-Project and login application forms for taurus are available
-[here](../access/overview.md).
-
-## Login to the System
-
-Login to the system is available via ssh at taurus.hrsk.tu-dresden.de.
-There are several login nodes (internally called tauruslogin3 to
-tauruslogin6). Currently, if you use taurus.hrsk.tu-dresden.de, you will
-be placed on tauruslogin5. It might be a good idea to give the other
-login nodes a try if the load on tauruslogin5 is rather high (there will
-once again be load balancer soon, but at the moment, there is none).
-
-Please note that if you store data on the local disk (e.g. under /tmp),
-it will be on only one of the three nodes. If you relogin and the data
-is not there, you are probably on another node.
-
-You can find an list of fingerprints [here](../access/key_fingerprints.md).
-
-## Transferring Data from/to Taurus
-
-taurus has two specialized data transfer nodes. Both nodes are
-accessible via `taurusexport.hrsk.tu-dresden.de`. Currently, only rsync,
-scp and sftp to these nodes will work. A login via SSH is not possible
-as these nodes are dedicated to data transfers.
-
-These nodes are located behind a firewall. By default, they are only
-accessible from IP addresses from with the Campus of the TU Dresden.
-External IP addresses can be enabled upon request. These requests should
-be send via eMail to `servicedesk@tu-dresden.de` and mention the IP
-address range (or node names), the desired protocol and the time frame
-that the firewall needs to be open.
-
-We are open to discuss options to export the data in the scratch file
-system via CIFS or other protocols. If you have a need for this, please
-contact the Service Desk as well.
-
-**Phase 2:** The nodes taurusexport\[3,4\] provide access to the
-`/scratch` file system of the second phase.
-
-## Compiling Parallel Applications
-
-You have to explicitly load a compiler module and an MPI module on
-Taurus. Eg. with `module load GCC OpenMPI`. ( [read more about
-Modules](../software/runtime_environment.md), **todo link** (read more about
-Compilers)(Compendium.Compilers))
-
-Use the wrapper commands like e.g. `mpicc` (`mpiicc` for intel),
-`mpicxx` (`mpiicpc`) or `mpif90` (`mpiifort`) to compile MPI source
-code. To reveal the command lines behind the wrappers, use the option
-`-show`.
-
-For running your code, you have to load the same compiler and MPI module
-as for compiling the program. Please follow the following guiedlines to
-run your parallel program using the batch system.
-
-## Batch System
-
-Applications on an HPC system can not be run on the login node. They
-have to be submitted to compute nodes with dedicated resources for the
-user's job. Normally a job can be submitted with these data:
-
--   number of CPU cores,
--   requested CPU cores have to belong on one node (OpenMP programs) or
-    can distributed (MPI),
--   memory per process,
--   maximum wall clock time (after reaching this limit the process is
-    killed automatically),
--   files for redirection of output and error messages,
--   executable and command line parameters.
-
-The batch system on Taurus is Slurm. If you are migrating from LSF
-(deimos, mars, atlas), the biggest difference is that Slurm has no
-notion of batch queues any more.
-
--   [General information on the Slurm batch system](slurm.md)
--   Slurm also provides process-level and node-level [profiling of
-    jobs](slurm.md#Job_Profiling)
-
-### Partitions
-
-Please note that the islands are also present as partitions for the
-batch systems. They are called
-
--   romeo (Island 7 - AMD Rome CPUs)
--   julia (large SMP machine)
--   haswell (Islands 4 to 6 - Haswell CPUs)
--   gpu (Island 2 - GPUs)
-    -   gpu2 (K80X)
--   smp2 (SMP Nodes)
-
-**Note:** usually you don't have to specify a partition explicitly with
-the parameter -p, because SLURM will automatically select a suitable
-partition depending on your memory and gres requirements.
-
-### Run-time Limits
-
-**Run-time limits are enforced**. This means, a job will be canceled as
-soon as it exceeds its requested limit. At Taurus, the maximum run time
-is 7 days.
-
-Shorter jobs come with multiple advantages:\<img alt="part.png"
-height="117" src="%ATTACHURL%/part.png" style="float: right;"
-title="part.png" width="284" />
-
--   lower risk of loss of computing time,
--   shorter waiting time for reservations,
--   higher job fluctuation; thus, jobs with high priorities may start
-    faster.
-
-To bring down the percentage of long running jobs we restrict the number
-of cores with jobs longer than 2 days to approximately 50% and with jobs
-longer than 24 to 75% of the total number of cores. (These numbers are
-subject to changes.) As best practice we advise a run time of about 8h.
-
-Please always try to make a good estimation of your needed time limit.
-For this, you can use a command line like this to compare the requested
-timelimit with the elapsed time for your completed jobs that started
-after a given date:
-
-    sacct -X -S 2021-01-01 -E now --format=start,JobID,jobname,elapsed,timelimit -s COMPLETED
-
-Instead of running one long job, you should split it up into a chain
-job. Even applications that are not capable of chreckpoint/restart can
-be adapted. The HOWTO can be found [here](../jobs_and_resources/checkpoint_restart.md),
-
-### Memory Limits
-
-**Memory limits are enforced.** This means that jobs which exceed their
-per-node memory limit will be killed automatically by the batch system.
-Memory requirements for your job can be specified via the *sbatch/srun*
-parameters: **--mem-per-cpu=\<MB>** or **--mem=\<MB>** (which is "memory
-per node"). The **default limit** is **300 MB** per cpu.
-
-Taurus has sets of nodes with a different amount of installed memory
-which affect where your job may be run. To achieve the shortest possible
-waiting time for your jobs, you should be aware of the limits shown in
-the following table.
-
-| Partition          | Nodes                                    | # Nodes | Cores per Node  | Avail. Memory per Core | Avail. Memory per Node | GPUs per node     |
-|:-------------------|:-----------------------------------------|:--------|:----------------|:-----------------------|:-----------------------|:------------------|
-| `haswell64`        | `taurusi[4001-4104,5001-5612,6001-6612]` | `1328`  | `24`            | `2541 MB`              | `61000 MB`             | `-`               |
-| `haswell128`       | `taurusi[4105-4188]`                     | `84`    | `24`            | `5250 MB`              | `126000 MB`            | `-`               |
-| `haswell256`       | `taurusi[4189-4232]`                     | `44`    | `24`            | `10583 MB`             | `254000 MB`            | `-`               |
-| `broadwell`        | `taurusi[4233-4264]`                     | `32`    | `28`            | `2214 MB`              | `62000 MB`             | `-`               |
-| `smp2`             | `taurussmp[3-7]`                         | `5`     | `56`            | `36500 MB`             | `2044000 MB`           | `-`               |
-| `gpu2`             | `taurusi[2045-2106]`                     | `62`    | `24`            | `2583 MB`              | `62000 MB`             | `4 (2 dual GPUs)` |
-| `gpu2-interactive` | `taurusi[2045-2108]`                     | `64`    | `24`            | `2583 MB`              | `62000 MB`             | `4 (2 dual GPUs)` |
-| `hpdlf`            | `taurusa[3-16]`                          | `14`    | `12`            | `7916 MB`              | `95000 MB`             | `3`               |
-| `ml`               | `taurusml[1-32]`                         | `32`    | `44 (HT: 176)`  | `1443 MB*`             | `254000 MB`            | `6`               |
-| `romeo`            | `taurusi[7001-7192]`                     | `192`   | `128 (HT: 256)` | `1972 MB*`             | `505000 MB`            | `-`               |
-| `julia`            | `taurussmp8`                             | `1`     | `896`           | `27343 MB*`            | `49000000 MB`          | `-`               |
-
-\* note that the ML nodes have 4way-SMT, so for every physical core
-allocated (e.g., with SLURM_HINT=nomultithread), you will always get
-4\*1443MB because the memory of the other threads is allocated
-implicitly, too.
-
-### Submission of Parallel Jobs
-
-To run MPI jobs ensure that the same MPI module is loaded as during
-compile-time. In doubt, check you loaded modules with `module list`. If
-your code has been compiled with the standard `bullxmpi` installation,
-you can load the module via `module load bullxmpi`. Alternative MPI
-libraries (`intelmpi`, `openmpi`) are also available.
-
-Please pay attention to the messages you get loading the module. They
-are more up-to-date than this manual.
-
-## GPUs
-
-Island 2 of taurus contains a total of 128 NVIDIA Tesla K80 (dual) GPUs
-in 64 nodes.
-
-More information on how to program applications for GPUs can be found
-[GPU Programming](GPU Programming).
-
-The following software modules on taurus offer GPU support:
-
--   `CUDA` : The NVIDIA CUDA compilers
--   `PGI` : The PGI compilers with OpenACC support
-
-## Hardware for Deep Learning (HPDLF)
-
-The partition hpdlf contains 14 servers. Each of them has:
-
--   2 sockets CPU E5-2603 v4 (1.70GHz) with 6 cores each,
--   3 consumer GPU cards NVIDIA GTX1080,
--   96 GB RAM.
-
-## Energy Measurement
-
-Taurus contains sophisticated energy measurement instrumentation.
-Especially HDEEM is available on the haswell nodes of Phase II. More
-detailed information can be found at
-**todo link** (EnergyMeasurement)(EnergyMeasurement).
-
-## Low level optimizations
-
-x86 processsors provide registers that can be used for optimizations and
-performance monitoring. Taurus provides you access to such features via
-the **todo link** (X86Adapt)(X86Adapt) software infrastructure.
diff --git a/doc.zih.tu-dresden.de/docs/legal_notice.md b/doc.zih.tu-dresden.de/docs/legal_notice.md
index 3412a3a0a511d26d1a8bf8e730161622fb7930d9..a5e187ee3f5eb9937e8eb01c33eed182fb2c423d 100644
--- a/doc.zih.tu-dresden.de/docs/legal_notice.md
+++ b/doc.zih.tu-dresden.de/docs/legal_notice.md
@@ -1,8 +1,10 @@
-# Legal Notice / Impressum
+# Legal Notice
+
+## Impressum
 
 Es gilt das [Impressum der TU Dresden](https://tu-dresden.de/impressum) mit folgenden Änderungen:
 
-## Ansprechpartner/Betreiber:
+### Ansprechpartner/Betreiber:
 
 Technische Universität Dresden
 Zentrum für Informationsdienste und Hochleistungsrechnen
@@ -12,7 +14,7 @@ Tel.: +49 351 463-40000
 Fax: +49 351 463-42328
 E-Mail: servicedesk@tu-dresden.de
 
-## Konzeption, Technische Umsetzung, Anbieter:
+### Konzeption, Technische Umsetzung, Anbieter:
 
 Technische Universität Dresden
 Zentrum für Informationsdienste und Hochleistungsrechnen
@@ -22,3 +24,10 @@ Prof. Dr. Wolfgang E. Nagel
 Tel.: +49 351 463-35450
 Fax: +49 351 463-37773
 E-Mail: zih@tu-dresden.de
+
+## License
+
+This documentation and the repository have two licenses:
+
+* All documentation is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
+* All software components are licensed under the MIT license.
diff --git a/doc.zih.tu-dresden.de/docs/misc/HPC-Introduction.pdf b/doc.zih.tu-dresden.de/docs/misc/HPC-Introduction.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71d47f04b75004fad2b9fd7181051c2beae4e2fe
Binary files /dev/null and b/doc.zih.tu-dresden.de/docs/misc/HPC-Introduction.pdf differ
diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 7eb432bd9f963ed2ecc0133db2ad5f04c9b67b8c..9bc564d05a310005edc1d5564549db8da08ee415 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -6,8 +6,8 @@
 
 [Apache Spark](https://spark.apache.org/), [Apache Flink](https://flink.apache.org/)
 and [Apache Hadoop](https://hadoop.apache.org/) are frameworks for processing and integrating
-Big Data. These frameworks are also offered as software [modules](modules.md) on both `ml` and
-`scs5` partition. You can check module versions and availability with the command
+Big Data. These frameworks are also offered as software [modules](modules.md) in both `ml` and
+`scs5` software environments. You can check module versions and availability with the command
 
 ```console
 marie@login$ module avail Spark
@@ -46,20 +46,20 @@ as via [Jupyter notebook](#jupyter-notebook). All three ways are outlined in the
 
 ### Default Configuration
 
-The Spark module is available for both `scs5` and `ml` partitions.
+The Spark module is available in both `scs5` and `ml` environments.
 Thus, Spark can be executed using different CPU architectures, e.g., Haswell and Power9.
 
 Let us assume that two nodes should be used for the computation. Use a
 `srun` command similar to the following to start an interactive session
-using the Haswell partition. The following code snippet shows a job submission
-to Haswell nodes with an allocation of two nodes with 60 GB main memory
+using the partition haswell. The following code snippet shows a job submission
+to haswell nodes with an allocation of two nodes with 60 GB main memory
 exclusively for one hour:
 
 ```console
 marie@login$ srun --partition=haswell -N 2 --mem=60g --exclusive --time=01:00:00 --pty bash -l
 ```
 
-The command for different resource allocation on the `ml` partition is
+The command for different resource allocation on the partition `ml` is
 similar, e. g. for a job submission to `ml` nodes with an allocation of one
 node, one task per node, two CPUs per task, one GPU per node, with 10000 MB for one hour:
 
diff --git a/doc.zih.tu-dresden.de/docs/software/compilers.md b/doc.zih.tu-dresden.de/docs/software/compilers.md
index 4292602e02e77bf01ad04c8c01643aadcc8c580a..7bb9c3c4b9f3a65151d5292ff587decd306e35c9 100644
--- a/doc.zih.tu-dresden.de/docs/software/compilers.md
+++ b/doc.zih.tu-dresden.de/docs/software/compilers.md
@@ -55,10 +55,10 @@ pages or use the option `--help` to list all options of the compiler.
 | `-fprofile-use`      | `-prof-use`  | `-Mpfo`     | use profile data for optimization      |
 
 !!! note
-    We can not generally give advice as to which option should be used.
-    To gain maximum performance please test the compilers and a few combinations of
-    optimization flags.
-    In case of doubt, you can also contact [HPC support](../support.md) and ask the staff for help.
+
+    We can not generally give advice as to which option should be used. To gain maximum performance
+    please test the compilers and a few combinations of optimization flags. In case of doubt, you
+    can also contact [HPC support](../support/support.md) and ask the staff for help.
 
 ### Architecture-specific Optimizations
 
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
index 21966e1f3f03416e1a080a391894f370f9f1a5a8..72224113fdf8a9c6f4727d47771283dc1d0c1baa 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
@@ -7,7 +7,7 @@ algorithms and graphical techniques.  R is an integrated suite of software facil
 manipulation, calculation and graphing.
 
 We recommend using the partitions Haswell and/or Romeo to work with R. For more details
-see our [hardware documentation](../jobs_and_resources/hardware_taurus.md).
+see our [hardware documentation](../jobs_and_resources/hardware_overview.md).
 
 ## R Console
 
@@ -256,7 +256,7 @@ code to use `mclapply` function. Check out an example below.
 
 The disadvantages of using shared-memory parallelism approach are, that the number of parallel tasks
 is limited to the number of cores on a single node. The maximum number of cores on a single node can
-be found in our [hardware documentation](../jobs_and_resources/hardware_taurus.md).
+be found in our [hardware documentation](../jobs_and_resources/hardware_overview.md).
 
 Submitting a multicore R job to Slurm is very similar to submitting an
 [OpenMP Job](../jobs_and_resources/slurm.md#binding-and-distribution-of-tasks),
diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index 1548afa1aef1dd3377490b4f6b757194f320bdea..bd45768f67c862b2a0137bd2a1656723fa6dfd91 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -2,50 +2,175 @@
 
 ## Internal Distribution
 
+Training a machine learning model can be a very time-consuming task.
+Distributed training allows scaling up deep learning tasks,
+so we can train very large models and reduce the training time.
+
+There are two paradigms for distributed training:
+
+1. data parallelism:
+each device has a replica of the model and computes over different parts of the data.
+2. model parallelism:
+models are distributed over multiple devices.
+
+In the following, we will stick to the concept of data parallelism because it is a widely-used
+technique.
+There are basically two strategies for training on data that is distributed across the devices:
+
+1. synchronous training: devices (workers) are trained over different slices of the data and at the
+end of each step gradients are aggregated.
+2. asynchronous training:
+all devices are independently trained over the data and update variables asynchronously.
+
 ### Distributed TensorFlow
 
-TODO
+[TensorFlow](https://www.tensorflow.org/guide/distributed_training) provides a high-level API to
+train your model and distribute the training on multiple GPUs or machines with minimal code changes.
 
-### Distributed PyTorch
+The primary distributed training method in TensorFlow is `tf.distribute.Strategy`.
+There are multiple strategies that distribute the training depending on the specific use case,
+the data and the model.
 
-Hint: just copied some old content as starting point
+TensorFlow refers to synchronous training as the mirrored strategy.
+There are two mirrored strategies available whose principles are the same; a minimal usage sketch
+follows the list:
 
-#### Using Multiple GPUs with PyTorch
+- `tf.distribute.MirroredStrategy` supports the training on multiple GPUs on one machine.
+- `tf.distribute.MultiWorkerMirroredStrategy` for multiple machines, each with multiple GPUs.
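+
+A minimal sketch of the single-machine case might look like the following; the model is just a
+placeholder and not part of the strategy API:
+
+```python
+import tensorflow as tf
+
+# Variables created inside the scope are mirrored across all visible GPUs.
+strategy = tf.distribute.MirroredStrategy()
+print("Number of replicas:", strategy.num_replicas_in_sync)
+
+with strategy.scope():
+    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
+    model.compile(optimizer="adam", loss="mse")
+```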
+
+The Central Storage Strategy applies to environments where the GPUs might not be able to store
+the entire model:
+
+- `tf.distribute.experimental.CentralStorageStrategy` supports the case of a single machine
+with multiple GPUs.
+
+The CPU holds the global state of the model and GPUs perform the training.
 
-Effective use of GPUs is essential, and it implies using parallelism in
-your code and model. Data Parallelism and model parallelism are effective instruments
-to improve the performance of your code in case of GPU using.
+In some cases asynchronous training might be the better choice, for example, if workers differ in
+capability, are down for maintenance, or have different priorities.
+The Parameter Server Strategy is capable of applying asynchronous training:
 
-The data parallelism is a widely-used technique. It replicates the same model to all GPUs,
-where each GPU consumes a different partition of the input data. You could see this method [here](https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html).
+- `tf.distribute.experimental.ParameterServerStrategy` requires several Parameter Servers and workers.
 
-The example below shows how to solve that problem by using model
-parallel, which, in contrast to data parallelism, splits a single model
-onto different GPUs, rather than replicating the entire model on each
-GPU. The high-level idea of model parallel is to place different sub-networks of a model onto different
-devices. As the only part of a model operates on any individual device, a set of devices can
-collectively serve a larger model.
+The Parameter Server holds the parameters and is responsible for updating
+the global state of the models.
+Each worker runs the training loop independently.
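+
+A minimal initialization sketch, assuming TensorFlow 2.4+ and a `TF_CONFIG` that describes the
+parameter servers and workers of the cluster, might look like this:
+
+```python
+import tensorflow as tf
+
+# The resolver reads the cluster layout from the TF_CONFIG environment variable.
+cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
+strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver)
+```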
 
-It is recommended to use [DistributedDataParallel]
-(https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html),
+#### Example
+
+In this case, we will go through an example with Multi Worker Mirrored Strategy.
+Multi-node training requires a `TF_CONFIG` environment variable to be set which will
+be different on each node.
+
+```console
+marie@compute$ TF_CONFIG='{"cluster": {"worker": ["10.1.10.58:12345", "10.1.10.250:12345"]}, "task": {"index": 0, "type": "worker"}}' python main.py
+```
+
+The `cluster` field describes how the cluster is set up (same on each node).
+Here, the cluster has two nodes referred to as workers.
+The `IP:port` information is listed in the `worker` array.
+The `task` field varies from node to node.
+It specifies the type and index of the node.
+In this case, the training job runs on worker 0, which is `10.1.10.58:12345`.
+We need to adapt this snippet for each node.
+The second node will have `'task': {'index': 1, 'type': 'worker'}`.
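+
+Instead of exporting the variable in the shell, `TF_CONFIG` can also be composed inside the Python
+script before the distributed strategy is created, e.g., with the standard `json` module (the
+addresses are the ones from the example above):
+
+```python
+import json
+import os
+
+# Same cluster description on every node; only the task index differs (0 here, 1 on the second node).
+os.environ["TF_CONFIG"] = json.dumps({
+    "cluster": {"worker": ["10.1.10.58:12345", "10.1.10.250:12345"]},
+    "task": {"type": "worker", "index": 0},
+})
+```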
+
+With two modifications, we can parallelize the serial code:
+We need to initialize the distributed strategy:
+
+```python
+strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
+```
+
+And define the model under the strategy scope:
+
+```python
+with strategy.scope():
+    model = resnet.resnet56(img_input=img_input, classes=NUM_CLASSES)
+    model.compile(
+        optimizer=opt,
+        loss='sparse_categorical_crossentropy',
+        metrics=['sparse_categorical_accuracy'])
+model.fit(train_dataset,
+    epochs=NUM_EPOCHS)
+```
+
+To run distributed training, the training script needs to be copied to all nodes,
+in this case to both nodes.
+TensorFlow is available as a module.
+Check for the available versions.
+The `TF_CONFIG` environment variable can be set as a prefix to the command.
+Now, run the script on the partition `alpha` simultaneously on both nodes:
+
+```bash
+#!/bin/bash
+
+#SBATCH --job-name=distr
+#SBATCH --partition=alpha
+#SBATCH --output=%j.out
+#SBATCH --error=%j.err
+#SBATCH --mem=64000
+#SBATCH --nodes=2
+#SBATCH --ntasks=2
+#SBATCH --ntasks-per-node=1
+#SBATCH --cpus-per-task=14
+#SBATCH --gres=gpu:1
+#SBATCH --time=01:00:00
+
+function print_nodelist {
+        scontrol show hostname $SLURM_NODELIST
+}
+NODE_1=$(print_nodelist | awk '{print $1}' | sort -u | head -n 1)
+NODE_2=$(print_nodelist | awk '{print $1}' | sort -u | tail -n 1)
+IP_1=$(dig +short ${NODE_1}.taurus.hrsk.tu-dresden.de)
+IP_2=$(dig +short ${NODE_2}.taurus.hrsk.tu-dresden.de)
+
+module load modenv/hiera
+module load modenv/hiera GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 TensorFlow/2.4.1
+
+# On the first node
+TF_CONFIG='{"cluster": {"worker": ["'"${NODE_1}"':33562", "'"${NODE_2}"':33561"]}, "task": {"index": 0, "type": "worker"}}' srun -w ${NODE_1} -N 1 --ntasks=1 --gres=gpu:1 python main_ddl.py &
+
+# On the second node
+TF_CONFIG='{"cluster": {"worker": ["'"${NODE_1}"':33562", "'"${NODE_2}"':33561"]}, "task": {"index": 1, "type": "worker"}}' srun -w ${NODE_2} -N 1 --ntasks=1 --gres=gpu:1 python main_ddl.py &
+
+wait
+```
+
+### Distributed PyTorch
+
+!!! note
+    This section is under construction
+
+#### Using Multiple GPUs with PyTorch
+
+The example below shows how to use model parallelism, which in contrast to
+data parallelism splits a single model onto different GPUs, rather than replicating the entire
+model on each GPU.
+The high-level idea of model parallelism is to place different sub-networks of a model onto
+different devices.
+As only part of a model operates on any individual device a set of devices can collectively
+serve a larger model.
+
+It is recommended to use
+[DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html),
 instead of this class, to do multi-GPU training, even if there is only a single node.
-See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel.
+See: Use `nn.parallel.DistributedDataParallel` instead of multiprocessing or `nn.DataParallel`.
 Check the [page](https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead) and
 [Distributed Data Parallel](https://pytorch.org/docs/stable/notes/ddp.html#ddp).
 
 Examples:
 
-1\. The parallel model. The main aim of this model to show the way how
-to effectively implement your neural network on several GPUs. It
-includes a comparison of different kinds of models and tips to improve
-the performance of your model. **Necessary** parameters for running this
-model are **2 GPU** and 14 cores (56 thread).
+1. The parallel model.
+The main aim of this model is to show how to effectively implement your
+neural network on several GPUs.
+It includes a comparison of different kinds of models and tips to improve the performance
+of your model.
+**Necessary** parameters for running this model are **2 GPUs** and 14 cores.
 
 (example_PyTorch_parallel.zip)
 
-Remember that for using [JupyterHub service](../access/jupyterhub.md)
-for PyTorch you need to create and activate
-a virtual environment (kernel) with loaded essential modules.
+Remember that for using [JupyterHub service](../access/jupyterhub.md) for PyTorch you need to
+create and activate a virtual environment (kernel) with loaded essential modules.
 
 Run the example in the same way as the previous examples.
 
@@ -54,131 +179,149 @@ Run the example in the same way as the previous examples.
 [DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel)
 (DDP) implements data parallelism at the module level which can run across multiple machines.
 Applications using DDP should spawn multiple processes and create a single DDP instance per process.
-DDP uses collective communications in the [torch.distributed]
-(https://pytorch.org/tutorials/intermediate/dist_tuto.html)
-package to synchronize gradients and buffers.
+DDP uses collective communications in the
+[torch.distributed](https://pytorch.org/tutorials/intermediate/dist_tuto.html) package to
+synchronize gradients and buffers.
 
-The tutorial could be found [here](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
+The tutorial can be found [here](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
 
-To use distributed data parallelization on ZIH system please use following
-parameters: `--ntasks-per-node` -parameter to the number of GPUs you use
-per node. Also, it could be useful to increase `memomy/cpu` parameters
-if you run larger models. Memory can be set up to:
+To use distributed data parallelism on ZIH systems, please make sure the `--ntasks-per-node`
+parameter is equal to the number of GPUs you use per node.
+Also, it can be useful to increase `memory/cpu` parameters if you run larger models.
+Memory can be set up to:
 
-`--mem=250000` and `--cpus-per-task=7` for the `ml` partition.
+- `--mem=250G` and `--cpus-per-task=7` for the partition `ml`.
+- `--mem=60G` and `--cpus-per-task=6` for the partition `gpu2`.
 
-`--mem=60000` and `--cpus-per-task=6` for the `gpu2` partition.
-
-Keep in mind that only one memory parameter (`--mem-per-cpu` = <MB> or `--mem`=<MB>) can be
-specified
+Keep in mind that only one memory parameter (`--mem-per-cpu=<MB>` or `--mem=<MB>`) can be specified.
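+
+A generic sketch of the Python side of such a DDP setup, based on the official PyTorch DDP
+documentation and not a ZIH-specific recipe, is shown below. It assumes that `MASTER_ADDR` and
+`MASTER_PORT` are exported in the job file and uses the Slurm-provided environment variables to
+derive rank and GPU:
+
+```python
+import os
+
+import torch
+import torch.distributed as dist
+from torch.nn.parallel import DistributedDataParallel as DDP
+
+# Map the Slurm task to a global rank and a local GPU.
+rank = int(os.environ["SLURM_PROCID"])
+world_size = int(os.environ["SLURM_NTASKS"])
+local_rank = int(os.environ["SLURM_LOCALID"])
+
+# Requires MASTER_ADDR and MASTER_PORT in the environment (e.g., hostname of the first node).
+dist.init_process_group(backend="nccl", init_method="env://", rank=rank, world_size=world_size)
+
+torch.cuda.set_device(local_rank)
+model = torch.nn.Linear(10, 10).cuda(local_rank)    # placeholder model
+ddp_model = DDP(model, device_ids=[local_rank])
+```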
 
 ## External Distribution
 
 ### Horovod
 
-[Horovod](https://github.com/horovod/horovod) is the open source distributed training
-framework for TensorFlow, Keras, PyTorch. It is supposed to make it easy
-to develop distributed deep learning projects and speed them up with
-TensorFlow.
+[Horovod](https://github.com/horovod/horovod) is the open source distributed training framework
+for TensorFlow, Keras and PyTorch.
+It makes it easier to develop distributed deep learning projects and speeds them up.
+Horovod scales well to a large number of nodes and has a strong focus on efficient training on
+GPUs.
 
 #### Why use Horovod?
 
-Horovod allows you to easily take a single-GPU TensorFlow and PyTorch
-program and successfully train it on many GPUs! In
-some cases, the MPI model is much more straightforward and requires far
-less code changes than the distributed code from TensorFlow for
-instance, with parameter servers. Horovod uses MPI and NCCL which gives
-in some cases better results than pure TensorFlow and PyTorch.
+Horovod allows you to easily take a single-GPU TensorFlow and PyTorch program and
+train it on many GPUs!
+In some cases, the MPI model is much more straightforward and requires far fewer code changes than
+the distributed code from TensorFlow with parameter servers, for instance.
+Horovod uses MPI and NCCL, which in some cases gives better results than
+pure TensorFlow and PyTorch.
 
 #### Horovod as a module
 
-Horovod is available as a module with **TensorFlow** or **PyTorch**for **all** module environments.
+Horovod is available as a module with **TensorFlow** or **PyTorch** for
+**all** module environments.
 Please check the [software module list](modules.md) for the current version of the software.
 Horovod can be loaded like other software on ZIH system:
 
-```Bash
-ml av Horovod            #Check available modules with Python
-module load Horovod      #Loading of the module
+```console
+marie@compute$ module spider Horovod           # Check available modules
+------------------------------------------------------------------------------------------------
+  Horovod:
+------------------------------------------------------------------------------------------------
+    Description:
+      Horovod is a distributed training framework for TensorFlow.
+
+     Versions:
+        Horovod/0.18.2-fosscuda-2019b-TensorFlow-2.0.0-Python-3.7.4
+        Horovod/0.19.5-fosscuda-2019b-TensorFlow-2.2.0-Python-3.7.4
+        Horovod/0.21.1-TensorFlow-2.4.1
+[...]
+marie@compute$ module load Horovod/0.19.5-fosscuda-2019b-TensorFlow-2.2.0-Python-3.7.4  
 ```
 
-#### Horovod installation
+Or if you want to use Horovod on the partition `alpha`, you can load it with the dependencies:
 
-However, if it is necessary to use Horovod with **PyTorch** or use
-another version of Horovod it is possible to install it manually. To
-install Horovod you need to create a virtual environment and load the
-dependencies (e.g. MPI). Installing PyTorch can take a few hours and is
-not recommended
-
-**Note:** You could work with simple examples in your home directory but **please use workspaces
-for your study and work projects** (see the Storage concept).
-
-Setup:
-
-```Bash
-srun -N 1 --ntasks-per-node=6 -p ml --time=08:00:00 --pty bash                    #allocate a Slurm job allocation, which is a set of resources (nodes)
-module load modenv/ml                                                             #Load dependencies by using modules
-module load OpenMPI/3.1.4-gcccuda-2018b
-module load Python/3.6.6-fosscuda-2018b
-module load cuDNN/7.1.4.18-fosscuda-2018b
-module load CMake/3.11.4-GCCcore-7.3.0
-virtualenv --system-site-packages <location_for_your_environment>                 #create virtual environment
-source <location_for_your_environment>/bin/activate                               #activate virtual environment
+```console
+marie@alpha$ module spider Horovod                         # Check available modules
+marie@alpha$ module load modenv/hiera  GCC/10.2.0  CUDA/11.1.1  OpenMPI/4.0.5 Horovod/0.21.1-TensorFlow-2.4.1
 ```
 
-Or when you need to use conda:
+#### Horovod installation
 
-```Bash
-srun -N 1 --ntasks-per-node=6 -p ml --time=08:00:00 --pty bash                            #allocate a Slurm job allocation, which is a set of resources (nodes)
-module load modenv/ml                                                                     #Load dependencies by using modules
-module load OpenMPI/3.1.4-gcccuda-2018b
-module load PythonAnaconda/3.6
-module load cuDNN/7.1.4.18-fosscuda-2018b
-module load CMake/3.11.4-GCCcore-7.3.0
+However, if it is necessary to use another version of Horovod, it is possible to install it
+manually. For that, you need to create a [virtual environment](python_virtual_environments.md) and
+load the dependencies (e.g. MPI).
+Installing TensorFlow can take a few hours and is not recommended.
 
-conda create --prefix=<location_for_your_environment> python=3.6 anaconda                 #create virtual environment
+##### Install Horovod for TensorFlow with Python and pip
 
-conda activate  <location_for_your_environment>                                           #activate virtual environment
-```
+This example shows the installation of Horovod for TensorFlow.
+Adapt as required and refer to the [Horovod documentation](https://horovod.readthedocs.io/en/stable/install_include.html)
+for details.
 
-Install PyTorch (not recommended)
+```console
+marie@alpha$ HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_WITH_TENSORFLOW=1 pip install --no-cache-dir horovod\[tensorflow\]
+[...]
+marie@alpha$ horovodrun --check-build
+Horovod v0.19.5:
 
-```Bash
-cd /tmp
-git clone https://github.com/pytorch/pytorch                                  #clone PyTorch from the source
-cd pytorch                                                                    #go to folder
-git checkout v1.7.1                                                           #Checkout version (example: 1.7.1)
-git submodule update --init                                                   #Update dependencies
-python setup.py install                                                       #install it with python
-```
+Available Frameworks:
+    [X] TensorFlow
+    [ ] PyTorch
+    [ ] MXNet
 
-##### Install Horovod for PyTorch with python and pip
+Available Controllers:
+    [X] MPI
+    [ ] Gloo
 
-In the example presented installation for the PyTorch without
-TensorFlow. Adapt as required and refer to the Horovod documentation for
-details.
+Available Tensor Operations:
+    [X] NCCL
+    [ ] DDL
+    [ ] CCL
+    [X] MPI
+    [ ] Gloo
 
-```Bash
-HOROVOD_GPU_ALLREDUCE=MPI HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
 ```
 
+If you want to use OpenMPI for the GPU allreduce operation, specify `HOROVOD_GPU_ALLREDUCE=MPI`.
+For better performance, it is recommended to use NCCL instead of OpenMPI.
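+
+For example, a hypothetical installation that builds Horovod with MPI for the allreduce operation
+could look like this (build flags as documented by Horovod, adapt to your needs):
+
+```console
+marie@alpha$ HOROVOD_GPU_ALLREDUCE=MPI HOROVOD_WITH_TENSORFLOW=1 pip install --no-cache-dir horovod\[tensorflow\]
+```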
+
 ##### Verify that Horovod works
 
-```Bash
-python                                           #start python
-import torch                                     #import pytorch
-import horovod.torch as hvd                      #import horovod
-hvd.init()                                       #initialize horovod
-hvd.size()
-hvd.rank()
-print('Hello from:', hvd.rank())
+```pycon
+>>> import tensorflow
+2021-10-07 16:38:55.694445: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
+>>> import horovod.tensorflow as hvd                      #import horovod
+>>> hvd.init()                                       #initialize horovod
+>>> hvd.size()
+1
+>>> hvd.rank()
+0
+>>> print('Hello from:', hvd.rank())
+Hello from: 0
 ```
 
-##### Horovod with NCCL
+#### Example
+
+Follow the steps in the [official examples](https://github.com/horovod/horovod/tree/master/examples)
+to parallelize your code.
+In Horovod, each GPU gets pinned to a process.
+You can easily start your job with the following bash script with four processes on two nodes:
 
-If you want to use NCCL instead of MPI you can specify that in the
-install command after loading the NCCL module:
+```bash
+#!/bin/bash
+#SBATCH --nodes=2
+#SBATCH --ntasks=4
+#SBATCH --ntasks-per-node=2
+#SBATCH --gres=gpu:2
+#SBATCH --partition=ml
+#SBATCH --mem=250G
+#SBATCH --time=01:00:00
+#SBATCH --output=run_horovod.out
 
-```Bash
-module load NCCL/2.3.7-fosscuda-2018b
-HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
+module load modenv/ml
+module load Horovod/0.19.5-fosscuda-2019b-TensorFlow-2.2.0-Python-3.7.4
+
+srun python <your_program.py>
 ```
+
+Do not forget to specify the total number of tasks `--ntasks` and the number of tasks per node
+`--ntasks-per-node` which must match the number of GPUs per node.
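+
+Inside the script, the usual Horovod pattern for TensorFlow/Keras, following the official Horovod
+examples, looks roughly like the sketch below; model and optimizer are placeholders:
+
+```python
+import horovod.tensorflow.keras as hvd
+import tensorflow as tf
+
+hvd.init()                                          # initialize Horovod
+
+# Pin each process to a single GPU (one process per GPU).
+gpus = tf.config.experimental.list_physical_devices("GPU")
+if gpus:
+    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")
+
+model = tf.keras.Sequential([tf.keras.layers.Dense(10)])    # placeholder model
+opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
+model.compile(optimizer=opt, loss="mse")
+
+# Ensure all workers start from the same initial state.
+callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
+```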
diff --git a/doc.zih.tu-dresden.de/docs/software/fem_software.md b/doc.zih.tu-dresden.de/docs/software/fem_software.md
index bd65ea9832462bae475841f2e3ed2fa8193e3355..65fac88d7f09d0443468e7e35bc32630f424de51 100644
--- a/doc.zih.tu-dresden.de/docs/software/fem_software.md
+++ b/doc.zih.tu-dresden.de/docs/software/fem_software.md
@@ -1,247 +1,238 @@
 # FEM Software
 
-For an up-to-date list of the installed software versions on our
-cluster, please refer to SoftwareModulesList **TODO LINK** (SoftwareModulesList).
+!!! hint "Its all in the modules"
 
-## Abaqus
-
-[ABAQUS](http://www.hks.com) **TODO links to realestate site** is a general-purpose finite-element program
-designed for advanced linear and nonlinear engineering analysis
-applications with facilities for linking-in user developed material
-models, elements, friction laws, etc.
-
-Eike Dohmen (from Inst.f. Leichtbau und Kunststofftechnik) sent us the
-attached description of his ABAQUS calculations. Please try to adapt
-your calculations in that way.\<br />Eike is normally a Windows-User and
-his description contains also some hints for basic Unix commands. (
-ABAQUS-SLURM.pdf **TODO LINK** (%ATTACHURL%/ABAQUS-SLURM.pdf) - only in German)
-
-Please note: Abaqus calculations should be started with a batch script.
-Please read the information about the Batch System **TODO LINK **  (BatchSystems)
-SLURM.
-
-The detailed Abaqus documentation can be found at
-abaqus **TODO LINK MISSING** (only accessible from within the
-TU Dresden campus net).
+    All packages described in this section are organized in so-called modules. To list the
+    available versions of a package, e.g., ANSYS, and load a particular version, invoke the commands
 
-**Example - Thanks to Benjamin Groeger, Inst. f. Leichtbau und
-Kunststofftechnik) **
-
-1. Prepare an Abaqus input-file (here the input example from Benjamin)
-
-Rot-modell-BenjaminGroeger.inp **TODO LINK**  (%ATTACHURL%/Rot-modell-BenjaminGroeger.inp)
-
-2. Prepare a batch script on taurus like this
-
-```
-#!/bin/bash<br>
-### Thanks to Benjamin Groeger, Institut fuer Leichtbau und Kunststofftechnik, 38748<br />### runs on taurus and needs ca 20sec with 4cpu<br />### generates files:
-###  yyyy.com
-###  yyyy.dat
-###  yyyy.msg
-###  yyyy.odb
-###  yyyy.prt
-###  yyyy.sim
-###  yyyy.sta
-#SBATCH --nodes=1  ### with &gt;1 node abaqus needs a nodeliste
-#SBATCH --ntasks-per-node=4
-#SBATCH --mem=500  ### memory (sum)
-#SBATCH --time=00:04:00
-### give a name, what ever you want
-#SBATCH --job-name=yyyy
-### you get emails when the job will finished or failed
-### set your right email
-#SBATCH --mail-type=END,FAIL
-#SBATCH --mail-user=xxxxx.yyyyyy@mailbox.tu-dresden.de
-### set your project
-#SBATCH -A p_xxxxxxx
-### Abaqus have its own MPI
-unset SLURM_GTIDS
-### load and start
-module load ABAQUS/2019
-abaqus interactive input=Rot-modell-BenjaminGroeger.inp job=yyyy cpus=4 mp_mode=mpi
+    ```console
+    marie@login$ module avail ANSYS
+    [...]
+    marie@login$ module load ANSYS/<version>
+    ```
 
-```
+    The section [runtime environment](runtime_environment.md) provides a comprehensive overview
+    of the module system and relevant commands.
 
-3. Start the batch script (name of our script is
-"batch-Rot-modell-BenjaminGroeger")
+## Abaqus
 
-```
-sbatch batch-Rot-modell-BenjaminGroeger      --->; you will get a jobnumber = JobID (for example 3130522)
-```
+[Abaqus](https://www.3ds.com/de/produkte-und-services/simulia/produkte/abaqus/) is a general-purpose
+finite element method program designed for advanced linear and nonlinear engineering analysis
+applications with facilities for linking-in user developed material models, elements, friction laws,
+etc.
 
-4. Control the status of the job
+### Guide by User
 
-```
-squeue -u your_login     -->; in column "ST" (Status) you will find a R=Running or P=Pending (waiting for resources)
-```
+Eike Dohmen (from Inst. f. Leichtbau und Kunststofftechnik) sent us the description of his
+Abaqus calculations. Please try to adapt your calculations in that way. Eike is normally a
+Windows user and his description contains also some hints for basic Unix commands:
+[Abaqus-Slurm.pdf (only in German)](misc/ABAQUS-SLURM.pdf).
 
-## ANSYS
+### General
 
-ANSYS is a general-purpose finite-element program for engineering
-analysis, and includes preprocessing, solution, and post-processing
-functions. It is used in a wide range of disciplines for solutions to
-mechanical, thermal, and electronic problems. [ANSYS and ANSYS
-CFX](http://www.ansys.com) used to be separate packages in the past and
-are now combined.
+Abaqus calculations should be started using a job file (aka. batch script). Please refer to the
+page covering the [batch system Slurm](../jobs_and_resources/slurm.md) if you are not familiar with
+Slurm or [writing job files](../jobs_and_resources/slurm.md#job-files).
 
-ANSYS, like all other installed software, is organized in so-called
-modules **TODO LINK** (RuntimeEnvironment). To list the available versions and load a
-particular ANSYS version, type
+??? example "Usage of Abaqus"
 
-```
-module avail ANSYS
-...
-module load ANSYS/VERSION
-```
+    (Thanks to Benjamin Groeger, Inst. f. Leichtbau und Kunststofftechnik.)
 
-In general, HPC-systems are not designed for interactive "GUI-working".
-Even so, it is possible to start a ANSYS workbench on Taurus (login
-nodes) interactively for short tasks. The second and recommended way is
-to use batch files. Both modes are documented in the following.
+    1. Prepare an Abaqus input-file. You can start with the input example from Benjamin:
+    [Rot-modell-BenjaminGroeger.inp](misc/Rot-modell-BenjaminGroeger.inp)
+    2. Prepare a job file on ZIH systems like this
+    ```bash
+    #!/bin/bash
+    ### needs ca. 20 sec with 4 CPUs
+    ### generates files:
+    ###  yyyy.com
+    ###  yyyy.dat
+    ###  yyyy.msg
+    ###  yyyy.odb
+    ###  yyyy.prt
+    ###  yyyy.sim
+    ###  yyyy.sta
+    #SBATCH --nodes=1               # with >1 node Abaqus needs a node list
+    #SBATCH --ntasks-per-node=4
+    #SBATCH --mem=500               # total memory
+    #SBATCH --time=00:04:00
+    #SBATCH --job-name=yyyy         # give it a name, whatever you want
+    #SBATCH --mail-type=END,FAIL    # send email when the job finished or failed
+    #SBATCH --mail-user=<name>@mailbox.tu-dresden.de  # set your email
+    #SBATCH -A p_xxxxxxx            # charge compute time to your project
+
+
+    # Abaqus has its own MPI
+    unset SLURM_GTIDS
+
+    # load module and start Abaqus
+    module load ABAQUS/2019
+    abaqus interactive input=Rot-modell-BenjaminGroeger.inp job=yyyy cpus=4 mp_mode=mpi
+    ```
+    3. Start the job file (e.g., name `batch-Rot-modell-BenjaminGroeger.sh`)
+    ```console
+    marie@login$ sbatch batch-Rot-modell-BenjaminGroeger.sh      # Slurm will provide the Job Id (e.g., 3130522)
+    ```
+    4. Control the status of the job
+    ```console
+    marie@login$ squeue -u marie     # in column "ST" (Status) you will find R=Running or PD=Pending (waiting for resources)
+    ```
+
+## Ansys
+
+Ansys is a general-purpose finite element method program for engineering analysis, and includes
+preprocessing, solution, and post-processing functions. It is used in a wide range of disciplines
+for solutions to mechanical, thermal, and electronic problems.
+[Ansys and Ansys CFX](http://www.ansys.com) used to be separate packages in the past and are now
+combined.
+
+In general, HPC systems are not designed for interactive work with GUIs. Even so, it is possible to
+start an Ansys workbench on the login nodes interactively for short tasks. The second and
+**recommended way** is to use job files. Both modes are documented in the following.
+
+!!! note ""
+
+    Since the MPI library that Ansys uses internally (Platform MPI) has some problems integrating
+    seamlessly with Slurm, you have to unset the environment variable `SLURM_GTIDS` in your
+    environment before running the Ansys workbench in both interactive and batch mode.
 
 ### Using Workbench Interactively
 
-For fast things, ANSYS workbench can be invoked interactively on the
-login nodes of Taurus. X11 forwarding needs to enabled when establishing
-the SSH connection. For OpenSSH this option is '-X' and it is valuable
-to use compression of all data via '-C'.
+Ansys workbench (`runwb2`) can be invoked interactively on the login nodes of ZIH systems for short tasks.
+[X11 forwarding](../access/ssh_login.md#x11-forwarding) needs to be enabled when establishing the SSH
+connection. For OpenSSH the corresponding option is `-X` and it is valuable to use compression of
+all data via `-C`.
 
-```
-# Connect to taurus, e.g. ssh -CX
-module load ANSYS/VERSION
-runwb2
+```console
+# SSH connection established using -CX
+marie@login$ module load ANSYS/<version>
+marie@login$ runwb2
 ```
 
-If more time is needed, a CPU has to be allocated like this (see topic
-batch systems **TODO LINK** (BatchSystems) for further information):
+If more time is needed, a CPU has to be allocated like this (see
+[batch systems Slurm](../jobs_and_resources/slurm.md) for further information):
 
+```console
+marie@login$ module load ANSYS/<version>
+marie@login$ srun -t 00:30:00 --x11=first [SLURM_OPTIONS] --pty bash
+[...]
+marie@compute$ runwb2
 ```
-module load ANSYS/VERSION  
-srun -t 00:30:00 --x11=first [SLURM_OPTIONS] --pty bash
-runwb2
-```
-
-**Note:** The software NICE Desktop Cloud Visualization (DCV) enables to
-remotly access OpenGL-3D-applications running on taurus using its GPUs
-(cf. virtual desktops **TODO LINK** (Compendium.VirtualDesktops)). Using ANSYS
-together with dcv works as follows:
-
--   Follow the instructions within virtual
-    desktops **TODO LINK** (Compendium.VirtualDesktops)
 
-```
-module load ANSYS
-```
+!!! hint "Better use DCV"
 
-```
-unset SLURM_GTIDS
-```
+    The software NICE Desktop Cloud Visualization (DCV) enables remote access to
+    OpenGL 3D applications running on ZIH systems using their GPUs
+    (cf. [virtual desktops](virtual_desktops.md)).
 
--   Note the hints w.r.t. GPU support on dcv side
+Ansys can be used under DCV to make use of GPU acceleration. Follow the instructions within
+[virtual desktops](virtual_desktops.md) to set up a DCV session. Then, load an Ansys module, unset
+the environment variable `SLURM_GTIDS`, and finally start the workbench:
 
-```
-runwb2
+```console
+marie@gpu$ module load ANSYS
+marie@gpu$ unset SLURM_GTIDS
+marie@gpu$ runwb2
 ```
 
 ### Using Workbench in Batch Mode
 
-The ANSYS workbench (runwb2) can also be used in a batch script to start
-calculations (the solver, not GUI) from a workbench project into the
-background. To do so, you have to specify the -B parameter (for batch
-mode), -F for your project file, and can then either add different
-commands via -E parameters directly, or specify a workbench script file
-containing commands via -R.
+The Ansys workbench (`runwb2`) can also be used in a job file to start calculations (the solver,
+not GUI) from a workbench project into the background. To do so, you have to specify the `-B`
+parameter (for batch mode), `-F` for your project file, and can then either add different commands
+directly via `-E` parameters, or specify a workbench script file containing commands via `-R`.
 
-**NOTE:** Since the MPI library that ANSYS uses internally (Platform
-MPI) has some problems integrating seamlessly with SLURM, you have to
-unset the enviroment variable SLURM_GTIDS in your job environment before
-running workbench. An example batch script could look like this:
+??? example "Ansys Job File"
 
+    ```bash
     #!/bin/bash
     #SBATCH --time=0:30:00
     #SBATCH --nodes=1
     #SBATCH --ntasks=2
     #SBATCH --mem-per-cpu=1000M
 
+    unset SLURM_GTIDS              # Odd, but necessary!
 
-    unset SLURM_GTIDS         # Odd, but necessary!
-
-    module load ANSYS/VERSION
+    module load ANSYS/<version>
 
     runwb2 -B -F Workbench_Taurus.wbpj -E 'Project.Update' -E 'Save(Overwrite=True)'
     #or, if you wish to use a workbench replay file, replace the -E parameters with: -R mysteps.wbjn
+    ```
 
 ### Running Workbench in Parallel
 
-Unfortunately, the number of CPU cores you wish to use cannot simply be
-given as a command line parameter to your runwb2 call. Instead, you have
-to enter it into an XML file in your home. This setting will then be
-used for all your runwb2 jobs. While it is also possible to edit this
-setting via the Mechanical GUI, experience shows that this can be
-problematic via X-Forwarding and we only managed to use the GUI properly
-via DCV **TODO LINK** (DesktopCloudVisualization), so we recommend you simply edit
-the XML file directly with a text editor of your choice. It is located
+Unfortunately, the number of CPU cores you wish to use cannot simply be given as a command line
+parameter to your `runwb2` call. Instead, you have to enter it into an XML file in your `home`
+directory. This setting will then be **used for all** your `runwb2` jobs. While it is also possible
+to edit this setting via the Mechanical GUI, experience shows that this can be problematic via
+X11-forwarding and we only managed to use the GUI properly via [DCV](virtual_desktops.md), so we
+recommend you simply edit the XML file directly with a text editor of your choice. It is located
 under:
 
-'$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml'
+`$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml`
 
-(mind the space in there.) You might have to adjust the ANSYS Version
-(v181) in the path. In this file, you can find the parameter
+(mind the space in there.) You might have to adjust the Ansys version
+(here `v181`) in the path to your preferred version. In this file, you can find the parameter
 
-    <MaxNumberProcessors>2</MaxNumberProcessors>
+`<MaxNumberProcessors>2</MaxNumberProcessors>`
 
-that you can simply change to something like 16 oder 24. For now, you
-should stay within single-node boundaries, because multi-node
-calculations require additional parameters. The number you choose should
-match your used --cpus-per-task parameter in your sbatch script.
+that you can simply change to something like 16 or 24. For now, you should stay within single-node
+boundaries, because multi-node calculations require additional parameters. The number you choose
+should match the `--cpus-per-task` parameter in your job file.
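+
+If you prefer the command line, the value can also be changed with a quick `sed` call; this is a
+sketch assuming version `v181` and a new value of 16 (mind the quotes because of the space in the
+path):
+
+```console
+marie@login$ sed -i 's|<MaxNumberProcessors>2</MaxNumberProcessors>|<MaxNumberProcessors>16</MaxNumberProcessors>|' "$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml"
+```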
 
 ## COMSOL Multiphysics
 
-"[COMSOL Multiphysics](http://www.comsol.com) (formerly FEMLAB) is a
-finite element analysis, solver and Simulation software package for
-various physics and engineering applications, especially coupled
-phenomena, or multiphysics."
-[\[http://en.wikipedia.org/wiki/COMSOL_Multiphysics Wikipedia\]](
-    http://en.wikipedia.org/wiki/COMSOL_Multiphysics Wikipedia)
+[COMSOL Multiphysics](http://www.comsol.com) (formerly FEMLAB) is a finite element analysis, solver
+and simulation software package for various physics and engineering applications, especially coupled
+phenomena, or multiphysics.
 
-Comsol may be used remotely on ZIH machines or locally on the desktop,
-using ZIH license server.
+COMSOL may be used remotely on ZIH systems or locally on the desktop, using the ZIH license server.
 
-For using Comsol on ZIH machines, the following operating modes (see
-Comsol manual) are recommended:
+For using COMSOL on ZIH systems, we recommend the interactive client-server mode (see COMSOL
+manual).
 
--   Interactive Client Server Mode
+### Client-Server Mode
 
-In this mode Comsol runs as server process on the ZIH machine and as
-client process on your local workstation. The client process needs a
-dummy license for installation, but no license for normal work. Using
-this mode is almost undistinguishable from working with a local
-installation. It works well with Windows clients. For this operation
-mode to work, you must build an SSH tunnel through the firewall of ZIH.
-For further information, see the Comsol manual.
+In this mode, COMSOL runs as server process on the ZIH system and as client process on your local
+workstation. The client process needs a dummy license for installation, but no license for normal
+work. Using this mode is almost indistinguishable from working with a local installation. It also works
+well with Windows clients. For this operation mode to work, you must build an SSH tunnel through the
+firewall of ZIH. For further information, please refer to the COMSOL manual.
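+
+A hypothetical tunnel command, forwarding COMSOL's default server port 2036 to the compute node on
+which the server process (see below) runs, could look like this; please check the port the server
+actually reports when it starts:
+
+```console
+marie@local$ ssh -L 2036:<compute-node>:2036 marie@taurus.hrsk.tu-dresden.de
+```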
 
-Example for starting the server process (4 cores, 10 GB RAM, max. 8
-hours running time):
+### Usage
 
-    module load COMSOL
-    srun -c4 -t 8:00 --mem-per-cpu=2500 comsol -np 4 server
+??? example "Server Process"
 
--   Interactive Job via Batchsystem SLURM
+    Start the server process with 4 cores, 10 GB RAM and max. 8 hours running time using an
+    interactive Slurm job like this:
 
-<!-- -->
+    ```console
+    marie@login$ module load COMSOL
+    marie@login$ srun -n 1 -c 4 --mem-per-cpu=2500 -t 8:00 comsol -np 4 server
+    ```
 
-    module load COMSOL
-    srun -n1 -c4 --mem-per-cpu=2500 -t 8:00 --pty --x11=first comsol -np 4
+??? example "Interactive Job"
+
+    If you'd like to work interactively using COMSOL, you can request an interactive job with,
+    e.g., 4 cores and 2500 MB RAM for 8 hours and X11 forwarding to open the COMSOL GUI:
+
+    ```console
+    marie@login$ module load COMSOL
+    marie@login$ srun -n 1 -c 4 --mem-per-cpu=2500 -t 8:00 --pty --x11=first comsol -np 4
+    ```
 
-Man sollte noch schauen, ob das Rendering unter Options -> Preferences
--> Graphics and Plot Windows auf Software-Rendering steht - und dann
-sollte man campusintern arbeiten knnen.
+    Please make sure that the option *Preferences* --> *Graphics* --> *Rendering* is set to
+    *Software Rendering*. Then, you can work from within the campus network.
 
--   Background Job via Batchsystem SLURM
+??? example "Background Job"
 
-<!-- -->
+    Interactive work is great for debugging and setting up experiments. But if you have a huge
+    workload, you should definitely rely on job files, i.e., you put the necessary steps to get
+    the work done into scripts and submit these scripts to the batch system. These two steps are
+    outlined:
 
+    1. Create a [job file](../jobs_and_resources/slurm.md#job-files), e.g.
+    ```bash
     #!/bin/bash
     #SBATCH --time=24:00:00
     #SBATCH --nodes=2
@@ -251,21 +242,33 @@ sollte man campusintern arbeiten knnen.
 
     module load COMSOL
     srun comsol -mpi=intel batch -inputfile ./MyInputFile.mph
-
-Submit via: `sbatch <filename>`
+    ```
+    2. Submit the job file to the batch system via `sbatch <filename>`.
 
 ## LS-DYNA
 
-Both, the shared memory version and the distributed memory version (mpp)
-are installed on all machines.
+[LS-DYNA](https://www.dynamore.de/de) is a general-purpose, implicit and explicit FEM software for
+nonlinear structural analysis. Both, the shared memory version and the distributed memory version
+(`mpp`) are installed on ZIH systems.
+
+You need a job file (aka. batch script) to run the MPI version.
 
-To run the MPI version on Taurus or Venus you need a batchfile (sumbmit
-with `sbatch <filename>`) like:
+??? example "Minimal Job File"
 
+    ```bash
     #!/bin/bash
-    #SBATCH --time=01:00:00   # walltime
-    #SBATCH --ntasks=16   # number of processor cores (i.e. tasks)
+    #SBATCH --time=01:00:00       # walltime
+    #SBATCH --ntasks=16           # number of processor cores (i.e. tasks)
     #SBATCH --mem-per-cpu=1900M   # memory per CPU core
-    
+
     module load ls-dyna
     srun mpp-dyna i=neon_refined01_30ms.k memory=120000000
+    ```
+
+    Submit the job file to the batch system via
+
+    ```console
+    marie@login$ sbatch <filename>
+    ```
+
+    Please refer to the section [Slurm](../jobs_and_resources/slurm.md) for further details and
+    options on the batch system as well as monitoring commands.
diff --git a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
index 92786013f0382c841eed253c71e4a39cbc1a9b62..38190764e6c9efedb275ec9ff4324d916c851566 100644
--- a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
+++ b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
@@ -190,7 +190,7 @@ There are the following script preparation steps for OmniOpt:
         ```
 
 1. Testing script functionality and determine software requirements for the chosen
-   [partition](../jobs_and_resources/system_taurus.md#partitions). In the following, the alpha
+   [partition](../jobs_and_resources/partitions_and_limits.md). In the following, the alpha
    partition is used. Please note the parameters `--out-layer1`, `--batchsize`, `--epochs` when
    calling the Python script. Additionally, note the `RESULT` string with the output for OmniOpt.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/licenses.md b/doc.zih.tu-dresden.de/docs/software/licenses.md
index af7a4e376f22a0711df8eaff944bd7830367cacd..3173cf98a1b9987c87a74e5175fc7746236613d9 100644
--- a/doc.zih.tu-dresden.de/docs/software/licenses.md
+++ b/doc.zih.tu-dresden.de/docs/software/licenses.md
@@ -1,6 +1,6 @@
 # Use of External Licenses
 
-It is possible (please [contact the support team](../support.md) first) for users to install
+It is possible (please [contact the support team](../support/support.md) first) for users to install
 their own software and use their own license servers, e.g.  FlexLM. The outbound IP addresses from
 ZIH systems are:
 
diff --git a/doc.zih.tu-dresden.de/docs/software/machine_learning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
index ecbb9e146276aff67d6079579f2163fa6d7dbf74..f2e5f24aa9f4f8e5f8fb516310b842584d30a614 100644
--- a/doc.zih.tu-dresden.de/docs/software/machine_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -1,10 +1,9 @@
 # Machine Learning
 
 This is an introduction of how to run machine learning applications on ZIH systems.
-For machine learning purposes, we recommend to use the partitions [Alpha](#alpha-partition) and/or
-[ML](#ml-partition).
+For machine learning purposes, we recommend using the partitions `alpha` and/or `ml`.
 
-## ML Partition
+## Partition `ml`
 
 The compute nodes of the partition ML are built on the base of
 [Power9 architecture](https://www.ibm.com/it-infrastructure/power/power9) from IBM. The system was created
@@ -36,7 +35,7 @@ The following have been reloaded with a version change:  1) modenv/scs5 => moden
 There are tools provided by IBM, that work on partition ML and are related to AI tasks.
 For more information see our [Power AI documentation](power_ai.md).
 
-## Alpha Partition
+## Partition `alpha`
 
 Another partition for machine learning tasks is Alpha. It is mainly dedicated to
 [ScaDS.AI](https://scads.ai/) topics. Each node on Alpha has 2x AMD EPYC CPUs, 8x NVIDIA A100-SXM4
@@ -45,7 +44,7 @@ partition in our [Alpha Centauri](../jobs_and_resources/alpha_centauri.md) docum
 
 ### Modules
 
-On the partition **Alpha** load the module environment:
+On the partition `alpha`, load the module environment:
 
 ```console
 marie@alpha$ module load modenv/hiera
diff --git a/doc.zih.tu-dresden.de/docs/software/mathematics.md b/doc.zih.tu-dresden.de/docs/software/mathematics.md
index 3ae820eda962a63a1ff59c55536865f1437d582a..9629e76b77cd8779a993c6c1f3bc5b0fe68d1140 100644
--- a/doc.zih.tu-dresden.de/docs/software/mathematics.md
+++ b/doc.zih.tu-dresden.de/docs/software/mathematics.md
@@ -1,11 +1,9 @@
 # Mathematics Applications
 
-!!! cite
+!!! cite "Galileo Galilei"
 
     Nature is written in mathematical language.
 
-    (Galileo Galilei)
-
 <!--*Please do not run expensive interactive sessions on the login nodes.  Instead, use* `srun --pty-->
 <!--...` *to let the batch system place it on a compute node.*-->
 
@@ -16,8 +14,8 @@ interface capabilities within a document-like user interface paradigm.
 
 ### Fonts
 
-To remotely use the graphical frontend, you have to add the Mathematica fonts to the local
-fontmanager.
+To remotely use the graphical front-end, you have to add the Mathematica fonts to the local
+font manager.
 
 #### Linux Workstation
 
@@ -149,15 +147,15 @@ srun --pty matlab -nodisplay -r basename_of_your_matlab_script #NOTE: you must o
     While running your calculations as a script this way is possible, it is generally frowned upon,
     because you are occupying Matlab licenses for the entire duration of your calculation when doing so.
     Since the available licenses are limited, it is highly recommended you first compile your script via
-    the Matlab Compiler (mcc) before running it for a longer period of time on our systems.  That way,
+    the Matlab Compiler (`mcc`) before running it for a longer period of time on our systems.  That way,
     you only need to check-out a license during compile time (which is relatively short) and can run as
     many instances of your calculation as you'd like, since it does not need a license during runtime
     when compiled to a binary.
 
 You can find detailed documentation on the Matlab compiler at
-[Mathworks' help pages](https://de.mathworks.com/help/compiler/).
+[MathWorks' help pages](https://de.mathworks.com/help/compiler/).
 
-### Using the MATLAB Compiler (mcc)
+### Using the MATLAB Compiler
 
 Compile your `.m` script into a binary:
 
@@ -184,12 +182,12 @@ zih$ srun ./run_compiled_executable.sh $EBROOTMATLAB
 -   If you want to run your code in parallel, please request as many
     cores as you need!
 -   start a batch job with the number N of processes
--   example for N= 4: \<pre> srun -c 4 --pty --x11=first bash\</pre>
+-   example for N=4: `srun -c 4 --pty --x11=first bash`
 -   run Matlab with the GUI or the CLI or with a script
--   inside use \<pre>matlabpool open 4\</pre> to start parallel
+-   inside use `matlabpool open 4` to start parallel
     processing
 
--   example for 1000\*1000 matrixmutliplication
+-   example for 1000*1000 matrix multiplication
 
 !!! example
 
@@ -201,13 +199,13 @@ zih$ srun ./run_compiled_executable.sh $EBROOTMATLAB
 -   to close parallel task:
 `matlabpool close`
 
-#### With Parfor
+#### With parfor
 
 - start a batch job with the number N of processes (e.g. N=12)
 - inside use `matlabpool open N` or
   `matlabpool(N)` to start parallel processing. It will use
   the 'local' configuration by default.
-- Use 'parfor' for a parallel loop, where the **independent** loop
+- Use `parfor` for a parallel loop, where the **independent** loop
   iterations are processed by N threads
 
 !!! example
diff --git a/Compendium_attachments/FEMSoftware/ABAQUS-SLURM.pdf b/doc.zih.tu-dresden.de/docs/software/misc/ABAQUS-SLURM.pdf
similarity index 100%
rename from Compendium_attachments/FEMSoftware/ABAQUS-SLURM.pdf
rename to doc.zih.tu-dresden.de/docs/software/misc/ABAQUS-SLURM.pdf
diff --git a/Compendium_attachments/FEMSoftware/Rot-modell-BenjaminGroeger.inp b/doc.zih.tu-dresden.de/docs/software/misc/Rot-modell-BenjaminGroeger.inp
similarity index 100%
rename from Compendium_attachments/FEMSoftware/Rot-modell-BenjaminGroeger.inp
rename to doc.zih.tu-dresden.de/docs/software/misc/Rot-modell-BenjaminGroeger.inp
diff --git a/doc.zih.tu-dresden.de/docs/software/power_ai.md b/doc.zih.tu-dresden.de/docs/software/power_ai.md
index 37de0d0a05ecf8113b86ca9a550285184cf202a7..b4beda5cec2b8b2e1ede4729df7434b6e8c8e7d5 100644
--- a/doc.zih.tu-dresden.de/docs/software/power_ai.md
+++ b/doc.zih.tu-dresden.de/docs/software/power_ai.md
@@ -5,7 +5,7 @@ the PowerAI Framework for Machine Learning. In the following the links
 are valid for PowerAI version 1.5.4.
 
 !!! warning
-    The information provided here is available from IBM and can be used on `ml` partition only!
+    The information provided here is available from IBM and can be used on partition `ml` only!
 
 ## General Overview
 
@@ -47,7 +47,7 @@ are valid for PowerAI version 1.5.4.
   (Open Neural Network Exchange) provides support for moving models
   between those frameworks.
 - [Distributed Deep Learning](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.5.4/navigation/pai_getstarted_ddl.html?view=kc)
-  Distributed Deep Learning (DDL). Works on up to 4 nodes on `ml` partition.
+  Distributed Deep Learning (DDL). Works on up to 4 nodes on partition `ml`.
 
 ## PowerAI Container
 
diff --git a/doc.zih.tu-dresden.de/docs/software/pytorch.md b/doc.zih.tu-dresden.de/docs/software/pytorch.md
index e8e2c4d5ecc7d123527a15140910005204a3d5ef..3c2e88a6c9fc209c246ede0e50410771be541c3f 100644
--- a/doc.zih.tu-dresden.de/docs/software/pytorch.md
+++ b/doc.zih.tu-dresden.de/docs/software/pytorch.md
@@ -15,14 +15,14 @@ marie@login$ module spider pytorch
 
 to find out, which PyTorch modules are available on your partition.
 
-We recommend using **Alpha** and/or **ML** partitions when working with machine learning workflows
+We recommend using partitions `alpha` and/or `ml` when working with machine learning workflows
 and the PyTorch library.
 You can find detailed hardware specification in our
-[hardware documentation](../jobs_and_resources/hardware_taurus.md).
+[hardware documentation](../jobs_and_resources/hardware_overview.md).
 
 ## PyTorch Console
 
-On the **Alpha** partition, load the module environment:
+On the partition `alpha`, load the module environment:
 
 ```console
 marie@login$ srun -p alpha --gres=gpu:1 -n 1 -c 7 --pty --mem-per-cpu=800 bash #Job submission on alpha nodes with 1 gpu on 1 node with 800 Mb per CPU
@@ -33,8 +33,8 @@ Die folgenden Module wurden in einer anderen Version erneut geladen:
 Module GCC/10.2.0, CUDA/11.1.1, OpenMPI/4.0.5, PyTorch/1.9.0 and 54 dependencies loaded.
 ```
 
-??? hint "Torchvision on alpha partition"
-    On the **Alpha** partition, the module torchvision is not yet available within the module
+??? hint "Torchvision on partition `alpha`"
+    On the partition `alpha`, the module torchvision is not yet available within the module
     system. (19.08.2021)
     Torchvision can be made available by using a virtual environment:
 
@@ -44,10 +44,10 @@ Module GCC/10.2.0, CUDA/11.1.1, OpenMPI/4.0.5, PyTorch/1.9.0 and 54 dependencies
     marie@alpha$ pip install torchvision --no-deps
     ```
 
-    Using the **--no-deps** option for "pip install" is necessary here as otherwise the PyTorch 
+    Using the `--no-deps` option for `pip install` is necessary here as otherwise the PyTorch
     version might be replaced and you will run into trouble with the cuda drivers.
 
-On the **ML** partition:
+On the partition `ml`:
 
 ```console
 marie@login$ srun -p ml --gres=gpu:1 -n 1 -c 7 --pty --mem-per-cpu=800 bash    #Job submission in ml nodes with 1 gpu on 1 node with 800 Mb per CPU
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorboard.md b/doc.zih.tu-dresden.de/docs/software/tensorboard.md
index a1fab030bfbca20b1a8f69cf302e95957b565185..d2c838d3961d8f48794e544ce1ca7846d24e7325 100644
--- a/doc.zih.tu-dresden.de/docs/software/tensorboard.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensorboard.md
@@ -81,4 +81,4 @@ marie@local$ ssh -N -f -L 6006:taurusi8034.taurus.hrsk.tu-dresden.de:6006 <zih-l
 
 Now, you can see the TensorBoard in your browser at `http://localhost:6006/`.
 
-Note that you can also use TensorBoard in an [sbatch file](../jobs_and_resources/batch_systems.md).
+Note that you can also use TensorBoard in an [sbatch file](../jobs_and_resources/slurm.md).
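+
+A minimal sketch of such an sbatch file could look like the following; the module name, the log
+directory, and the resource requests are assumptions and have to be adapted to your setup:
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=2
+#SBATCH --time=02:00:00
+
+# assumption: TensorBoard is provided by a TensorFlow module
+module load TensorFlow
+
+# serve the logs written by your training job; adapt path and options to your setup
+tensorboard --logdir=/scratch/ws/0/marie-tensorboard/logs --port=6006 --bind_all
+```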
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorflow.md b/doc.zih.tu-dresden.de/docs/software/tensorflow.md
index d8ad85c3b1a5f870f5ced0848274fb866bd14dff..09a8352a32648178f3634a4099eee52ad6c0ccd0 100644
--- a/doc.zih.tu-dresden.de/docs/software/tensorflow.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensorflow.md
@@ -19,7 +19,7 @@ TensorFlow 2 and TensorFlow 1, see the corresponding [section below](#compatibil
 
 We recommend using partitions **Alpha** and/or **ML** when working with machine learning workflows
 and the TensorFlow library. You can find detailed hardware specification in our
-[Hardware](../jobs_and_resources/hardware_taurus.md) documentation.
+[Hardware](../jobs_and_resources/hardware_overview.md) documentation.
 
 ## TensorFlow Console
 
diff --git a/doc.zih.tu-dresden.de/docs/software/virtual_machines.md b/doc.zih.tu-dresden.de/docs/software/virtual_machines.md
index 9fd64d01dddbfde3119b74fa0e8f9decfe5b49f0..c6c660d3c5ac052f3362ad950f6ad395e4420bdf 100644
--- a/doc.zih.tu-dresden.de/docs/software/virtual_machines.md
+++ b/doc.zih.tu-dresden.de/docs/software/virtual_machines.md
@@ -45,10 +45,10 @@ times till it succeeds.
 bash-4.2$ cat /tmp/marie_2759627/activate
 #!/bin/bash
 
-if ! grep -q -- "Key for the VM on the ml partition" "/home/rotscher/.ssh/authorized_keys" &gt;& /dev/null; then
+if ! grep -q -- "Key for the VM on the partition ml" "/home/marie/.ssh/authorized_keys" >& /dev/null; then
   cat "/tmp/marie_2759627/kvm.pub" >> "/home/marie/.ssh/authorized_keys"
 else
-  sed -i "s|.*Key for the VM on the ml partition.*|ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3siZfQ6vQ6PtXPG0RPZwtJXYYFY73TwGYgM6mhKoWHvg+ZzclbBWVU0OoU42B3Ddofld7TFE8sqkHM6M+9jh8u+pYH4rPZte0irw5/27yM73M93q1FyQLQ8Rbi2hurYl5gihCEqomda7NQVQUjdUNVc6fDAvF72giaoOxNYfvqAkw8lFyStpqTHSpcOIL7pm6f76Jx+DJg98sXAXkuf9QK8MurezYVj1qFMho570tY+83ukA04qQSMEY5QeZ+MJDhF0gh8NXjX/6+YQrdh8TklPgOCmcIOI8lwnPTUUieK109ndLsUFB5H0vKL27dA2LZ3ZK+XRCENdUbpdoG2Czz Key for the VM on the ml partition|" "/home/marie/.ssh/authorized_keys"
+  sed -i "s|.*Key for the VM on the partition ml.*|ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3siZfQ6vQ6PtXPG0RPZwtJXYYFY73TwGYgM6mhKoWHvg+ZzclbBWVU0OoU42B3Ddofld7TFE8sqkHM6M+9jh8u+pYH4rPZte0irw5/27yM73M93q1FyQLQ8Rbi2hurYl5gihCEqomda7NQVQUjdUNVc6fDAvF72giaoOxNYfvqAkw8lFyStpqTHSpcOIL7pm6f76Jx+DJg98sXAXkuf9QK8MurezYVj1qFMho570tY+83ukA04qQSMEY5QeZ+MJDhF0gh8NXjX/6+YQrdh8TklPgOCmcIOI8lwnPTUUieK109ndLsUFB5H0vKL27dA2LZ3ZK+XRCENdUbpdoG2Czz Key for the VM on the partition ml|" "/home/marie/.ssh/authorized_keys"
 fi
 
 ssh -i /tmp/marie_2759627/kvm root@192.168.0.6
diff --git a/doc.zih.tu-dresden.de/docs/specific_software.md b/doc.zih.tu-dresden.de/docs/specific_software.md
deleted file mode 100644
index fd98e303e5448ae7ce128ddfbc4e78c63e754075..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/specific_software.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Use of Specific Software (packages, libraries, etc)
-
-## Modular System
-
-The modular concept is the easiest way to work with the software on Taurus. It allows to user to
-switch between different versions of installed programs and provides utilities for the dynamic
-modification of a user's environment. The information can be found [here]**todo link**.
-
-### Private project and user modules files
-
-[Private project module files]**todo link** allow you to load your group-wide installed software
-into your environment and to handle different versions. It allows creating your own software
-environment for the project. You can create a list of modules that will be loaded for every member
-of the team. It gives opportunity on unifying work of the team and defines the reproducibility of
-results. Private modules can be loaded like other modules with module load.
-
-[Private user module files]**todo link** allow you to load your own installed software into your
-environment. It works in the same manner as to project modules but for your private use.
-
-## Use of containers
-
-[Containerization]**todo link** encapsulating or packaging up software code and all its dependencies
-to run uniformly and consistently on any infrastructure. On Taurus [Singularity]**todo link** used
-as a standard container solution. Singularity enables users to have full control of their
-environment. This means that you don’t have to ask an HPC support to install anything for you - you
-can put it in a Singularity container and run! As opposed to Docker (the most famous container
-solution), Singularity is much more suited to being used in an HPC environment and more efficient in
-many cases. Docker containers can easily be used in Singularity. Information about the use of
-Singularity on Taurus can be found [here]**todo link**.
-
-In some cases using Singularity requires a Linux machine with root privileges (e.g. using the ml
-partition), the same architecture and a compatible kernel. For many reasons, users on Taurus cannot
-be granted root permissions. A solution is a Virtual Machine (VM) on the ml partition which allows
-users to gain root permissions in an isolated environment. There are two main options on how to work
-with VM on Taurus:
-
-  1. [VM tools]**todo link**. Automative algorithms for using virtual machines;
-  1. [Manual method]**todo link**. It required more operations but gives you more flexibility and reliability.
-
-Additional Information: Examples of the definition for the Singularity container ([here]**todo
-link**) and some hints ([here]**todo link**).
-
-Useful links: [Containers]**todo link**, [Custom EasyBuild Environment]**todo link**, [Virtual
-machine on Taurus]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/support.md b/doc.zih.tu-dresden.de/docs/support.md
deleted file mode 100644
index d85f71226115f277cef27bdb6841e276e85ec1d9..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/support.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# What if everything didn't help?
-
-## Create a Ticket: how do I do that?
-
-The best way to ask about the help is to create a ticket. In order to do that you have to write a
-message to the <a href="mailto:hpcsupport@zih.tu-dresden.de">hpcsupport@zih.tu-dresden.de</a> with a
-detailed description of your problem. If possible please add logs, used environment and write a
-minimal executable example for the purpose to recreate the error or issue.
-
-## Communication with HPC Support
-
-There is the HPC support team who is responsible for the support of HPC users and stable work of the
-cluster. You could find the [details]**todo link** in the right part of any page of the compendium.
-However, please, before the contact with the HPC support team check the documentation carefully
-(starting points: [main page]**todo link**, [HPC-DA]**todo link**), use a search and then create a
-ticket. The ticket is a preferred way to solve the issue, but in some terminable cases, you can call
-to ask for help.
-
-Useful link: [Further Documentation]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/support/news_archive.md b/doc.zih.tu-dresden.de/docs/support/news_archive.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/doc.zih.tu-dresden.de/docs/support/support.md b/doc.zih.tu-dresden.de/docs/support/support.md
new file mode 100644
index 0000000000000000000000000000000000000000..c2c9fbda8bbb70c1dddb82fb384b69a8201e6fb8
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/support/support.md
@@ -0,0 +1,31 @@
+# How to Ask for Support
+
+## Create a Ticket
+
+The best way to ask for help is to send a message to
+[hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de) with a
+detailed description of your problem.
+
+It should include:
+
+- Who is reporting? (login name)
+- Where have you seen the problem? (name of the HPC system and/or of the node)
+- When did the issue occur? If known, when did it work last?
+- What exactly happened?
+
+If possible, also include (see the example commands below):
+
+- job ID,
+- batch script,
+- filesystem path,
+- loaded modules and environment,
+- output and error logs,
+- steps to reproduce the error.
+
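+The following commands may help to collect some of this information; `<jobid>` is a placeholder
+for the ID of the affected job:
+
+```console
+marie@login$ squeue --user=$USER        # overview of your pending and running jobs
+marie@login$ scontrol show job <jobid>  # detailed information on a particular job
+marie@login$ module list                # currently loaded modules
+```
+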
+This email automatically opens a trouble ticket which will be tracked by the HPC team. Please
+always keep the ticket number in the subject of your replies so that our system can keep track
+of our communication.
+
+For a new request, please simply send a new email (without any ticket number).
+
+!!! hint "Please try to find an answer in this documentation first."
diff --git a/doc.zih.tu-dresden.de/docs/tests.md b/doc.zih.tu-dresden.de/docs/tests.md
deleted file mode 100644
index 7601eb3748d21ce8d414cdb24c7ebef9c0a68cd4..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/tests.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Tests
-
-Dies ist eine Seite zum Testen der Markdown-Syntax.
-
-```python
-import os
-
-def debug(mystring):
-  print("Debug: ", mystring)
-
-debug("Dies ist ein Syntax-Highligthing-Test")
-```
diff --git a/doc.zih.tu-dresden.de/hackathon.md b/doc.zih.tu-dresden.de/hackathon.md
index 4a49d2b68ede0134d9672d6b8513ceb8d0210060..d41781c45455139c62708c708cf42e05babc3b65 100644
--- a/doc.zih.tu-dresden.de/hackathon.md
+++ b/doc.zih.tu-dresden.de/hackathon.md
@@ -10,21 +10,21 @@ The goals for the hackathon are:
 
 ## twiki2md
 
-The script `twiki2md` converts twiki source files into markdown source files using pandoc. It outputs the
-markdown source files according to the old pages tree into subdirectories. The output and **starting
-point for transferring** old content into the new system can be found at branch `preview` within
-directory `twiki2md/root/`.
+The script `twiki2md` converts twiki source files into markdown source files using pandoc. It
+writes the markdown source files into subdirectories that mirror the old pages tree. The
+output and **starting point for transferring** old content into the new system can be found
+on branch `preview` within directory `twiki2md/root/`.
 
 ## Steps
 
 ### Familiarize with New Wiki System
 
-* Make sure your are member of the [repository](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium).
+* Make sure you are a member of the [repository](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium).
   If not, ask Danny Rotscher for adding you.
 * Clone repository and checkout branch `preview`
 
 ```Shell Session
-~ git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpc-compendium/hpc-compendium.git
+~ git clone git@gitlab.hrz.tu-chemnitz.de:zih/hpcsupport/hpc-compendium.git
 ~ cd hpc-compendium
 ~ git checkout preview
 ```
@@ -38,23 +38,27 @@ directory `twiki2md/root/`.
 1. Grab a markdown source file from `twiki2md/root/` directory (a topic you are comfortable with)
 1. Find place in new structure according to
 [Typical Project Schedule](https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/TypicalProjectSchedule)
-  * Create new feature branch holding your work `~ git checkout -b <BRANCHNAME>`, whereas branch name can
-      be `<FILENAME>` for simplicity
+
+  * Create a new feature branch holding your work via `~ git checkout -b <BRANCHNAME>`, where the
+      branch name can be `<FILENAME>` for simplicity
   * Copy reviewed markdown source file to `docs/` directory via
     `~ git mv twiki2md/root/<FILENAME>.md doc.zih.tu-dresden.de/docs/<SUBDIR>/<FILENAME>.md`
   * Update navigation section in `mkdocs.yaml`
+
 1. Commit and push to feature branch via
+
 ```Shell Session
 ~ git commit docs/<SUBDIR>/<FILENAME>.md mkdocs.yaml -m "MESSAGE"
 ~ git push origin <BRANCHNAME>
 ```
+
 1. Run checks locally and fix the issues. Otherwise the pipeline will fail.
     * [Check links](README.md#check-links) (There might be broken links which can only be solved
         with ongoing transfer of content.)
     * [Check pages structure](README.md#check-pages-structure)
     * [Markdown Linter](README.md#markdown-linter)
 1. Create
-  [merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/merge_requests)
+  [merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/merge_requests)
    against `preview` branch
 
 ### Review Content
diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index 26c1381d36dfe624b99357c220566838ac0f727f..ae931caec53b198e49a6a9837431bbc579c6288d 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -1,5 +1,4 @@
 nav:
-
   - Home: index.md
   - Application for Login and Resources:
     - Overview: application/overview.md
@@ -19,7 +18,7 @@ nav:
     - Security Restrictions: access/security_restrictions.md
   - Transfer of Data:
     - Overview: data_transfer/overview.md
-    - Data Mover: data_transfer/datamover.md
+    - Datamover: data_transfer/datamover.md
     - Export Nodes: data_transfer/export_nodes.md
   - Environment and Software:
     - Overview: software/overview.md
@@ -85,41 +84,24 @@ nav:
     - Structuring Experiments: data_lifecycle/experiments.md
   - HPC Resources and Jobs:
     - Overview: jobs_and_resources/overview.md
-    - Batch Systems: jobs_and_resources/batch_systems.md
     - HPC Resources:
-      - Hardware Taurus: jobs_and_resources/hardware_taurus.md
+      - Overview: jobs_and_resources/hardware_overview.md
       - AMD Rome Nodes: jobs_and_resources/rome_nodes.md
       - IBM Power9 Nodes: jobs_and_resources/power9.md
       - NVMe Storage: jobs_and_resources/nvme_storage.md
       - Alpha Centauri: jobs_and_resources/alpha_centauri.md
       - HPE Superdome Flex: jobs_and_resources/sd_flex.md
-    - Checkpoint/Restart: jobs_and_resources/checkpoint_restart.md
-    - Overview2: jobs_and_resources/index.md
-    - Taurus: jobs_and_resources/system_taurus.md
-    - Slurm Examples: jobs_and_resources/slurm_examples.md
-    - Slurm: jobs_and_resources/slurm.md
-    - Binding And Distribution Of Tasks: jobs_and_resources/binding_and_distribution_of_tasks.md
-
-      #    - Queue Policy: jobs/policy.md
-
-      #    - Examples: jobs/examples/index.md
-
-      #    - Affinity: jobs/affinity/index.md
-
-      #    - Interactive: jobs/interactive.md
-
-      #    - Best Practices: jobs/best-practices.md
-
-      #    - Reservations: jobs/reservations.md
-
-      #    - Monitoring: jobs/monitoring.md
-
-      #    - FAQs: jobs/jobs-faq.md
-
-  #- Tests: tests.md
-
-  - Support: support.md
-  - Archive:
+    - Running Jobs:
+      - Batch System Slurm: jobs_and_resources/slurm.md
+      - Job Examples: jobs_and_resources/slurm_examples.md
+      - Partitions and Limits: jobs_and_resources/partitions_and_limits.md
+      - Checkpoint/Restart: jobs_and_resources/checkpoint_restart.md
+      - Job Profiling: jobs_and_resources/slurm_profiling.md
+      - Binding And Distribution Of Tasks: jobs_and_resources/binding_and_distribution_of_tasks.md
+  - Support:
+    - How to Ask for Support: support/support.md
+    - News Archive: support/news_archive.md
+  - Archive of the Old Wiki:
     - Overview: archive/overview.md
     - Bio Informatics: archive/bioinformatics.md
     - CXFS End of Support: archive/cxfs_end_of_support.md
@@ -153,13 +135,13 @@ site_name: ZIH HPC Compendium
 site_description: ZIH HPC Compendium
 site_author: ZIH Team
 site_dir: public
-site_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium
+site_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium
 
 # uncomment next 3 lines if link to repo should not be displayed in the navbar
 
 repo_name: GitLab hpc-compendium
-repo_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium
-edit_uri: blob/master/docs/
+repo_url: https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium
+edit_uri: blob/main/doc.zih.tu-dresden.de/docs/
 
 # Configuration
 
diff --git a/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css b/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css
index 0fb1a3d46afe20b02e3fd9a03daf5b716819ad61..a3a992501bff7f7b153a1beb0779e7f3e576f9e6 100644
--- a/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css
+++ b/doc.zih.tu-dresden.de/tud_theme/stylesheets/extra.css
@@ -28,19 +28,24 @@
 .md-typeset h5 {
     font-family: 'Open Sans Semibold';
     line-height: 130%;
+    margin: 0.2em;
 }
 
 .md-typeset h1 {
     font-family: 'Open Sans Regular';
-    font-size: 1.6rem;   
+    font-size: 1.6rem;
+    margin-bottom: 0.5em;
 }
 
 .md-typeset h2 {
-    font-size: 1.4rem;
+    font-size: 1.2rem;
+    margin: 0.5em;
+    border-bottom-style: solid;
+    border-bottom-width: 1px;
 }
 
 .md-typeset h3 {
-    font-size: 1.2rem;
+    font-size: 1.1rem;
 }
 
 .md-typeset h4 {
@@ -48,8 +53,8 @@
 }
 
 .md-typeset h5 {
-    font-size: 0.9rem;
-    line-height: 120%;
+    font-size: 0.8rem;
+    text-transform: initial;
 }
 
 strong {
@@ -161,6 +166,7 @@ hr.solid {
 
 p {
     padding: 0 0.6rem;
+    margin: 0.2em;
 }
 /* main */
 
diff --git a/doc.zih.tu-dresden.de/util/check-links.sh b/doc.zih.tu-dresden.de/util/check-links.sh
index e553f9c4828a2286a5f053181dd09eaaa28746ad..0a0b47e6e3ede378fe6634696610498d608c5389 100755
--- a/doc.zih.tu-dresden.de/util/check-links.sh
+++ b/doc.zih.tu-dresden.de/util/check-links.sh
@@ -42,10 +42,13 @@ fi
 any_fails=false
 
 files=$(git diff --name-only "$(git merge-base HEAD "$branch")")
+echo "Check files:"
+echo "$files"
+echo ""
 for f in $files; do
   if [ "${f: -3}" == ".md" ]; then
     # do not check links for deleted files
-    if [ -e x.txt ]; then
+    if [ -e "$f" ]; then
       echo "Checking links for $f"
       if ! $mlc -q -p "$f"; then
         any_fails=true
diff --git a/doc.zih.tu-dresden.de/util/check-spelling.sh b/doc.zih.tu-dresden.de/util/check-spelling.sh
index 7fa9d2824d4a61ce86ae258d656acfe90c574269..8448d0bbffe534b0fd676dbd00ca82e17e7d167d 100755
--- a/doc.zih.tu-dresden.de/util/check-spelling.sh
+++ b/doc.zih.tu-dresden.de/util/check-spelling.sh
@@ -70,7 +70,8 @@ function isMistakeCountIncreasedByChanges(){
         fi
         if [ $current_count -gt $previous_count ]; then
           echo "-- File $newfile"
-          echo "Change increases spelling mistake count (from $previous_count to $current_count)"
+          echo "Change increases spelling mistake count (from $previous_count to $current_count), misspelled/unknown words:"
+          getAspellOutput < "$newfile"
           any_fails=true
         fi
       fi
diff --git a/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh b/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
index 950579f5356d4efd06006386e2cec84381592882..456eb55e192634bf4e159ce0096c83076989f2fc 100755
--- a/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
+++ b/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
@@ -15,10 +15,10 @@ basedir=`dirname "$basedir"`
 ruleset="i	\<io\>	\.io
 s	\<SLURM\>
 i	file \+system	HDFS
-i	\<taurus\>	taurus\.hrsk	/taurus
+i	\<taurus\>	taurus\.hrsk	/taurus	/TAURUS
 i	\<hrskii\>
-i	hpc \+system
 i	hpc[ -]\+da\>
+i	\(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\)-\?\(interactive\)\?[^a-z]*partition
 i	work[ -]\+space"
 
 # Whitelisted files will be ignored
diff --git a/doc.zih.tu-dresden.de/util/pre-commit b/doc.zih.tu-dresden.de/util/pre-commit
new file mode 100755
index 0000000000000000000000000000000000000000..043320f352b923a7e7be96c04de5914960285b65
--- /dev/null
+++ b/doc.zih.tu-dresden.de/util/pre-commit
@@ -0,0 +1,71 @@
+#!/bin/bash
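+# Git pre-commit hook: runs the documentation checks (markdown linter, link check, spell check,
+# forbidden words) in the hpc-compendium Docker container and rejects the commit if a check fails.
+# One way to enable it is to link this script into your local clone, e.g.:
+#   ln -s ../../doc.zih.tu-dresden.de/util/pre-commit .git/hooks/pre-commit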
+exit_ok=yes
+files=$(git diff-index --cached --name-only HEAD)
+
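+# Check that a markdown file referenced in mkdocs.yml exists below doc.zih.tu-dresden.de/docs/;
+# report the path and fail if it does not.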
+function testPath(){
+  path_to_test=doc.zih.tu-dresden.de/docs/$1
+  test -f "$path_to_test" || { echo "$path_to_test does not exist"; return 1; }
+}
+
+if ! docker image inspect hpc-compendium:latest > /dev/null 2>&1
+then
+  echo "Container not built, building..."
+  docker build -t hpc-compendium .
+fi
+
+export -f testPath
+
+for file in $files
+do
+  if [ "$file" == doc.zih.tu-dresden.de/mkdocs.yml ]
+  then
+    echo "Testing $file"
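+    # Extract every markdown path referenced in the nav section of mkdocs.yml and check that it exists.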
+    sed -n '/^ *- /s#.*: \([A-Za-z_/]*.md\).*#\1#p' doc.zih.tu-dresden.de/mkdocs.yml | xargs -L1 -I {} bash -c "testPath '{}'"
+    if [ $? -ne 0 ]
+    then
+      exit_ok=no
+    fi
+  elif [[ $file =~ ^doc.zih.tu-dresden.de/(.*.md)$ ]]
+  then
+    filepattern=${BASH_REMATCH[1]}
+
+    #lint
+    echo "Checking linter..."
+    docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdownlint $filepattern
+    if [ $? -ne 0 ]
+    then
+      exit_ok=no
+    fi
+
+    #link-check
+    echo "Checking links..."
+    docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)"/doc.zih.tu-dresden.de,target=/docs,type=bind hpc-compendium markdown-link-check $filepattern
+    if [ $? -ne 0 ]
+    then
+      exit_ok=no
+    fi
+  fi
+done
+
+#spell-check
+echo "Spell-checking..."
+docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)",target=/docs,type=bind hpc-compendium ./doc.zih.tu-dresden.de/util/check-spelling.sh
+if [ $? -ne 0 ]
+then
+  exit_ok=no
+fi
+
+#forbidden words checking
+echo "Forbidden words checking..."
+docker run --name=hpc-compendium --rm -w /docs --mount src="$(pwd)",target=/docs,type=bind hpc-compendium ./doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
+if [ $? -ne 0 ]
+then
+  exit_ok=no
+fi
+
+if [ $exit_ok == yes ]
+then
+  exit 0
+else
+  exit 1
+fi
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 682a5ab8ab41f264acbf294615da9d0b30096deb..0317b567a804e41c33890d973993dc5d3e1a1745 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -1,13 +1,16 @@
 personal_ws-1.1 en 203 
 Abaqus
+ALLREDUCE
 Altix
 Amber
 Amdahl's
 analytics
 Analytics
 anonymized
+Ansys
 APIs
 AVX
+awk
 BeeGFS
 benchmarking
 BLAS
@@ -18,15 +21,22 @@ CCM
 ccNUMA
 centauri
 CentOS
+CFX
 cgroups
 checkpointing
 Chemnitz
 citable
 CLI
+COMSOL
 conda
+config
+CONFIG
+cpu
 CPU
 CPUID
+cpus
 CPUs
+crossentropy
 css
 CSV
 CUDA
@@ -38,9 +48,14 @@ dataframes
 DataFrames
 datamover
 DataParallel
+dataset
+DCV
+ddl
 DDP
 DDR
 DFG
+dir
+distr
 DistributedDataParallel
 DMTCP
 DNS
@@ -80,29 +95,39 @@ GitLab
 GitLab's
 glibc
 gnuplot
+gpu
 GPU
 GPUs
+gres
 GROMACS
+GUIs
 hadoop
 haswell
 HBM
 HDF
 HDFS
 HDFView
+hiera
+horovod
 Horovod
+horovodrun
 hostname
 Hostnames
 HPC
 HPE
 HPL
 html
+hvd
 hyperparameter
 hyperparameters
+hyperthreading
 icc
 icpc
 ifort
 ImageNet
+img
 Infiniband
+init
 inode
 IPs
 Itanium
@@ -114,9 +139,11 @@ JupyterHub
 JupyterLab
 Keras
 KNL
+Kunststofftechnik
 LAMMPS
 LAPACK
 lapply
+Leichtbau
 LINPACK
 linter
 Linter
@@ -130,12 +157,14 @@ MathKernel
 MathWorks
 matlab
 MEGWARE
+mem
 MiB
 MIMD
 Miniconda
 mkdocs
 MKL
 MNIST
+modenv
 Montecito
 mountpoint
 mpi
@@ -147,9 +176,11 @@ mpif
 mpifort
 mpirun
 multicore
+multiphysics
+Multiphysics
 multithreaded
-MultiThreading
 Multithreading
+MultiThreading
 NAMD
 natively
 nbsp
@@ -157,7 +188,11 @@ NCCL
 Neptun
 NFS
 NGC
+nodelist
+NODELIST
 NRINGS
+ntasks
+NUM
 NUMA
 NUMAlink
 NumPy
@@ -196,14 +231,19 @@ PMI
 png
 PowerAI
 ppc
-Pre
 pre
+Pre
 Preload
 preloaded
 preloading
+preprocessing
 PSOCK
 Pthreads
+pty
 pymdownx
+PythonAnaconda
+pytorch
+PyTorch
 Quantum
 queue
 randint
@@ -211,6 +251,7 @@ reachability
 README
 reproducibility
 requeueing
+resnet
 RHEL
 Rmpi
 rome
@@ -220,8 +261,8 @@ RSS
 RStudio
 Rsync
 runnable
-Runtime
 runtime
+Runtime
 salloc
 Sandybridge
 Saxonid
@@ -261,6 +302,7 @@ SXM
 TBB
 TCP
 TensorBoard
+tensorflow
 TensorFlow
 TFLOPS
 Theano
@@ -273,8 +315,10 @@ tracefile
 tracefiles
 transferability
 Trition
+undistinguishable
 unencrypted
 uplink
+userspace
 Vampir
 VampirTrace
 VampirTrace's