diff --git a/doc.zih.tu-dresden.de/docs/software/fem_software.md b/doc.zih.tu-dresden.de/docs/software/fem_software.md
index bd65ea9832462bae475841f2e3ed2fa8193e3355..843530a570593edb7e74790aece6f8385da63134 100644
--- a/doc.zih.tu-dresden.de/docs/software/fem_software.md
+++ b/doc.zih.tu-dresden.de/docs/software/fem_software.md
@@ -1,247 +1,238 @@
 # FEM Software
 
-For an up-to-date list of the installed software versions on our
-cluster, please refer to SoftwareModulesList **TODO LINK** (SoftwareModulesList).
+!!! hint "Its all in the modules"
 
-## Abaqus
-
-[ABAQUS](http://www.hks.com) **TODO links to realestate site** is a general-purpose finite-element program
-designed for advanced linear and nonlinear engineering analysis
-applications with facilities for linking-in user developed material
-models, elements, friction laws, etc.
-
-Eike Dohmen (from Inst.f. Leichtbau und Kunststofftechnik) sent us the
-attached description of his ABAQUS calculations. Please try to adapt
-your calculations in that way.\<br />Eike is normally a Windows-User and
-his description contains also some hints for basic Unix commands. (
-ABAQUS-SLURM.pdf **TODO LINK** (%ATTACHURL%/ABAQUS-SLURM.pdf) - only in German)
-
-Please note: Abaqus calculations should be started with a batch script.
-Please read the information about the Batch System **TODO LINK **  (BatchSystems)
-SLURM.
-
-The detailed Abaqus documentation can be found at
-abaqus **TODO LINK MISSING** (only accessible from within the
-TU Dresden campus net).
+    All packages described in this section are organized in so-called modules. To list the available
+    versions of a package and load a particular version, e.g., of ANSYS, invoke the commands
 
-**Example - Thanks to Benjamin Groeger, Inst. f. Leichtbau und
-Kunststofftechnik) **
-
-1. Prepare an Abaqus input-file (here the input example from Benjamin)
-
-Rot-modell-BenjaminGroeger.inp **TODO LINK**  (%ATTACHURL%/Rot-modell-BenjaminGroeger.inp)
-
-2. Prepare a batch script on taurus like this
-
-```
-#!/bin/bash<br>
-### Thanks to Benjamin Groeger, Institut fuer Leichtbau und Kunststofftechnik, 38748<br />### runs on taurus and needs ca 20sec with 4cpu<br />### generates files:
-###  yyyy.com
-###  yyyy.dat
-###  yyyy.msg
-###  yyyy.odb
-###  yyyy.prt
-###  yyyy.sim
-###  yyyy.sta
-#SBATCH --nodes=1  ### with &gt;1 node abaqus needs a nodeliste
-#SBATCH --ntasks-per-node=4
-#SBATCH --mem=500  ### memory (sum)
-#SBATCH --time=00:04:00
-### give a name, what ever you want
-#SBATCH --job-name=yyyy
-### you get emails when the job will finished or failed
-### set your right email
-#SBATCH --mail-type=END,FAIL
-#SBATCH --mail-user=xxxxx.yyyyyy@mailbox.tu-dresden.de
-### set your project
-#SBATCH -A p_xxxxxxx
-### Abaqus have its own MPI
-unset SLURM_GTIDS
-### load and start
-module load ABAQUS/2019
-abaqus interactive input=Rot-modell-BenjaminGroeger.inp job=yyyy cpus=4 mp_mode=mpi
+    ```console
+    marie@login$ module avail ANSYS
+    [...]
+    marie@login$ module load ANSYS/<version>
+    ```
 
-```
+    The section [runtime environment](runtime_environment.md) provides a comprehensive overview
+    on the module system and relevant commands.
 
-3. Start the batch script (name of our script is
-"batch-Rot-modell-BenjaminGroeger")
+## Abaqus
 
-```
-sbatch batch-Rot-modell-BenjaminGroeger      --->; you will get a jobnumber = JobID (for example 3130522)
-```
+[Abaqus](https://www.3ds.com/de/produkte-und-services/simulia/produkte/abaqus/) is a general-purpose
+finite element method program designed for advanced linear and nonlinear engineering analysis
+applications with facilities for linking in user-developed material models, elements, friction laws,
+etc.
 
-4. Control the status of the job
+### Guide by User
 
-```
-squeue -u your_login     -->; in column "ST" (Status) you will find a R=Running or P=Pending (waiting for resources)
-```
+Eike Dohmen (from Inst. f. Leichtbau und Kunststofftechnik) sent us the description of his
+Abaqus calculations. Please try to adapt your calculations accordingly. Eike is normally a
+Windows user and his description also contains some hints for basic Unix commands:
+[Abaqus-Slurm.pdf (only in German)](misc/ABAQUS-SLURM.pdf).
 
-## ANSYS
+### General
 
-ANSYS is a general-purpose finite-element program for engineering
-analysis, and includes preprocessing, solution, and post-processing
-functions. It is used in a wide range of disciplines for solutions to
-mechanical, thermal, and electronic problems. [ANSYS and ANSYS
-CFX](http://www.ansys.com) used to be separate packages in the past and
-are now combined.
+Abaqus calculations should be started using a job file (a.k.a. batch script). Please refer to the
+page covering the [batch system Slurm](../jobs_and_resources/slurm.md) if you are not familiar with
+Slurm or [writing job files](../jobs_and_resources/slurm.md#job-files).
 
-ANSYS, like all other installed software, is organized in so-called
-modules **TODO LINK** (RuntimeEnvironment). To list the available versions and load a
-particular ANSYS version, type
+??? example "Usage of Abaqus"
 
-```
-module avail ANSYS
-...
-module load ANSYS/VERSION
-```
+    (Thanks to Benjamin Groeger, Inst. f. Leichtbau und Kunststofftechnik.)
 
-In general, HPC-systems are not designed for interactive "GUI-working".
-Even so, it is possible to start a ANSYS workbench on Taurus (login
-nodes) interactively for short tasks. The second and recommended way is
-to use batch files. Both modes are documented in the following.
+    1. Prepare an Abaqus input file. You can start with the input example from Benjamin:
+    [Rot-modell-BenjaminGroeger.inp](misc/Rot-modell-BenjaminGroeger.inp)
+    2. Prepare a job file on ZIH systems like this
+    ```bash
+    #!/bin/bash
+    ### needs ca. 20 sec with 4 CPUs
+    ### generates files:
+    ###  yyyy.com
+    ###  yyyy.dat
+    ###  yyyy.msg
+    ###  yyyy.odb
+    ###  yyyy.prt
+    ###  yyyy.sim
+    ###  yyyy.sta
+    #SBATCH --nodes=1               # with >1 node, Abaqus needs a node list
+    #SBATCH --ntasks-per-node=4
+    #SBATCH --mem=500               # total memory
+    #SBATCH --time=00:04:00
+    #SBATCH --job-name=yyyy         # give a name, whatever you want
+    #SBATCH --mail-type=END,FAIL    # send email when the job finishes or fails
+    #SBATCH --mail-user=<name>@mailbox.tu-dresden.de  # set your email
+    #SBATCH -A p_xxxxxxx            # charge compute time to your project
+
+
+    # Abaqus has its own MPI
+    unset SLURM_GTIDS
+
+    # load module and start Abaqus
+    module load ABAQUS/2019
+    abaqus interactive input=Rot-modell-BenjaminGroeger.inp job=yyyy cpus=4 mp_mode=mpi
+    ```
+    3. Submit the job file (e.g., named `batch-Rot-modell-BenjaminGroeger.sh`)
+    ```console
+    marie@login$ sbatch batch-Rot-modell-BenjaminGroeger.sh      # Slurm will provide the Job Id (e.g., 3130522)
+    ```
+    4. Check the status of the job
+    ```console
+    marie@login$ squeue -u marie     # in column "ST" (status) you will find R (running) or PD (pending, i.e., waiting for resources)
+    ```
+
+## Ansys
+
+Ansys is a general-purpose finite element method program for engineering analysis, and includes
+preprocessing, solution, and post-processing functions. It is used in a wide range of disciplines
+for solutions to mechanical, thermal, and electronic problems.
+[Ansys and Ansys CFX](http://www.ansys.com) used to be separate packages in the past and are now
+combined.
+
+In general, HPC systems are not designed for interactive work with GUIs. Even so, it is possible to
+start an Ansys workbench on the login nodes interactively for short tasks. The second and
+**recommended way** is to use job files. Both modes are documented in the following.
+
+!!! note ""
+
+    Since the MPI library that Ansys uses internally (Platform MPI) has some problems integrating
+    seamlessly with Slurm, you have to unset the environment variable `SLURM_GTIDS` before
+    running the Ansys workbench in both interactive and batch mode.
 
 ### Using Workbench Interactively
 
-For fast things, ANSYS workbench can be invoked interactively on the
-login nodes of Taurus. X11 forwarding needs to enabled when establishing
-the SSH connection. For OpenSSH this option is '-X' and it is valuable
-to use compression of all data via '-C'.
+Ansys workbench (`runwb2`) can be invoked interactively on the login nodes of ZIH systems for short
+tasks. [X11 forwarding](../access/ssh_login.md#x11-forwarding) needs to be enabled when establishing the SSH
+connection. For OpenSSH the corresponding option is `-X` and it is valuable to use compression of
+all data via `-C`.
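+
+For example, the connection could be established like this (with `<zih-login-node>` as a placeholder
+for the login node you use):
+
+```console
+marie@local$ ssh -CX marie@<zih-login-node>
+```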
 
-```
-# Connect to taurus, e.g. ssh -CX
-module load ANSYS/VERSION
-runwb2
+```console
+# SSH connection established using -CX
+marie@login$ module load ANSYS/<version>
+marie@login$ runwb2
 ```
 
-If more time is needed, a CPU has to be allocated like this (see topic
-batch systems **TODO LINK** (BatchSystems) for further information):
+If more time is needed, a CPU has to be allocated like this (see
+[batch systems Slurm](../jobs_and_resources/slurm.md) for further information):
 
+```console
+marie@login$ module load ANSYS/<version>
+marie@login$ srun -t 00:30:00 --x11=first [SLURM_OPTIONS] --pty bash
+[...]
+marie@compute$ runwb2
 ```
-module load ANSYS/VERSION  
-srun -t 00:30:00 --x11=first [SLURM_OPTIONS] --pty bash
-runwb2
-```
-
-**Note:** The software NICE Desktop Cloud Visualization (DCV) enables to
-remotly access OpenGL-3D-applications running on taurus using its GPUs
-(cf. virtual desktops **TODO LINK** (Compendium.VirtualDesktops)). Using ANSYS
-together with dcv works as follows:
-
--   Follow the instructions within virtual
-    desktops **TODO LINK** (Compendium.VirtualDesktops)
 
-```
-module load ANSYS
-```
+!!! hint "Better use DCV"
 
-```
-unset SLURM_GTIDS
-```
+    The software NICE Desktop Cloud Visualization (DCV) enables remote access to OpenGL 3D
+    applications running on ZIH systems using their GPUs
+    (cf. [virtual desktops](virtual_desktops.md)).
 
--   Note the hints w.r.t. GPU support on dcv side
+Ansys can be used under DCV to make use of GPU acceleration. Follow the instructions within
+[virtual desktops](virtual_desktops.md) to set up a DCV session. Then, load an Ansys module, unset
+the environment variable `SLURM_GTIDS`, and finally start the workbench:
 
-```
-runwb2
+```console
+marie@gpu$ module load ANSYS
+marie@gpu$ unset SLURM_GTIDS
+marie@gpu$ runwb2
 ```
 
 ### Using Workbench in Batch Mode
 
-The ANSYS workbench (runwb2) can also be used in a batch script to start
-calculations (the solver, not GUI) from a workbench project into the
-background. To do so, you have to specify the -B parameter (for batch
-mode), -F for your project file, and can then either add different
-commands via -E parameters directly, or specify a workbench script file
-containing commands via -R.
+The Ansys workbench (`runwb2`) can also be used in a job file to start calculations (the solver,
+not the GUI) from a workbench project in the background. To do so, you have to specify the `-B`
+parameter (for batch mode) and `-F` for your project file. You can then either add individual
+commands directly via `-E` parameters, or specify a workbench script file containing commands via `-R`.
 
-**NOTE:** Since the MPI library that ANSYS uses internally (Platform
-MPI) has some problems integrating seamlessly with SLURM, you have to
-unset the enviroment variable SLURM_GTIDS in your job environment before
-running workbench. An example batch script could look like this:
+??? example "Ansys Job File"
 
+    ```bash
     #!/bin/bash
     #SBATCH --time=0:30:00
     #SBATCH --nodes=1
     #SBATCH --ntasks=2
     #SBATCH --mem-per-cpu=1000M
 
+    unset SLURM_GTIDS              # Odd, but necessary!
 
-    unset SLURM_GTIDS         # Odd, but necessary!
-
-    module load ANSYS/VERSION
+    module load ANSYS/<version>
 
     runwb2 -B -F Workbench_Taurus.wbpj -E 'Project.Update' -E 'Save(Overwrite=True)'
     #or, if you wish to use a workbench replay file, replace the -E parameters with: -R mysteps.wbjn
+    ```
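+
+    Assuming the job file is saved as, e.g., `ansys_job.sh` (a name chosen here just for
+    illustration), submit it to the batch system via
+
+    ```console
+    marie@login$ sbatch ansys_job.sh
+    ```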
 
 ### Running Workbench in Parallel
 
-Unfortunately, the number of CPU cores you wish to use cannot simply be
-given as a command line parameter to your runwb2 call. Instead, you have
-to enter it into an XML file in your home. This setting will then be
-used for all your runwb2 jobs. While it is also possible to edit this
-setting via the Mechanical GUI, experience shows that this can be
-problematic via X-Forwarding and we only managed to use the GUI properly
-via DCV **TODO LINK** (DesktopCloudVisualization), so we recommend you simply edit
-the XML file directly with a text editor of your choice. It is located
+Unfortunately, the number of CPU cores you wish to use cannot simply be given as a command line
+parameter to your `runwb2` call. Instead, you have to enter it into an XML file in your `home`
+directory. This setting will then be **used for all** your `runwb2` jobs. While it is also possible
+to edit this setting via the Mechanical GUI, experience shows that this can be problematic via
+X11-forwarding and we only managed to use the GUI properly via [DCV](virtual_desktops.md), so we
+recommend you simply edit the XML file directly with a text editor of your choice. It is located
 under:
 
-'$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml'
+`$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml`
 
-(mind the space in there.) You might have to adjust the ANSYS Version
-(v181) in the path. In this file, you can find the parameter
+(Mind the space in the path.) You might have to adjust the Ansys version
+(here `v181`) in the path to your preferred version. In this file, you can find the parameter
 
-    <MaxNumberProcessors>2</MaxNumberProcessors>
+`<MaxNumberProcessors>2</MaxNumberProcessors>`
 
-that you can simply change to something like 16 oder 24. For now, you
-should stay within single-node boundaries, because multi-node
-calculations require additional parameters. The number you choose should
-match your used --cpus-per-task parameter in your sbatch script.
+that you can simply change to something like 16 or 24. For now, you should stay within single-node
+boundaries, because multi-node calculations require additional parameters. The number you choose
+should match the `--cpus-per-task` parameter in your job file.
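+
+If you prefer the command line, a minimal sketch for setting this value (assuming 16 cores and Ansys
+version `v181`; adjust both to your setup) could look like this:
+
+```console
+marie@login$ sed -i 's|<MaxNumberProcessors>.*</MaxNumberProcessors>|<MaxNumberProcessors>16</MaxNumberProcessors>|' \
+    "$HOME/.mw/Application Data/Ansys/v181/SolveHandlers.xml"
+```
+
+The corresponding job file should then request the same number of cores, e.g., via
+`#SBATCH --cpus-per-task=16`.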
 
 ## COMSOL Multiphysics
 
-"[COMSOL Multiphysics](http://www.comsol.com) (formerly FEMLAB) is a
-finite element analysis, solver and Simulation software package for
-various physics and engineering applications, especially coupled
-phenomena, or multiphysics."
-[\[http://en.wikipedia.org/wiki/COMSOL_Multiphysics Wikipedia\]](
-    http://en.wikipedia.org/wiki/COMSOL_Multiphysics Wikipedia)
+[COMSOL Multiphysics](http://www.comsol.com) (formerly FEMLAB) is a finite element analysis, solver,
+and simulation software package for various physics and engineering applications, especially coupled
+phenomena (multiphysics).
 
-Comsol may be used remotely on ZIH machines or locally on the desktop,
-using ZIH license server.
+COMSOL may be used remotely on ZIH systems or locally on the desktop, using the ZIH license server.
 
-For using Comsol on ZIH machines, the following operating modes (see
-Comsol manual) are recommended:
+For using COMSOL on ZIH systems, we recommend the interactive client-server mode (see COMSOL
+manual).
 
--   Interactive Client Server Mode
+### Client-Server Mode
 
-In this mode Comsol runs as server process on the ZIH machine and as
-client process on your local workstation. The client process needs a
-dummy license for installation, but no license for normal work. Using
-this mode is almost undistinguishable from working with a local
-installation. It works well with Windows clients. For this operation
-mode to work, you must build an SSH tunnel through the firewall of ZIH.
-For further information, see the Comsol manual.
+In this mode, COMSOL runs as a server process on the ZIH system and as a client process on your local
+workstation. The client process needs a dummy license for installation, but no license for normal
+work. Using this mode is almost undistinguishable from working with a local installation. It also works
+well with Windows clients. For this operation mode to work, you must build an SSH tunnel through the
+firewall of ZIH. For further information, please refer to the COMSOL manual.
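+
+A sketch of such a tunnel (assuming the COMSOL server listens on its default port 2036 on the compute
+node `<compute-node>`; adjust the port if your server reports a different one, and replace
+`<zih-login-node>` with the login node you use) could look like this:
+
+```console
+marie@local$ ssh -L 2036:<compute-node>:2036 marie@<zih-login-node>
+```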
 
-Example for starting the server process (4 cores, 10 GB RAM, max. 8
-hours running time):
+### Usage
 
-    module load COMSOL
-    srun -c4 -t 8:00 --mem-per-cpu=2500 comsol -np 4 server
+??? example "Server Process"
 
--   Interactive Job via Batchsystem SLURM
+    Start the server process with 4 cores, 10 GB RAM and max. 8 hours running time using an
+    interactive Slurm job like this:
 
-<!-- -->
+    ```console
+    marie@login$ module load COMSOL
+    marie@login$ srun -n 1 -c 4 --mem-per-cpu=2500 -t 8:00:00 comsol -np 4 server
+    ```
 
-    module load COMSOL
-    srun -n1 -c4 --mem-per-cpu=2500 -t 8:00 --pty --x11=first comsol -np 4
+??? example "Interactive Job"
+
+    If you'd like to work interactively using COMSOL, you can request an interactive job with,
+    e.g., 4 cores, 2500 MB RAM per core, 8 hours running time, and X11 forwarding to open the COMSOL GUI:
+
+    ```console
+    marie@login$ module load COMSOL
+    marie@login$ srun -n 1 -c 4 --mem-per-cpu=2500 -t 8:00:00 --pty --x11=first comsol -np 4
+    ```
 
-Man sollte noch schauen, ob das Rendering unter Options -> Preferences
--> Graphics and Plot Windows auf Software-Rendering steht - und dann
-sollte man campusintern arbeiten knnen.
+    Please make sure that the option *Preferences* --> *Graphics* --> *Rendering* is set to *software
+    rendering*. Then, you can work from within the campus network.
 
--   Background Job via Batchsystem SLURM
+??? example "Background Job"
 
-<!-- -->
+    Interactive working is great for debugging and setting up experiments. But, if you have a huge
+    workload, you should definitely rely on job files. I.e., you put the necessary steps to get
+    the work done into scripts and submit these scripts to the batch system. These two steps are
+    outlined below:
 
+    1. Create a [job file](../jobs_and_resources/slurm.md#job-files), e.g.
+    ```bash
     #!/bin/bash
     #SBATCH --time=24:00:00
     #SBATCH --nodes=2
@@ -251,21 +242,33 @@ sollte man campusintern arbeiten knnen.
 
     module load COMSOL
     srun comsol -mpi=intel batch -inputfile ./MyInputFile.mph
-
-Submit via: `sbatch <filename>`
+    ```
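+    2. Submit the job file to the batch system via
+    ```console
+    marie@login$ sbatch <filename>
+    ```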
 
 ## LS-DYNA
 
-Both, the shared memory version and the distributed memory version (mpp)
-are installed on all machines.
+[LS-DYNA](https://www.dynamore.de/de) is a general-purpose, implicit and explicit FEM software for
+nonlinear structural analysis. Both the shared memory version and the distributed memory version
+(`mpp`) are installed on ZIH systems.
+
+You need a job file (a.k.a. batch script) to run the MPI version.
 
-To run the MPI version on Taurus or Venus you need a batchfile (sumbmit
-with `sbatch <filename>`) like:
+??? example "Minimal Job File"
 
+    ```bash
     #!/bin/bash
-    #SBATCH --time=01:00:00   # walltime
-    #SBATCH --ntasks=16   # number of processor cores (i.e. tasks)
+    #SBATCH --time=01:00:00       # walltime
+    #SBATCH --ntasks=16           # number of processor cores (i.e. tasks)
     #SBATCH --mem-per-cpu=1900M   # memory per CPU core
-    
+
     module load ls-dyna
     srun mpp-dyna i=neon_refined01_30ms.k memory=120000000
+    ```
+
+    Submit the job file to the batch system via
+
+    ```console
+    marie@login$ sbatch <filename>
+    ```
+
+    Please refer to the section [Slurm](../jobs_and_resources/slurm.md) for further details and
+    options on the batch system as well as monitoring commands.
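+
+    For example, a quick way to check the state of your jobs is
+
+    ```console
+    marie@login$ squeue -u marie
+    ```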
diff --git a/Compendium_attachments/FEMSoftware/ABAQUS-SLURM.pdf b/doc.zih.tu-dresden.de/docs/software/misc/ABAQUS-SLURM.pdf
similarity index 100%
rename from Compendium_attachments/FEMSoftware/ABAQUS-SLURM.pdf
rename to doc.zih.tu-dresden.de/docs/software/misc/ABAQUS-SLURM.pdf
diff --git a/Compendium_attachments/FEMSoftware/Rot-modell-BenjaminGroeger.inp b/doc.zih.tu-dresden.de/docs/software/misc/Rot-modell-BenjaminGroeger.inp
similarity index 100%
rename from Compendium_attachments/FEMSoftware/Rot-modell-BenjaminGroeger.inp
rename to doc.zih.tu-dresden.de/docs/software/misc/Rot-modell-BenjaminGroeger.inp
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 038262ca7b5a4418ceefb66b0d14f57d61582609..053c349ba619d18174a0784066fbc68618eaf8ce 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -5,6 +5,7 @@ Amber
 Amdahl's
 analytics
 anonymized
+Ansys
 APIs
 AVX
 BeeGFS
@@ -17,11 +18,13 @@ CCM
 ccNUMA
 centauri
 CentOS
+CFX
 cgroups
 checkpointing
 Chemnitz
 citable
 CLI
+COMSOL
 conda
 CPU
 CPUID
@@ -36,6 +39,7 @@ dataframes
 DataFrames
 datamover
 DataParallel
+DCV
 DDP
 DDR
 DFG
@@ -80,6 +84,7 @@ gnuplot
 GPU
 GPUs
 GROMACS
+GUIs
 hadoop
 haswell
 HBM
@@ -112,9 +117,11 @@ JupyterHub
 JupyterLab
 Keras
 KNL
+Kunststofftechnik
 LAMMPS
 LAPACK
 lapply
+Leichtbau
 LINPACK
 linter
 Linter
@@ -144,6 +151,8 @@ mpif
 mpifort
 mpirun
 multicore
+multiphysics
+Multiphysics
 multithreaded
 Multithreading
 MultiThreading
@@ -199,6 +208,7 @@ Pre
 Preload
 preloaded
 preloading
+preprocessing
 PSOCK
 Pthreads
 pymdownx
@@ -271,6 +281,7 @@ tracefile
 tracefiles
 transferability
 Trition
+undistinguishable
 unencrypted
 uplink
 userspace