diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
index ff2a9ceb43aca51cedecf234ca67b680799eabab..f96168d8dc01b2ce9425d0f5b226f59bf35aa800 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
@@ -165,30 +165,44 @@ allocation with desired switch count or the time limit expires. Acceptable time
 ## Interactive Jobs
 
 Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
-the login nodes. For longer interactive sessions, you can allocate cores on the compute node with
-the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
+the login nodes. For longer interactive sessions, you can allocate resources on the compute node
+with the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
 
 `salloc` returns a new shell on the node where you submitted the job. You need to use the command
 `srun` in front of the following commands to have them executed on the allocated
-resources. If you allocate more than one task, please be aware that `srun` will run the command on
-each allocated task by default! To release the allocated resources, invoke the command `exit` or
+resources. If you request more than one task, please be aware that `srun` will run the command
+on each allocated task by default! To release the allocated resources, invoke the command `exit` or
 `scancel <jobid>`.
 
-```console
-marie@login$ salloc --nodes=2
-salloc: Pending job allocation 27410653
-salloc: job 27410653 queued and waiting for resources
-salloc: job 27410653 has been allocated resources
-salloc: Granted job allocation 27410653
-salloc: Waiting for resource configuration
-salloc: Nodes taurusi[6603-6604] are ready for job
-marie@login$ hostname
-tauruslogin5.taurus.hrsk.tu-dresden.de
-marie@login$ srun hostname
-taurusi6604.taurus.hrsk.tu-dresden.de
-taurusi6603.taurus.hrsk.tu-dresden.de
-marie@login$ exit # ending the resource allocation
-```
+!!! example "Example: Interactive allocation using `salloc`"
+
+    The following code listing depicts the allocation of two nodes with two tasks each and a
+    time limit of one hour on the cluster `Barnard` for interactive usage.
+
+    ```console linenums="1"
+    marie@login.barnard$ salloc --nodes=2 --ntasks-per-node=2 --time=01:00:00
+    salloc: Pending job allocation 1234567
+    salloc: job 1234567 queued and waiting for resources
+    salloc: job 1234567 has been allocated resources
+    salloc: Granted job allocation 1234567
+    salloc: Waiting for resource configuration
+    salloc: Nodes n[1184,1223] are ready for job
+    [...]
+    marie@login.barnard$ hostname
+    login1
+    marie@login.barnard$ srun hostname
+    n1184
+    n1184
+    n1223
+    n1223
+    marie@login.barnard$ exit # ending the resource allocation
+    ```
+
+    After Slurm has successfully allocated resources for the job, a new shell is created on the
+    submit host, as the output of `hostname` confirms (cf. lines 9-10).
+
+    In order to run commands on the allocated resources, you need to prefix them with `srun`
+    (cf. lines 11 ff.).
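+
+    The same `srun` prefix is used to start your application on the allocated resources.
+    As a minimal sketch (`./my_application` is only a placeholder for your own binary), it is
+    launched on all four allocated tasks like this:
+
+    ```console
+    marie@login.barnard$ srun ./my_application
+    ```
+
+    If you only want a single instance, restrict the number of tasks, e.g.
+    `srun --ntasks=1 ./my_application`.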
 
 The command `srun` also creates an allocation, if it is running outside any `sbatch` or `salloc`
 allocation.
@@ -218,13 +232,6 @@ taurusi6604.taurus.hrsk.tu-dresden.de
     shell, as shown in the example above. If you missed adding `-l` when submitting the interactive
     session, no worries, you can also source this file manually later on (`source /etc/profile`).
 
-!!! note "Partition `interactive`"
-
-    A dedicated partition `interactive` is reserved for short jobs (< 8h) with no more than one job
-    per user. An interactive partition is available for every regular partition, e.g.
-    `alpha-interactive` for `alpha`. Please check the availability of nodes there with
-    `sinfo |grep 'interactive\|AVAIL' |less`
-
 ### Interactive X11/GUI Jobs
 
 Slurm will forward your X11 credentials to the first (or even all) node for a job with the
diff --git a/doc.zih.tu-dresden.de/docs/software/visualization.md b/doc.zih.tu-dresden.de/docs/software/visualization.md
index 427bf746840383e69c9a4a85a84997d618cc9b15..3a4ce5b05caa6fbd6bff94c184304de30f151db2 100644
--- a/doc.zih.tu-dresden.de/docs/software/visualization.md
+++ b/doc.zih.tu-dresden.de/docs/software/visualization.md
@@ -158,7 +158,7 @@ processes.
 
     ```console
     marie@login$ module load ParaView/5.7.0-osmesa
-    marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --partition=interactive --pty pvserver --force-offscreen-rendering
+    marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --pty pvserver --force-offscreen-rendering
     srun: job 2744818 queued and waiting for resources
     srun: job 2744818 has been allocated resources
     Waiting for client...
@@ -254,5 +254,5 @@ it into thinking your provided GL rendering version is higher than what it actua
     marie@login$ export MESA_GL_VERSION_OVERRIDE=3.2
 
     # 3rd, start the ParaView GUI inside an interactive job. Don't forget the --x11 parameter for X forwarding:
-    marie@login$ srun --ntasks=1 --cpus-per-task=1 --partition=interactive --mem-per-cpu=2500 --pty --x11=first paraview
+    marie@login$ srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=2500 --pty --x11=first paraview
     ```