diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
index 47c7567b1a063a4b67cca2982d53bf729b288295..c4e7fad813a4e41cec05d63bb27c53d7b383e0d9 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
@@ -37,16 +37,16 @@ as via [Jupyter notebooks](#jupyter-notebook). All three ways are outlined in th
 
 ### Default Configuration
 
-The Spark and Flink modules are available in both `scs5` and `ml` environments.
-Thus, Spark and Flink can be executed using different CPU architectures, e.g., Haswell and Power9.
+The Spark and Flink modules are available in the `power` environment.
+Thus, Spark and Flink can be executed on the Power CPU architecture.
 
 Let us assume that two nodes should be used for the computation. Use a `srun` command similar to
-the following to start an interactive session using the partition `haswell`. The following code
-snippet shows a job submission to haswell nodes with an allocation of two nodes with 60000 MB main
+the following to start an interactive session. The code snippet below shows a
+job submission that allocates two nodes with 60000 MB of main
 memory exclusively for one hour:
 
 ```console
-marie@login$ srun --partition=haswell --nodes=2 --mem=60000M --exclusive --time=01:00:00 --pty bash -l
+marie@login.power$ srun --nodes=2 --mem=60000M --exclusive --time=01:00:00 --pty bash -l
 ```
 
 Once you have the shell, load desired Big Data framework using the command
@@ -117,11 +117,11 @@ can start with a copy of the default configuration ahead of your interactive ses
 
 === "Spark"
     ```console
-    marie@login$ cp -r $SPARK_HOME/conf my-config-template
+    marie@login.power$ cp -r $SPARK_HOME/conf my-config-template
     ```
 === "Flink"
     ```console
-    marie@login$ cp -r $FLINK_ROOT_DIR/conf my-config-template
+    marie@login.power$ cp -r $FLINK_ROOT_DIR/conf my-config-template
     ```
 
 After you have changed `my-config-template`, you can use your new template in an interactive job
@@ -175,7 +175,6 @@ example below:
         ```bash
         #!/bin/bash -l
         #SBATCH --time=01:00:00
-        #SBATCH --partition=haswell
         #SBATCH --nodes=2
         #SBATCH --exclusive
         #SBATCH --mem=60000M
@@ -205,7 +204,6 @@ example below:
         ```bash
         #!/bin/bash -l
         #SBATCH --time=01:00:00
-        #SBATCH --partition=haswell
         #SBATCH --nodes=2
         #SBATCH --exclusive
         #SBATCH --mem=60000M