diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
index 6cd6a94fb93c6861393d7ba3cb3b8689a28f7637..869e80aafcf59b344f06f2c859cf440cc18f1078 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
@@ -242,7 +242,6 @@ You can run Jupyter notebooks with Spark on the ZIH systems in a similar way as
 [JupyterHub](../access/jupyterhub.md) page. The interaction of Flink with JupyterHub is currently
 under examination; results will be posted here once available.
 
-
 ### Spawning a Notebook
 
 Go to [https://taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter).
@@ -252,7 +251,7 @@ In the tab "Advanced", go to the field "Preload modules" and select the followin
 Spark/3.0.1-Hadoop-2.7-Java-1.8-Python-3.7.4-GCCcore-8.3.0
 ```
 
-When your Jupyter instance is started, you can set up Spark. Since the setup in the notebook 
+When your Jupyter instance is started, you can set up Spark. Since the setup in the notebook
 requires more steps than in an interactive session, we have created an example notebook that you can
 use as a starting point for convenience: [SparkExample.ipynb](misc/SparkExample.ipynb)
 
@@ -261,7 +260,6 @@ use as a starting point for convenience: [SparkExample.ipynb](misc/SparkExample.
     This notebook only works with the Spark module mentioned above. When using other Spark modules,
     you might have to perform additional or different steps to get Spark running.
 
-
 !!! note
 
     You could work with simple examples in your home directory, but, according to the
diff --git a/doc.zih.tu-dresden.de/docs/software/flink.md b/doc.zih.tu-dresden.de/docs/software/flink.md
index b20d340fdf810fb508cf2f2714633995321446b3..bf4dddbae5e6550757961ab7181d27e24402363f 100644
--- a/doc.zih.tu-dresden.de/docs/software/flink.md
+++ b/doc.zih.tu-dresden.de/docs/software/flink.md
@@ -1,7 +1,7 @@
 # Apache Flink
 
 [Apache Flink](https://flink.apache.org/) is a framework for processing and integrating Big Data.
-It offers a similar API as [Apache Spark](big_data_frameworks_spark.md), but is more appropriate
+It offers an API similar to [Apache Spark](big_data_frameworks.md), but is more appropriate
 for data stream processing. You can check module versions and availability with the command:
 
 ```console
@@ -158,13 +158,11 @@ example below:
     [workspaces](../data_lifecycle/workspaces.md) for your study and work projects**. For this
     reason, you have to use the advanced options of JupyterHub and put "/" in the "Workspace scope" field.
 
-    
 ## Jupyter Notebook
 
 You can run Jupyter notebooks with Flink on the ZIH systems in a similar way as described on the
 [JupyterHub](../access/jupyterhub.md) page.
 
-
 ### Spawning a Notebook
 
 Go to [https://taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter).
@@ -173,8 +171,8 @@ In the tab "Advanced", go to the field "Preload modules" and select the followin
 ```
 Flink/1.12.3-Java-1.8.0_161-OpenJDK-Python-3.7.4-GCCcore-8.3.0
 ```
-    
-When your Jupyter instance is started, you can set up Flink. Since the setup in the notebook 
+
+When your Jupyter instance is started, you can set up Flink. Since the setup in the notebook
 requires more steps than in an interactive session, we have created an example notebook that you can
 use as a starting point for convenience: [FlinkExample.ipynb](misc/FlinkExample.ipynb)
 
@@ -182,7 +180,7 @@ use as a starting point for convenience: [FlinkExample.ipynb](misc/FlinkExample.
 
     This notebook only works with the Flink module mentioned above. When using other Flink modules,
     you might have to perform additional or different steps to get Flink running.
-    
+
 !!! note
 
     You could work with simple examples in your home directory, but, according to the
@@ -190,7 +188,6 @@ use as a starting point for convenience: [FlinkExample.ipynb](misc/FlinkExample.
     [workspaces](../data_lifecycle/workspaces.md) for your study and work projects**. For this
     reason, you have to use the advanced options of JupyterHub and put "/" in the "Workspace scope" field.
 
-    
 ## FAQ
 
 Q: Command `source framework-configure.sh hadoop