From 1f62899b42890901b0681089ae19f91d78265eb7 Mon Sep 17 00:00:00 2001
From: lazariv <taras.lazariv@tu-dresden.de>
Date: Fri, 27 Aug 2021 10:25:01 +0000
Subject: [PATCH] Remove dead link

---
 .../docs/software/big_data_frameworks_spark.md               | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 9b7274016..5c3eee415 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -156,10 +156,7 @@ Please use a [batch job](../jobs_and_resources/slurm.md) similar to
 
-There are two general options on how to work with Jupyter notebooks:
-There is [JupyterHub](../access/jupyterhub.md), where you can simply
-run your Jupyter notebook on HPC nodes (the preferable way). Also, you
-can run a remote Jupyter server manually within a GPU job using
-the modules and packages you need. You can find the manual server
-setup [here](deep_learning.md).
+The preferable way to work with Jupyter notebooks is to use
+[JupyterHub](../access/jupyterhub.md), where you can simply run your
+Jupyter notebook on HPC nodes.
 
 ### Preparation
 
-- 
GitLab