diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 9b727401644f1791dd999511fde1c6c8fa49cbad..5c3eee415359c62432a409dc1f0f55818c8986bb 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -156,10 +156,7 @@ Please use a [batch job](../jobs_and_resources/slurm.md) similar to
 
 There are two general options on how to work with Jupyter notebooks:
 There is [JupyterHub](../access/jupyterhub.md), where you can simply
-run your Jupyter notebook on HPC nodes (the preferable way). Also, you
-can run a remote Jupyter server manually within a GPU job using
-the modules and packages you need. You can find the manual server
-setup [here](deep_learning.md).
+run your Jupyter notebook on HPC nodes (the preferable way).
 
 ### Preparation
 