diff --git a/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md index 39ad90b6cb5210657440df8d9acf39f9eb325d0b..d38ce1aa3fb1554cebd5e5691ab99d6499d45949 100644 --- a/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md +++ b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md @@ -18,7 +18,7 @@ Click here, to start a session on our JupyterHub: [https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/\~(partition\~'dcv\~cpuspertask\~'6\~gres\~'gpu\*3a1\~mempercpu\~'2583\~environment\~'production)](https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/~(partition~'dcv~cpuspertask~'6~gres~'gpu*3a1~mempercpu~'2583~environment~'test))\<br /> This link starts your session on the dcv partition (taurusi210\[7-8\]) with a GPU, 6 CPU cores and 2583 MB memory per core. Optionally you can modify many different SLURM parameters. For this -follow the general [JupyterHub](../software/jupyterhub.md) documentation. +follow the general [JupyterHub](../access/jupyterhub.md) documentation. Your browser now should load into the JupyterLab application which looks like this: diff --git a/doc.zih.tu-dresden.de/docs/software/jupyterhub.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md similarity index 100% rename from doc.zih.tu-dresden.de/docs/software/jupyterhub.md rename to doc.zih.tu-dresden.de/docs/access/jupyterhub.md diff --git a/doc.zih.tu-dresden.de/docs/software/jupyterhub_for_teaching.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md similarity index 100% rename from doc.zih.tu-dresden.de/docs/software/jupyterhub_for_teaching.md rename to doc.zih.tu-dresden.de/docs/access/jupyterhub_for_teaching.md diff --git a/doc.zih.tu-dresden.de/docs/access/login.md b/doc.zih.tu-dresden.de/docs/access/login.md index 9635640e9b73057af2b0eef14a7f29417c80d1b3..f15efa68b681921174314be5ea69aac0977fa595 100644 --- a/doc.zih.tu-dresden.de/docs/access/login.md +++ b/doc.zih.tu-dresden.de/docs/access/login.md @@ -76,4 +76,4 @@ A JupyterHub installation offering IPython Notebook is available under: <https://taurus.hrsk.tu-dresden.de/jupyter> -See the documentation under [JupyterHub](../software/jupyterhub.md). +See the documentation under [JupyterHub](../access/jupyterhub.md). diff --git a/doc.zih.tu-dresden.de/docs/access/web_vnc.md b/doc.zih.tu-dresden.de/docs/access/web_vnc.md index 88c020b902575bb749cb39fc594f8320cf0a4627..d42f019f8ce8640fe16faf10c9453351716f5e91 100644 --- a/doc.zih.tu-dresden.de/docs/access/web_vnc.md +++ b/doc.zih.tu-dresden.de/docs/access/web_vnc.md @@ -12,7 +12,7 @@ Also, we have prepared a script that makes launching the VNC server much easier. **Check out the [new documentation about virtual desktops](../software/virtual_desktops.md).** -The [JupyterHub](../software/jupyterhub.md) service is now able to start a VNC session based on the +The [JupyterHub](../access/jupyterhub.md) service is now able to start a VNC session based on the Singularity container mentioned here. Quickstart: 1 Click here to start a session immediately: \<a diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md index 13c8c9c8b9892dffb7f60db3cfb00744608df892..bad6e1be5691fd2573355ae24af5db288a9f5929 100644 --- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md +++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md @@ -163,7 +163,7 @@ moment. 
### JupyterHub -There is [JupyterHub](../software/jupyterhub.md) on Taurus, where you can simply run +There is [JupyterHub](../access/jupyterhub.md) on Taurus, where you can simply run your Jupyter notebook on Alpha-Centauri sub-cluster. Also, for more specific cases you can run a manually created remote jupyter server. You can find the manual server setup [here](../software/deep_learning.md). However, the simplest option for beginners is using diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md index acdea9af1e75308acd0a2fe78c8465dfeecef3be..2368e95322517f33dcbed69b329f1ead1e2ec2fb 100644 --- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md +++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md @@ -62,7 +62,7 @@ Additional hardware: - [TensorFlow on x86](../software/deep_learning.md) - [PyTorch on HPC-DA (Power9)](../software/py_torch.md) - [Python on HPC-DA (Power9)](../software/python.md) -- [JupyterHub](../software/jupyterhub.md) +- [JupyterHub](../access/jupyterhub.md) - [R on HPC-DA (Power9)](../software/data_analytics_with_r.md) - [Big Data frameworks: Apache Spark, Apache Flink, Apache Hadoop] **todo** BigDataFrameworks:ApacheSparkApacheFlinkApacheHadoop diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md index cd079210e8d4ecd40c0ad4b46370a7dc8b91dee7..aabe64378dde3dbe3291c3ac2b56b8e1809d53fa 100644 --- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md +++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md @@ -77,11 +77,11 @@ Rscript /path/to/script/your_script.R param1 param2 In addition to using interactive srun jobs and batch jobs, there is another way to work with the **R** on Taurus. JupyterHub is a quick and easy way to work with jupyter notebooks on Taurus. -See the [JupyterHub page](jupyterhub.md) for detailed instructions. +See the [JupyterHub page](../access/jupyterhub.md) for detailed instructions. -The [production environment](jupyterhub.md#standard-environments) of JupyterHub contains R as a module -for all partitions. R could be run in the Notebook or Console for -[JupyterLab](jupyterhub.md#jupyterlab). +The [production environment](../access/jupyterhub.md#standard-environments) of JupyterHub contains R +as a module for all partitions. R could be run in the Notebook or Console for +[JupyterLab](../access/jupyterhub.md#jupyterlab). ## RStudio @@ -93,7 +93,7 @@ x86 (scs5) and Power9 (ml) nodes/architectures. The best option to run RStudio is to use JupyterHub. RStudio will work in a browser. It is currently available in the **test** environment on both x86 (**scs5**) and Power9 (**ml**) architectures/partitions. It can be started similarly as a new kernel from -[JupyterLab](jupyterhub.md#jupyterlab) launcher. See the picture below. +[JupyterLab](../access/jupyterhub.md#jupyterlab) launcher. See the picture below. **todo** image \<img alt="environments.png" height="70" diff --git a/doc.zih.tu-dresden.de/docs/software/deep_learning.md b/doc.zih.tu-dresden.de/docs/software/deep_learning.md index 14f64769cbe8c17a7eb8a11f28e0105e736c4355..32455d4b704309ac9512cc31ae5ae91492c67d5c 100644 --- a/doc.zih.tu-dresden.de/docs/software/deep_learning.md +++ b/doc.zih.tu-dresden.de/docs/software/deep_learning.md @@ -144,10 +144,10 @@ jupyterhub. These sections show how to run and set up a remote jupyter server within a sbatch GPU job and which modules and packages you need for that. 
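For orientation, a minimal sketch of such a job file is shown below. It assumes the `ml` partition, the `modenv/ml` module environment, and a virtual environment named `envtest` created as in the Python examples further down; the concrete modules and packages are covered in the following sections, so treat the names here as placeholders.

```Bash
#!/bin/bash
#SBATCH --partition=ml        # Power9 nodes with GPUs; adjust to the partition you use
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=7
#SBATCH --gres=gpu:1
#SBATCH --mem-per-cpu=5772
#SBATCH --time=02:00:00

module load modenv/ml                             # module environment for the ml partition
module load Python                                # default Python module
source python-environments/envtest/bin/activate   # hypothetical virtual environment with jupyter installed
jupyter notebook --no-browser --port=8887         # reach it through an SSH tunnel; the token is printed to the job output
```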
-**Note:** On Taurus, there is a [JupyterHub](jupyterhub.md), where you do not need the manual server
-setup described below and can simply run your Jupyter notebook on HPC nodes. Keep in mind that with
-Jupyterhub you can't work with some special instruments. However general data analytics tools are
-available.
+**Note:** On Taurus, there is a [JupyterHub](../access/jupyterhub.md), where you do not need the
+manual server setup described below and can simply run your Jupyter notebook on HPC nodes. Keep in
+mind that with JupyterHub some specialized tools are not available. However, general data
+analytics tools are available.

The remote Jupyter server is able to offer more freedom with settings and approaches.

@@ -313,7 +313,7 @@ important to use SSL cert
To login into the jupyter notebook site, you have to enter the **token**. (`https://localhost:8887`). Now you can create and execute notebooks on Taurus with GPU support.

-If you would like to use [JupyterHub](jupyterhub.md) after using a remote manually configurated
+If you would like to use [JupyterHub](../access/jupyterhub.md) after using a remote manually configured
jupyter server (example above) you need to change the name of the configuration file (`/home//.jupyter/jupyter_notebook_config.py`) to any other.

diff --git a/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
index 8740bfd78ae5b4f5c8d9f6138ed7f64a23ae5f09..f662ce564e41133ea3bde14eacc6aee99f48b9a0 100644
--- a/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
+++ b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
@@ -263,10 +263,10 @@ with TensorFlow on Taurus with GUI (graphic user interface) in a **web browser**
to see intermediate results step by step of your work. This can be useful for users who dont have huge experience with HPC or Linux.

-There is [JupyterHub](jupyterhub.md) on Taurus, where you can simply run your Jupyter notebook on
-HPC nodes. Also, for more specific cases you can run a manually created remote jupyter server. You
-can find the manual server setup [here](deep_learning.md). However, the simplest option for
-beginners is using JupyterHub.
+There is [JupyterHub](../access/jupyterhub.md) on Taurus, where you can simply run your Jupyter
+notebook on HPC nodes. Also, for more specific cases you can run a manually created remote Jupyter
+server. You can find the manual server setup [here](deep_learning.md). However, the simplest option
+for beginners is using JupyterHub.

JupyterHub is available at [taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter)

@@ -277,14 +277,14 @@ You can select the required number of CPUs and GPUs. For the acquaintance with t
he examples below the recommended amount of CPUs and 1 GPU will be enough. With the advanced form, you can use the configuration with 1 GPU and 7 CPUs. To access for all your workspaces use " / " in the
-workspace scope. Please check updates and details [here](jupyterhub.md).
+workspace scope. Please check updates and details [here](../access/jupyterhub.md).

Several Tensorflow and PyTorch examples for the Jupyter notebook have been prepared based on some simple tasks and models which will give you an understanding of how to work with ML frameworks and JupyterHub. It could be found as the [attachment] **todo** %ATTACHURL%/machine_learning_example.py in the bottom of the page.
A detailed explanation and examples for TensorFlow can be found [here](tensor_flow_on_jupyter_notebook.md). For the Pytorch - [here](py_torch.md). Usage information
-about the environments for the JupyterHub could be found [here](jupyterhub.md) in the chapter
+about the environments for JupyterHub can be found [here](../access/jupyterhub.md) in the chapter
*Creating and using your own environment*.

Versions: TensorFlow 1.14, 1.15, 2.0, 2.1; PyTorch 1.1, 1.3 are

diff --git a/doc.zih.tu-dresden.de/docs/software/keras.md b/doc.zih.tu-dresden.de/docs/software/keras.md
index 122c446af42cf552aeec59fd4b615955a2d5a1e0..37bf6f4d5a04b249e27bf692499527ab9961ac37 100644
--- a/doc.zih.tu-dresden.de/docs/software/keras.md
+++ b/doc.zih.tu-dresden.de/docs/software/keras.md
@@ -29,12 +29,12 @@ options: Keras and GPUs.

**Prerequisites**: To work with Keras you, first of all, need
-[access](./../access/login.md) for the Taurus system, loaded
+[access](../access/login.md) to the Taurus system, loaded
Tensorflow module on ml partition, activated Python virtual environment. Basic knowledge about Python, SLURM system also required.

**Aim** of this page is to introduce users on how to start working with
-Keras and TensorFlow on the [HPC-DA](./../jobs_and_resources/hpcda.md)
+Keras and TensorFlow on the [HPC-DA](../jobs_and_resources/hpcda.md)
system - part of the TU Dresden HPC system.

There are three main options on how to work with Keras and Tensorflow on
@@ -46,8 +46,8 @@ environment. Please see the
system.

The information about the Jupyter notebook and the **JupyterHub** could
-be found [here](./jupyterhub.md). The use of
-Containers is described [here](./tensor_flow_container_on_hpcda.md).
+be found [here](../access/jupyterhub.md). The use of
+Containers is described [here](tensor_flow_container_on_hpcda.md).

Keras contains numerous implementations of commonly used neural-network building blocks such as layers,

diff --git a/doc.zih.tu-dresden.de/docs/software/py_torch.md b/doc.zih.tu-dresden.de/docs/software/py_torch.md
index 080a61c83fa1ed27c40162207526b67d08603c8d..5aa3c4618720f1de9290ffeceaa6ecac1d2135f9 100644
--- a/doc.zih.tu-dresden.de/docs/software/py_torch.md
+++ b/doc.zih.tu-dresden.de/docs/software/py_torch.md
@@ -17,9 +17,9 @@ Taurus system and basic knowledge about Python, Numpy and SLURM system.

There are numerous different possibilities of how to work with PyTorch on Taurus. Here we will consider two main methods.

-1\. The first option is using Jupyter notebook with HPC-DA nodes. The easiest way is by using [Jupyterhub](jupyterhub.md).
-It is a recommended way for beginners in PyTorch
-and users who are just starting their work with Taurus.
+1\. The first option is using a Jupyter notebook on HPC-DA nodes. The easiest way is by using
+[JupyterHub](../access/jupyterhub.md). It is the recommended way for beginners in PyTorch and users
+who are just starting their work with Taurus.

2\. The second way is using the Modules system and Python or conda virtual environment. See [the Python page](python.md) for the HPC-DA system.

@@ -33,7 +33,7 @@ Note: The information on working with the PyTorch using Containers could be foun

For working with PyTorch and python packages using virtual environments (kernels) is necessary.

-Creating and using your kernel (environment) has the benefit that you can install your preferred
+Creating and using your kernel (environment) has the benefit that you can install your preferred
python packages and use them in your notebooks.
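Building on that, one common way to make such an environment selectable as a kernel in JupyterHub is sketched below. This is an illustration only (see the JupyterHub page linked above for the authoritative steps); the environment path and the kernel name `envtest` are placeholders.

```Bash
source python-environments/envtest/bin/activate    # activate the (hypothetical) virtual environment
pip install ipykernel                              # kernel machinery inside the environment
python -m ipykernel install --user --name envtest --display-name "envtest (Python)"
# after restarting the notebook server, "envtest (Python)" appears in the kernel list
```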
A virtual environment is a cooperatively isolated runtime environment that allows Python users and @@ -97,7 +97,7 @@ which you can submit using *sbatch [options] <job_file_name>*. Below are examples of Jupyter notebooks with PyTorch models which you can run on ml nodes of HPC-DA. There are two ways how to work with the Jupyter notebook on HPC-DA system. You can use a -[remote Jupyter server](deep_learning.md) or [JupyterHub](jupyterhub.md). +[remote Jupyter server](deep_learning.md) or [JupyterHub](../access/jupyterhub.md). Jupyterhub is a simple and recommended way to use PyTorch. We are using Jupyterhub for our examples. @@ -105,17 +105,17 @@ Prepared examples of PyTorch models give you an understanding of how to work wit Jupyterhub and PyTorch models. It can be useful and instructive to start your acquaintance with PyTorch and HPC-DA system from these simple examples. -JupyterHub is available here: [https://taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter) +JupyterHub is available here: [taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter) After login, you can start a new session by clicking on the button. **Note:** Detailed guide (with pictures and instructions) how to run the Jupyterhub -you could find on [the page](jupyterhub.md). +you could find on [the page](../access/jupyterhub.md). Please choose the "IBM Power (ppc64le)". You need to download an example (prepared as jupyter notebook file) that already contains all you need for the start of the work. Please put the file into your previously created virtual environment in your working directory or -use the kernel for your notebook [see Jupyterhub page](jupyterhub.md). +use the kernel for your notebook [see Jupyterhub page](../access/jupyterhub.md). Note: You could work with simple examples in your home directory but according to [HPCStorageConcept2019](../data_lifecycle/hpc_storage_concept2019.md) please use **workspaces** @@ -132,7 +132,7 @@ virtual environment you could use the following command: unzip example_MNIST_Pytorch.zip Also, you could use kernels for all notebooks, not only for them which -placed in your virtual environment. See the [jupyterhub](jupyterhub.md) page. +placed in your virtual environment. See the [jupyterhub](../access/jupyterhub.md) page. Examples: @@ -147,7 +147,7 @@ for this kind of models. Recommended parameters for running this model are 1 GPU ### Running the model -Open [JupyterHub](jupyterhub.md) and follow instructions above. +Open [JupyterHub](../access/jupyterhub.md) and follow instructions above. In Jupyterhub documents are organized with tabs and a very versatile split-screen feature. On the left side of the screen, you can open your file. Use 'File-Open from Path' @@ -185,7 +185,7 @@ Recommended parameters for running this model are 1 GPU and 7 cores (28 thread). (example_Pytorch_image_recognition.zip) -Remember that for using [JupyterHub service](jupyterhub.md) +Remember that for using [JupyterHub service](../access/jupyterhub.md) for PyTorch you need to create and activate a virtual environment (kernel) with loaded essential modules (see "envtest" environment form the virtual environment example. @@ -225,7 +225,7 @@ model are **2 GPU** and 14 cores (56 thread). (example_PyTorch_parallel.zip) -Remember that for using [JupyterHub service](jupyterhub.md) +Remember that for using [JupyterHub service](../access/jupyterhub.md) for PyTorch you need to create and activate a virtual environment (kernel) with loaded essential modules. 
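Before starting with the example notebooks it can save time to check, from such an environment, that PyTorch actually sees the requested GPUs. A minimal sketch, assuming PyTorch is installed in your kernel environment and using the resource numbers recommended above:

```Bash
srun -p ml -N 1 -n 1 -c 14 --gres=gpu:2 --mem-per-cpu=5772 --time=01:00:00 --pty bash  # 2 GPUs, 14 cores as recommended above
module load modenv/ml
source python-environments/envtest/bin/activate    # hypothetical environment containing PyTorch
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"  # expected: True 2
```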
diff --git a/doc.zih.tu-dresden.de/docs/software/python.md b/doc.zih.tu-dresden.de/docs/software/python.md index 548ba169d58c6c9e6ae74c7a19d109a3ae2739d7..962184d3b6fbb49f27e6c526081976d1296e500f 100644 --- a/doc.zih.tu-dresden.de/docs/software/python.md +++ b/doc.zih.tu-dresden.de/docs/software/python.md @@ -12,10 +12,9 @@ Taurus system and basic knowledge about Python, Numpy and SLURM system. **Aim** of this page is to introduce users on how to start working with Python on the [HPC-DA](../jobs_and_resources/power9.md) system - part of the TU Dresden HPC system. -There are three main options on how to -work with Keras and Tensorflow on the HPC-DA: 1. Modules; 2. [JupyterNotebook](jupyterhub.md); -3.[Containers](containers.md). The main way is using the -[Modules system](modules.md) and Python virtual environment. +There are three main options on how to work with Keras and Tensorflow on the HPC-DA: 1. Modules; 2. +[JupyterNotebook](../access/jupyterhub.md); 3.[Containers](containers.md). The main way is using +the [Modules system](modules.md) and Python virtual environment. Note: You could work with simple examples in your home directory but according to [HPCStorageConcept2019](../data_lifecycle/hpc_storage_concept2019.md) please use **workspaces** @@ -42,63 +41,66 @@ vice versa! Prefer virtualenv whenever possible. This example shows how to start working with **Virtualenv** and Python virtual environment (using the module system) - srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash #Job submission in ml nodes with 1 gpu on 1 node. - - mkdir python-environments # Optional: Create folder. Please use Workspaces! - - module load modenv/ml # Changing the environment. Example output: The following have been reloaded with a version change: 1 modenv/scs5 => modenv/ml - ml av Python #Check the available modules with Python - module load Python #Load default Python. Example output: Module Python/3.7 4-GCCcore-8.3.0 with 7 dependencies loaded - which python #Check which python are you using - virtualenv --system-site-packages python-environments/envtest #Create virtual environment - source python-environments/envtest/bin/activate #Activate virtual environment. Example output: (envtest) bash-4.2$ - python #Start python - - from time import gmtime, strftime - print(strftime("%Y-%m-%d %H:%M:%S", gmtime())) #Example output: 2019-11-18 13:54:16 - deactivate #Leave the virtual environment - -The [virtualenv](https://virtualenv.pypa.io/en/latest/) Python module (Python 3) provides support -for creating virtual environments with their own sitedirectories, -optionally isolated from system site directories. Each -virtual environment has its own Python binary (which matches the version -of the binary that was used to create this environment) and can have its -own independent set of installed Python packages in its site -directories. This allows you to manage separate package installations -for different projects. It essentially allows us to create a virtual -isolated Python installation and install packages into that virtual -installation. When you switch projects, you can simply create a new -virtual environment and not have to worry about breaking the packages -installed in other environments. - -In your virtual environment, you can use packages from the (Complete -List of Modules)(SoftwareModulesList) or if you didn't find what you -need you can install required packages with the command: `pip install`. 
With the command -`pip freeze`, you can see a list of all installed packages and their versions. +```Bash +srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash #Job submission in ml nodes with 1 gpu on 1 node. + +mkdir python-environments # Optional: Create folder. Please use Workspaces! + +module load modenv/ml # Changing the environment. Example output: The following have been reloaded with a version change: 1 modenv/scs5 => modenv/ml +ml av Python #Check the available modules with Python +module load Python #Load default Python. Example output: Module Python/3.7 4-GCCcore-8.3.0 with 7 dependencies loaded +which python #Check which python are you using +virtualenv --system-site-packages python-environments/envtest #Create virtual environment +source python-environments/envtest/bin/activate #Activate virtual environment. Example output: (envtest) bash-4.2$ +python #Start python + +from time import gmtime, strftime +print(strftime("%Y-%m-%d %H:%M:%S", gmtime())) #Example output: 2019-11-18 13:54:16 +deactivate #Leave the virtual environment +``` + +The [virtualenv](https://virtualenv.pypa.io/en/latest/) Python module (Python 3) provides support +for creating virtual environments with their own sitedirectories, optionally isolated from system +site directories. Each virtual environment has its own Python binary (which matches the version of +the binary that was used to create this environment) and can have its own independent set of +installed Python packages in its site directories. This allows you to manage separate package +installations for different projects. It essentially allows us to create a virtual isolated Python +installation and install packages into that virtual installation. When you switch projects, you can +simply create a new virtual environment and not have to worry about breaking the packages installed +in other environments. + +In your virtual environment, you can use packages from the (Complete List of +Modules)(SoftwareModulesList) or if you didn't find what you need you can install required packages +with the command: `pip install`. With the command `pip freeze`, you can see a list of all installed +packages and their versions. This example shows how to start working with **Conda** and virtual environment (with using module system) - srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash # Job submission in ml nodes with 1 gpu on 1 node. +```Bash +srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash # Job submission in ml nodes with 1 gpu on 1 node. 
- module load modenv/ml - mkdir conda-virtual-environments #create a folder - cd conda-virtual-environments #go to folder - which python #check which python are you using - module load PythonAnaconda/3.6 #load Anaconda module - which python #check which python are you using now +module load modenv/ml +mkdir conda-virtual-environments #create a folder +cd conda-virtual-environments #go to folder +which python #check which python are you using +module load PythonAnaconda/3.6 #load Anaconda module +which python #check which python are you using now - conda create -n conda-testenv python=3.6 #create virtual environment with the name conda-testenv and Python version 3.6 - conda activate conda-testenv #activate conda-testenv virtual environment +conda create -n conda-testenv python=3.6 #create virtual environment with the name conda-testenv and Python version 3.6 +conda activate conda-testenv #activate conda-testenv virtual environment - conda deactivate #Leave the virtual environment +conda deactivate #Leave the virtual environment +``` You can control where a conda environment lives by providing a path to a target directory when creating the environment. For example, the following command will create a new environment in a workspace located in `scratch` - conda create --prefix /scratch/ws/<name_of_your_workspace>/conda-virtual-environment/<name_of_your_environment> +```Bash +conda create --prefix /scratch/ws/<name_of_your_workspace>/conda-virtual-environment/<name_of_your_environment> +``` Please pay attention, using srun directly on the shell will lead to blocking and launch an @@ -117,10 +119,9 @@ course with machine learning. There are two general options on how to work Jupyter notebooks using HPC. -On Taurus, there is [JupyterHub](jupyterhub.md) where you can simply run your Jupyter notebook -on HPC nodes. Also, you can run a remote jupyter server within a sbatch -GPU job and with the modules and packages you need. The manual server -setup you can find [here](deep_learning.md). +On Taurus, there is [JupyterHub](../access/jupyterhub.md) where you can simply run your Jupyter +notebook on HPC nodes. Also, you can run a remote jupyter server within a sbatch GPU job and with +the modules and packages you need. The manual server setup you can find [here](deep_learning.md). With Jupyterhub you can work with general data analytics tools. This is the recommended way to start working with diff --git a/doc.zih.tu-dresden.de/docs/software/tensor_flow.md b/doc.zih.tu-dresden.de/docs/software/tensor_flow.md index 1bae1ed4139b969d1956bb4b5c1725418d269540..e912c9260a4416b7211b2e25a3fc744099cdbb6d 100644 --- a/doc.zih.tu-dresden.de/docs/software/tensor_flow.md +++ b/doc.zih.tu-dresden.de/docs/software/tensor_flow.md @@ -32,7 +32,7 @@ Python virtual environment. Please see the next chapters and the [Python page](p HPC-DA system. The information about the Jupyter notebook and the **JupyterHub** could -be found [here](jupyterhub.md). The use of +be found [here](../access/jupyterhub.md). The use of Containers is described [here](tensor_flow_container_on_hpcda.md). 
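For the module route mentioned above, a rough sketch could look as follows; module names and versions differ between module environments, so check with `ml av` first rather than relying on the exact names used here.

```Bash
srun -p ml -N 1 -n 1 -c 7 --mem-per-cpu=5772 --gres=gpu:1 --time=01:00:00 --pty bash  # interactive job on an ml node
module load modenv/ml         # module environment of the ml partition
ml av TensorFlow              # list the TensorFlow modules that are actually available
module load TensorFlow        # load the default version (name and version may differ)
python -c "import tensorflow as tf; print(tf.__version__)"   # quick import check
```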
On Taurus, there exist different module environments, each containing a set

diff --git a/doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md b/doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md
index 9ed4195af6281224c2e1cd979b1092b6b06966c7..42f7e699358beeddd70fc839c574eba8be49dcce 100644
--- a/doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md
@@ -145,12 +145,12 @@ with jupyterhub and tensorflow models. It can be useful and instructive to
start your acquaintance with Tensorflow and HPC-DA system from these simple examples.

-You can use a [remote Jupyter server](jupyterhub.md). For simplicity, we
+You can use a [remote Jupyter server](deep_learning.md). For simplicity, we
will recommend using Jupyterhub for our examples.

JupyterHub is available [here](https://taurus.hrsk.tu-dresden.de/jupyter)

-Please check updates and details [JupyterHub](jupyterhub.md). However,
+Please check updates and details at [JupyterHub](../access/jupyterhub.md). However,
the general pipeline can be briefly explained as follows.

After logging, you can start a new session and configure it. There are
@@ -184,7 +184,7 @@ created virtual environment you could use the following command:
```

Also, you could use kernels for all notebooks, not only for them which placed
-in your virtual environment. See the [jupyterhub](jupyterhub.md) page.
+in your virtual environment. See the [JupyterHub](../access/jupyterhub.md) page.

### Examples:

diff --git a/doc.zih.tu-dresden.de/docs/software/virtual_desktops.md b/doc.zih.tu-dresden.de/docs/software/virtual_desktops.md
index bc5db15748e3dcfdbdbc8afe858a1e6f1be9c390..123c323b2d3acc9c24863ca203179f8338da4dce 100644
--- a/doc.zih.tu-dresden.de/docs/software/virtual_desktops.md
+++ b/doc.zih.tu-dresden.de/docs/software/virtual_desktops.md
@@ -15,7 +15,7 @@ Use WebVNC or NICE DCV to run GUI applications on HPC resources.

<span class="twiki-macro TABLE" columnwidths="10%,45%,45%"></span> \| **step 1** \| Navigate to \<a href="<https://taurus.hrsk.tu-dresden.de>" target="\_blank"><https://taurus.hrsk.tu-dresden.de>\</a>. There is our
-[JupyterHub](../software/jupyterhub.md) instance. \|\| \| **step 2** \|
+[JupyterHub](../access/jupyterhub.md) instance. \|\| \| **step 2** \|
Click on the "advanced" tab and choose a preset: \|\| | | | |

diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index 35505da1793e96b4d81ec16ab161f71cd348af40..59f841162b9d7606323be177eec434e0beb254ee 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -14,6 +14,9 @@ nav:
- Login: access/login.md
- Security Restrictions: access/security_restrictions.md
- SSH with Putty: access/ssh_mit_putty.md
+ - JupyterHub:
+ - Overview: access/jupyterhub.md
+ - JupyterHub for Teaching: access/jupyterhub_for_teaching.md
- Transfer of Data:
- Overview: data_transfer/data_moving.md
- Data Mover: data_transfer/data_mover.md
@@ -24,9 +27,6 @@ nav:
- Modules: software/modules.md
- Runtime Environment: software/runtime_environment.md
- Custom EasyBuild Modules: software/custom_easy_build_environment.md
- - JupyterHub:
- - Overview: software/jupyterhub.md
- - JupyterHub for Teaching: software/jupyterhub_for_teaching.md
- Containers:
- Singularity: software/containers.md
- Singularity Recicpe Hints: software/singularity_recipe_hints.md