diff --git a/doc.zih.tu-dresden.de/docs/access/jupyterhub.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
index 3a9751596fb9b3d6815fcb078d13782598e1cf07..e4d81db3f0dadbe46add42a586035bb8f0c70845 100644
--- a/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
+++ b/doc.zih.tu-dresden.de/docs/access/jupyterhub.md
@@ -261,156 +261,24 @@ With these **standard environments** we have tried to integrate a set of compati
 \*\* R is loaded from the [module system](../software/modules.md)
 
-### Creating and Using a Custom Environment
-!!! info
-
-    Interactive code interpreters which are used by Jupyter notebooks are called
-    *kernels*. Creating and using your own kernel has the benefit that you can
-    install your own preferred Python packages and use them in your notebooks.
-
-We currently have two different architectures at ZIH systems.
-Build your kernel environment on the **same architecture** that you want to use
-later on with the kernel. In the examples below, we use the name
-"my-kernel" for our user kernel. We recommend to prefix your kernels
-with keywords like `haswell`, `ml`, `romeo`, `venv`, `conda`. This way, you
-can later recognize easier how you built the kernel and on which hardware it
-will work. Depending on that hardware, allocate resources:
-
-=== "Nodes with x86_64 (Intel) CPU"
-    Use **one srun command** of these:
-
-    ```console
-    maria@login$ srun --partition=haswell64 --pty --ntasks=1 --cpus-per-task=2 \
-     --mem-per-cpu=2541 --time=08:00:00 bash -l
-    maria@login$ srun --partition=gpu2 --pty --ntasks=1 --cpus-per-task=2 \
-     --mem-per-cpu=2541 --time=08:00:00 bash -l
-    ```
-=== "Nodes with x86_64 (AMD) CPU"
-    Use **one srun command** of these:
-
-    ```console
-    maria@login$ srun --partition=romeo --pty --ntasks=1 --cpus-per-task=3 \
-     --mem-per-cpu=1972 --time=08:00:00 bash -l
-    maria@login$ srun --partition=alpha --gres=gpu:1 --pty --ntasks=1 \
-     --cpus-per-task=6 --mem-per-cpu=10312 --time=08:00:00 bash -l
-    ```
-=== "Nodes with ppc64le CPU"
-    ```console
-    maria@login$ srun --pty --partition=ml --ntasks=1 --cpus-per-task=2 --mem-per-cpu=1443 \
-    --time=08:00:00 bash -l
-    ```
-
-When creating a virtual environment in your home directory, you got to decide
-to either use "Python virtualenv" or "conda environment".
-
-!!! note
-    Please keep in mind that Python virtualenv is the preferred way to create a Python
-    virtual environment.
-    For working with conda virtual environments, it may be necessary to configure your shell
-    as described in [Python virtual environments](../software/python_virtual_environments.md#conda-virtual-environment)
-
-#### Python Virtualenv
-
-```console
-marie@compute$ module load Python/3.8.6-GCCcore-10.2.0
-Module Python/3.8.6-GCCcore-10.2.0 and 11 dependencies loaded.
-marie@compute$ mkdir user-kernel # please use workspaces!
-marie@compute$ cd user-kernel
-marie@compute$ virtualenv --system-site-packages my-kernel
-created virtual environment CPython3.8.6.final.0-64 in 5985ms
-  creator CPython3Posix(dest=[...]/my-kernel, clear=False, global=True)
-  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=[...])
-    added seed packages: pip==20.2.3, setuptools==50.3.0, wheel==0.35.1
-  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
-marie@compute$ source my-kernel/bin/activate
-(my-kernel) marie@compute$ pip install ipykernel
-Collecting ipykernel
-[...]
-Successfully installed [...] ipykernel-6.9.1 ipython-8.0.1 [...]
-(my-kernel) marie@compute$ pip install --upgrade pip
-(my-kernel) marie@compute$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
-Installed kernelspec my-kernel in .../.local/share/jupyter/kernels/my-kernel
-(my-kernel) marie@compute$ pip install [...] # now install additional packages for your notebooks
-(my-kernel) marie@compute$ deactivate
-```
-
-!!! warning
-
-    Depending on the Python module you have loaded for creating your virtual environment, you should
-    select the apropriate [Standard environment](#standard-environments). For example, you could
-    select `scs5_gcccore-10.2.0_python-3.8.6`, when you want to use `my-kernel`. Furthermore,
-    ensure, that you pre-load the same modules via [Spawner Options](#start-a-session) that you used
-    for creating your kernel.
-
-#### Conda Environment
-
-Load the needed module depending on partition architecture:
-
-=== "x86 nodes (e.g. partition `haswell`, `gpu2`)"
-    ```console
-    marie@compute$ module load Anaconda3
-    ```
-=== "PowerPC nodes (partition `ml`)"
-    ```console
-    marie@ml$ module load PythonAnaconda
-    ```
-
-!!! hint
-    For working with conda virtual environments, it may be necessary to configure your shell as
-    described in
-    [Python virtual environments](../software/python_virtual_environments.md#conda-virtual-environment).
-
-Continue with environment creation, package installation and kernel
-registration:
-
-```console
-marie@compute$ mkdir user-kernel # please use workspaces!
-marie@compute$ conda create --prefix $HOME/user-kernel/my-kernel python=3.8.6
-Collecting package metadata: done
-Solving environment: done
-[...]
-marie@compute$ conda activate $HOME/user-kernel/my-kernel
-marie@compute$ conda install ipykernel
-Collecting package metadata: done
-Solving environment: done
-[...]
-marie@compute$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
-Installed kernelspec my-kernel in [...]
-marie@compute$ conda install [..] # now install additional packages for your notebooks
-marie@compute$ conda deactivate
-```
-
-Now you can start a new session and your kernel should be available.
-
-**JupyterLab**: Your kernels are listed on the launcher page:
-
-
-{: align="center"}
-
-You can switch kernels of existing notebooks in the menu:
-
-
-{: align="center"}
-
-**Classic Jupyter notebook**: Your kernel is listed in the New menu:
-
-
-{: align="center"}
-
-You can switch kernels of existing notebooks in the kernel menu:
-
-
-{: align="center"}
-
-!!! note
-    Both python venv and conda virtual environments will be mention in the same
-    list.
-
 ### Loading Modules
 
 You have now the option to preload modules from the [module system](../software/modules.md).
 Select multiple modules that will be preloaded before your notebook server
 starts. The list of available modules depends on the module environment you want
-to start the session in (`scs5` or `ml`). The right module environment will be
+to start the session in (`scs5`, `hiera` or `ml`). The right module environment will be
 chosen by your selected partition.
+
+### Custom Kernels
+
+After launching Jupyter**Lab**, you will notice several boxes with icons in the `Launcher`.
+Each box represents a so-called *kernel*. These are not to be confused with the operating
+system kernel, but they similarly provide the basic functionality for running your use
+cases, e.g. Python or R.
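+
+To see which kernels are registered for your account, you can also list them from a shell.
+This is a minimal sketch; it assumes a shell in which Jupyter is available (for example, an
+environment with `ipykernel` installed), and the kernel names and paths shown are only
+illustrative:
+
+```console
+marie@compute$ jupyter kernelspec list
+Available kernels:
+  python3      [...]/share/jupyter/kernels/python3
+  my-kernel    [...]/.local/share/jupyter/kernels/my-kernel
+```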
+
+You can find further documentation on creating your own kernels in
+[Creating and Using a Custom Environment for JupyterHub](jupyterhub_custom_environments.md).
+
diff --git a/doc.zih.tu-dresden.de/docs/access/jupyterhub_custom_environments.md b/doc.zih.tu-dresden.de/docs/access/jupyterhub_custom_environments.md
new file mode 100644
index 0000000000000000000000000000000000000000..8fdd3dc1ceb86ddce40a409c49367fef70018d61
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/access/jupyterhub_custom_environments.md
@@ -0,0 +1,147 @@
+# Creating and Using a Custom Environment for JupyterHub
+
+!!! info
+
+    Interactive code interpreters used by Jupyter notebooks are called
+    *kernels*. Creating and using your own kernel has the benefit that you can
+    install your own preferred Python packages and use them in your notebooks.
+
+## Preliminary Steps
+
+We currently have two different CPU architectures at ZIH systems.
+Build your kernel environment on the **same architecture** that you want to use
+later on with the kernel. In the examples below, we use the name
+"my-kernel" for our user kernel. We recommend prefixing your kernels
+with keywords like `haswell`, `ml`, `romeo`, `venv`, `conda`. This way, you
+can later recognize more easily how you built the kernel and on which hardware
+it will work. Depending on that hardware, allocate resources:
+
+=== "Nodes with x86_64 (Intel) CPU"
+    Use **one** of these `srun` commands:
+
+    ```console
+    marie@login$ srun --partition=haswell64 --pty --ntasks=1 --cpus-per-task=2 \
+     --mem-per-cpu=2541 --time=08:00:00 bash -l
+    marie@login$ srun --partition=gpu2 --pty --ntasks=1 --cpus-per-task=2 \
+     --mem-per-cpu=2541 --time=08:00:00 bash -l
+    ```
+=== "Nodes with x86_64 (AMD) CPU"
+    Use **one** of these `srun` commands:
+
+    ```console
+    marie@login$ srun --partition=romeo --pty --ntasks=1 --cpus-per-task=3 \
+     --mem-per-cpu=1972 --time=08:00:00 bash -l
+    marie@login$ srun --partition=alpha --gres=gpu:1 --pty --ntasks=1 \
+     --cpus-per-task=6 --mem-per-cpu=10312 --time=08:00:00 bash -l
+    ```
+=== "Nodes with ppc64le CPU"
+    ```console
+    marie@login$ srun --pty --partition=ml --ntasks=1 --cpus-per-task=2 --mem-per-cpu=1443 \
+    --time=08:00:00 bash -l
+    ```
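+
+Before building the environment, you can double-check that the allocated node has the
+architecture you intended. This is just an illustrative check; `uname -m` prints `x86_64`
+on the Intel and AMD partitions and `ppc64le` on the partition `ml`:
+
+```console
+marie@compute$ uname -m
+x86_64
+```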
+
+When creating a virtual environment in your home directory, you have to decide
+whether to use a Python virtualenv or a conda environment.
+
+!!! note
+    Please keep in mind that Python virtualenv is the preferred way to create a Python
+    virtual environment.
+    For working with conda virtual environments, it may be necessary to configure your shell
+    as described in
+    [Python virtual environments](../software/python_virtual_environments.md#conda-virtual-environment).
+
+## Python Virtualenv
+
+```console
+marie@compute$ module load Python/3.8.6-GCCcore-10.2.0
+Module Python/3.8.6-GCCcore-10.2.0 and 11 dependencies loaded.
+marie@compute$ mkdir user-kernel # please use workspaces!
+marie@compute$ cd user-kernel
+marie@compute$ virtualenv --system-site-packages my-kernel
+created virtual environment CPython3.8.6.final.0-64 in 5985ms
+  creator CPython3Posix(dest=[...]/my-kernel, clear=False, global=True)
+  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=[...])
+    added seed packages: pip==20.2.3, setuptools==50.3.0, wheel==0.35.1
+  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
+marie@compute$ source my-kernel/bin/activate
+(my-kernel) marie@compute$ pip install ipykernel
+Collecting ipykernel
+[...]
+Successfully installed [...] ipykernel-6.9.1 ipython-8.0.1 [...]
+(my-kernel) marie@compute$ pip install --upgrade pip
+(my-kernel) marie@compute$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
+Installed kernelspec my-kernel in .../.local/share/jupyter/kernels/my-kernel
+(my-kernel) marie@compute$ pip install [...] # now install additional packages for your notebooks
+(my-kernel) marie@compute$ deactivate
+```
+
+!!! warning
+
+    Depending on the Python module you have loaded for creating your virtual environment, you should
+    select the appropriate [standard environment](jupyterhub.md#standard-environments). For example,
+    you could select `scs5_gcccore-10.2.0_python-3.8.6` when you want to use `my-kernel`. Furthermore,
+    ensure that you preload the same modules via the [Spawner Options](jupyterhub.md#start-a-session)
+    that you used for creating your kernel.
+
+## Conda Environment
+
+Load the needed module depending on the partition architecture:
+
+=== "x86 nodes (e.g. partition `haswell`, `gpu2`)"
+    ```console
+    marie@compute$ module load Anaconda3
+    ```
+=== "PowerPC nodes (partition `ml`)"
+    ```console
+    marie@ml$ module load PythonAnaconda
+    ```
+
+!!! hint
+    For working with conda virtual environments, it may be necessary to configure your shell as
+    described in
+    [Python virtual environments](../software/python_virtual_environments.md#conda-virtual-environment).
+
+Continue with environment creation, package installation and kernel
+registration:
+
+```console
+marie@compute$ mkdir user-kernel # please use workspaces!
+marie@compute$ conda create --prefix $HOME/user-kernel/my-kernel python=3.8.6
+Collecting package metadata: done
+Solving environment: done
+[...]
+marie@compute$ conda activate $HOME/user-kernel/my-kernel
+marie@compute$ conda install ipykernel
+Collecting package metadata: done
+Solving environment: done
+[...]
+marie@compute$ python -m ipykernel install --user --name my-kernel --display-name="my kernel"
+Installed kernelspec my-kernel in [...]
+marie@compute$ conda install [...] # now install additional packages for your notebooks
+marie@compute$ conda deactivate
+```
+
+Now you can start a new session and your kernel should be available.
+
+**JupyterLab**: Your kernels are listed on the launcher page:
+
+
+{: align="center"}
+
+You can switch kernels of existing notebooks in the menu:
+
+
+{: align="center"}
+
+**Classic Jupyter notebook**: Your kernel is listed in the New menu:
+
+
+{: align="center"}
+
+You can switch kernels of existing notebooks in the kernel menu:
+
+
+{: align="center"}
+
+!!! note
+    Both Python virtualenv and conda environments show up as kernels in the same list.
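+
+If you do not need a kernel anymore, you can remove it again. The following is a sketch that
+assumes the kernel was registered as `my-kernel` and the environment was created in
+`user-kernel/my-kernel` in your home directory, as in the examples above; double-check both
+paths before deleting anything:
+
+```console
+marie@compute$ rm -rf ~/.local/share/jupyter/kernels/my-kernel   # unregister the kernel from Jupyter
+marie@compute$ rm -rf ~/user-kernel/my-kernel                    # delete the environment itself
+```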