Commit 58fef819 authored by Taras Lazariv

Merge branch 'reduce-number-of-forbidden-pattern-matches' into 'preview'

Reduce the number of forbidden pattern matches further

See merge request !420
parents 2b2ffbf2 e6720d95
Showing changes with 57 additions and 52 deletions
# JupyterHub
With our JupyterHub service we offer you a quick and easy way to work with Jupyter notebooks on ZIH
systems. This page covers starting and stopping JupyterHub sessions, error handling and customizing
the environment.
We also provide comprehensive documentation on how to use
@@ -21,7 +21,8 @@ cannot give extensive support in every case.
!!! note
This service is only available for users with an active HPC project.
See [Application for Login and Resources](../application/overview.md), if you need to apply for
an HPC project.
JupyterHub is available at
[https://taurus.hrsk.tu-dresden.de/jupyter](https://taurus.hrsk.tu-dresden.de/jupyter).
@@ -100,7 +101,7 @@ running the code. We currently offer one for Python, C++, MATLAB and R.
## Stop a Session
It is good practice to stop your session once your work is done. This releases resources for other
users, and less of your quota is charged. If you just log out or close the window, your server continues
running and **will not stop** until the Slurm job runtime hits the limit (usually 8 hours).
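If you are unsure whether such a leftover notebook server is still running, you can check its Slurm
job from a shell and cancel it if necessary (a minimal sketch; `marie` and `<jobid>` are
placeholders):

```console
marie@login$ squeue --user=marie   # list your running jobs, including the notebook server job
marie@login$ scancel <jobid>       # cancel the corresponding Slurm job if it should not run any longer
```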
@@ -147,8 +148,8 @@ Useful pages for valid batch system parameters:
If the connection to your notebook server unexpectedly breaks, you will get this error message.
Sometimes your notebook server might hit a batch system or hardware limit and gets killed. Then
usually the log file of the corresponding batch job might contain useful information. These log
files are located in your `home` directory and have the name `jupyter-session-<jobid>.log`.
## Advanced Tips
@@ -309,4 +310,4 @@ You can switch kernels of existing notebooks in the kernel menu:
You now have the option to preload modules from the [module system](../software/modules.md).
Select multiple modules that will be preloaded before your notebook server starts. The list of
available modules depends on the module environment you want to start the session in (`scs5` or
`ml`). The right module environment will be chosen by your selected partition.
# JupyterHub for Teaching
On this page, we want to introduce some useful features if you want to use JupyterHub for
teaching.
!!! note
@@ -9,23 +9,21 @@ want to use JupyterHub for teaching.
Please be aware of the following notes:
- ZIH systems operate at a lower availability level than your usual Enterprise Cloud VM. There can
always be downtimes, e.g. of the filesystems or the batch system.
- Scheduled downtimes are announced by email. Please plan your courses accordingly.
- Access to HPC resources is handled through projects. See your course as a project. Projects need
to be registered beforehand (more info on the page [Access](../application/overview.md)).
- Don't forget to [add your users](../application/project_management.md#manage-project-members-dis-enable)
(e.g. students or tutors) to your project.
- It might be a good idea to [request a reservation](../jobs_and_resources/overview.md#exclusive-reservation-of-hardware)
of part of the compute resources for your project/course to avoid unnecessary waiting times in
the batch system queue.
## Clone a Repository With a Link
This feature is based on [nbgitpuller](https://github.com/jupyterhub/nbgitpuller). Further information
can be found in the [external documentation about nbgitpuller](https://jupyterhub.github.io/nbgitpuller/).
This extension for Jupyter notebooks can clone any public git repository into the user's work
directory. It offers a quick way to distribute notebooks and other material to your students.
@@ -50,14 +48,14 @@ The following parameters are available:
|---|---|
|`repo` | path to git repository|
|`branch` | branch in the repository to pull from, default: `master`|
|`urlpath` | URL to redirect the user to a certain file, [more info about parameter urlpath](https://jupyterhub.github.io/nbgitpuller/topic/url-options.html#urlpath)|
|`depth` | clone only a certain number of latest commits, not recommended|
This [link
generator](https://jupyterhub.github.io/nbgitpuller/link?hub=https://taurus.hrsk.tu-dresden.de/jupyter/)
might help to create those links.
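A generated link then typically follows this pattern (a sketch; the repository URL, branch and
notebook path are placeholders for your own course material):

```
https://taurus.hrsk.tu-dresden.de/jupyter/hub/user-redirect/git-pull?repo=https://github.com/<org>/<course-repo>&branch=master&urlpath=tree/<course-repo>/exercise1.ipynb
```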
## Spawn Options Pass-through with URL Parameters
The spawn form now offers a quick start mode by passing URL parameters.
......
@@ -36,15 +36,16 @@ Any project have:
## Third step: Hardware
![picture 4: Hardware >](misc/request_step3_machines.png "Hardware"){loading=lazy width=300 style="float:right"}
This step inquires the required hardware. The
[hardware specifications](../jobs_and_resources/hardware_overview.md) might help you to estimate,
e.g., the compute time.
Please fill in the total computing time you expect in the project runtime. The compute time is
given in core-hours (CPU/h); this refers to the 'virtual' cores for nodes with hyperthreading.
If you require GPUs, then this is given as GPU hours (GPU/h). Please add 6 CPU hours per
GPU hour in your application.
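For example, if you plan to use about 1,000 GPU hours, add another 6,000 CPU hours on top of the
compute time of your CPU-only jobs (the numbers are purely illustrative).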
The project home is a shared storage in your project. Here you exchange data or install software
for your project group in userspace. The directory is not intended for active calculations; for this,
the scratch is available.
......
@@ -64,7 +64,8 @@ True
### Python Virtual Environments
[Virtual environments](../software/python_virtual_environments.md) allow users to install
additional Python packages and create an isolated
runtime environment. We recommend using `virtualenv` for this purpose.
```console
......
@@ -58,10 +58,10 @@ For MPI-parallel jobs one typically allocates one core per task that has to be s
### Multiple Programs Running Simultaneously in a Job
In this short example, our goal is to run four instances of a program concurrently in a **single**
batch script. Of course, we could also start a batch script four times with `sbatch`, but this is not
what we want to do here. However, you can also find an example of
[how to run GPU programs simultaneously in a single job](#running-multiple-gpu-applications-simultaneously-in-a-batch-job)
below.
!!! example " "
@@ -355,4 +355,4 @@ file) that will be executed one after each other with different CPU numbers:
## Array-Job with Afterok-Dependency and Datamover Usage
This part is under construction.
@@ -24,7 +24,8 @@ marie@compute$ module spider <software_name>
Refer to the section covering [modules](modules.md) for further information on the modules system.
Additional software or special versions of [individual modules](custom_easy_build_environment.md)
can be installed individually by each user. If possible, the use of
[virtual environments](python_virtual_environments.md) is
recommended (e.g. for Python). Likewise, software can be used within [containers](containers.md).
For the transfer of larger amounts of data into and within the system, the
......
@@ -270,9 +270,9 @@ This GUI guides through the configuration process and as result a configuration
automatically according to the GUI input. If you are more familiar with using OmniOpt later on,
this configuration file can be modified directly without using the GUI.
A screenshot of
[the GUI](https://imageseg.scads.ai/omnioptgui/?maxevalserror=5&mem_per_worker=1000&number_of_parameters=3&param_0_values=10%2C50%2C100&param_1_values=8%2C16%2C32&param_2_values=10%2C15%2C30&param_0_name=out-layer1&param_1_name=batchsize&param_2_name=batchsize&account=&projectname=mnist_fashion_optimization_set_1&partition=alpha&searchtype=tpe.suggest&param_0_type=hp.choice&param_1_type=hp.choice&param_2_type=hp.choice&max_evals=1000&objective_program=bash%20%3C%2Fpath%2Fto%2Fwrapper-script%2Frun-mnist-fashion.sh%3E%20--out-layer1%3D%28%24x_0%29%20--batchsize%3D%28%24x_1%29%20--epochs%3D%28%24x_2%29&workdir=%3C%2Fscratch%2Fws%2Fomniopt-workdir%2F%3E),
including a proper configuration for the MNIST fashion example, is shown below.
Please modify the paths for `objective program` and `workdir` according to your needs.
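Decoded from the pre-filled link above, the `objective program` field corresponds to a call of the
following form, where the wrapper script path is a placeholder to be replaced:

```bash
bash </path/to/wrapper-script/run-mnist-fashion.sh> --out-layer1=($x_0) --batchsize=($x_1) --epochs=($x_2)
```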
......
@@ -20,8 +20,8 @@ To collect performance events, PAPI provides two APIs, the *high-level* and *low
The high-level API provides the ability to record performance events inside instrumented regions of
serial, multi-processing (MPI, SHMEM) and thread (OpenMP, Pthreads) parallel applications. It is
designed for simplicity, not flexibility. More details can be found in the
[PAPI wiki High-Level API description](https://bitbucket.org/icl/papi/wiki/PAPI-HL.md).
The following code example shows the use of the high-level API by marking a code section.
@@ -86,19 +86,19 @@ more output files in JSON format.
### Low-Level API
The low-level API manages hardware events in user-defined groups called Event Sets. It is meant for
experienced application programmers and tool developers wanting fine-grained measurement and
control of the PAPI interface. It provides access to both PAPI preset and native events, and
supports all installed components. The PAPI wiki also contains a page with more details on the
[low-level API](https://bitbucket.org/icl/papi/wiki/PAPI-LL.md).
## Usage on ZIH Systems
Before you start a PAPI measurement, check which events are available on the desired architecture.
For this purpose, PAPI offers the tools `papi_avail` and `papi_native_avail`. If you want to measure
multiple events, please check which events can be measured concurrently using the tool
`papi_event_chooser`. The PAPI wiki contains more details on
[the PAPI tools](https://bitbucket.org/icl/papi/wiki/PAPI-Overview.md#markdown-header-papi-utilities).
!!! hint
@@ -133,8 +133,7 @@ compile your application against the PAPI library.
!!! hint
The PAPI modules on ZIH systems are only installed with the default `perf_event` component. If you
want to measure, e.g., GPU events, you have to install your own PAPI. Please see the
[external instructions on how to download and install PAPI](https://bitbucket.org/icl/papi/wiki/Downloading-and-Installing-PAPI.md).
To install PAPI with additional components, you have to specify them during configure as
described for the [Installation of Components](https://bitbucket.org/icl/papi/wiki/PAPI-Overview.md#markdown-header-components).
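A rough sketch of such a build follows; the install prefix, the component name, and the source
layout are assumptions, not a verified recipe for ZIH systems:

```console
marie@login$ cd papi/src
marie@login$ ./configure --prefix=/path/to/papi-install --with-components="cuda"
marie@login$ make && make install
```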
@@ -93,8 +93,6 @@ are in the virtual environment. You can deactivate the conda environment as foll
(conda-env) marie@compute$ conda deactivate #Leave the virtual environment
```
??? example
This is an example on partition Alpha. The example creates a virtual environment, and installs
......
@@ -48,7 +48,7 @@ doc.zih.tu-dresden.de/docs/contrib/content_rules.md
i \(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\)-\?\(interactive\)\?[^a-z]*partition
Give hints in the link text. Words such as \"here\" or \"this link\" are meaningless.
doc.zih.tu-dresden.de/docs/contrib/content_rules.md
i \[\s\?\(documentation\|here\|more info\|this \(link\|page\|subsection\)\|slides\?\|manpage\)\s\?\]
Use \"workspace\" instead of \"work space\" or \"work-space\".
doc.zih.tu-dresden.de/docs/contrib/content_rules.md
i work[ -]\+space"
......
@@ -65,6 +65,8 @@ DockerHub
dockerized
dotfile
dotfiles
downtime
downtimes
EasyBuild
ecryptfs
engl
@@ -142,6 +144,7 @@ Itanium
jobqueue
jpg
jss
jupyter
Jupyter
JupyterHub
JupyterLab
@@ -194,6 +197,7 @@ multithreaded
Multithreading
NAMD
natively
nbgitpuller
nbsp
NCCL
Neptun
@@ -260,6 +264,8 @@ pytorch
PyTorch
Quantum
queue
quickstart
Quickstart
randint
reachability
README
......