processing extremely large data sets. Moreover, it is a perfect platform for
data-intensive and compute-intensive applications and has extensive capabilities for energy
measurement and performance monitoring. It therefore provides ideal conditions to achieve the
ambitious research goals of the users and the ZIH.
The HPC system, redesigned in December 2023, consists of five homogeneous clusters with their own
[Slurm](../jobs_and_resources/slurm.md) instances and cluster-specific
[login nodes](hardware_overview.md#login-nodes). The clusters share one
[filesystem](../data_lifecycle/file_systems.md), which enables users to easily switch
between [the components](hardware_overview.md), each specialized for different application
scenarios.
| Name | Description | Year | DNS |
| --- | --- | --- | --- |
| **Barnard** | CPU cluster | 2023 | `n[1001-1630].barnard.hpc.tu-dresden.de` |
| **Romeo** | CPU cluster | 2020 | `i[8001-8190].romeo.hpc.tu-dresden.de` |
| **Julia** | single SMP system | 2021 | `smp8.julia.hpc.tu-dresden.de` |
| **Power** | IBM Power/GPU system | 2018 | `ml[1-29].power9.hpc.tu-dresden.de` |
## Selection of Suitable Hardware
The five clusters [`barnard`](../jobs_and_resources/barnard.md),
[`alpha`](../jobs_and_resources/alpha_centauri.md), [`romeo`](../jobs_and_resources/romeo.md),
[`power`](../jobs_and_resources/power9.md) and [`julia`](../jobs_and_resources/julia.md) differ,
among other aspects, in the number of nodes, cores per node, GPUs, and memory. The particular
[characteristics](hardware_overview.md) qualify them for different applications.
### Which cluster do I need?
The majority of the basic tasks can be executed on the conventional nodes like on `barnard`. When
logging in to ZIH systems, you are placed on a login node where you can
[manage your data life cycle](../data_lifecycle/overview.md), set up experiments, execute short
tests, and compile moderate projects. The login nodes cannot be used for real experiments and
computations. Long and extensive computational work and experiments have to be encapsulated into
so-called **jobs** and scheduled to the compute nodes.

There is no such thing as a free lunch at ZIH systems. Since compute nodes are operated in
multi-user mode by default, jobs of several users can run at the same time on the very same node,
sharing resources like memory (but not CPU). On the other hand, a higher throughput can be achieved
by smaller jobs. Thus, restrictions w.r.t. [memory](#memory-limits) and
[runtime limits](#runtime-limits) have to be respected when submitting jobs.
An overview of the clusters and their limits:

<!-- partitions_and_limits_table -->
| Partition | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per Core [in MB] | GPUs per Node |
|---|---|---|---|---|---|---|---|
{: summary="Partitions and limits table" align="bottom"}

The following questions may help to decide which cluster to use:

- Is my application an [interactive or a batch job](../jobs_and_resources/slurm.md)?
  Note that using `srun` directly on the shell will block and launch an interactive job
  (see the sketch below).
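A minimal sketch of launching an interactive job with `srun` (all option values are placeholders,
not recommendations):

```console
marie@login$ srun --ntasks=1 --cpus-per-task=4 --time=01:00:00 --pty bash -l
```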
Apart from short test runs, it is recommended to encapsulate your experiments and computational
tasks into batch jobs and submit them to the batch system. For that, you can conveniently put the
parameters directly into the job file which you can submit using `sbatch [options] <job file>`.
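The following job file is only a hedged sketch: the resource values, the module name, and the
application binary are placeholders that you need to adapt to your use case.

```bash
#!/bin/bash

#SBATCH --ntasks=1               # number of tasks (processes)
#SBATCH --cpus-per-task=4        # CPU cores per task
#SBATCH --mem-per-cpu=2000       # memory per core in MB (placeholder value)
#SBATCH --time=08:00:00          # requested walltime
#SBATCH --job-name=my_first_job  # job name shown in the queue

# load the required software environment (placeholder module name)
module load my_software

# launch the application on the allocated resources
srun ./my_application
```

Submit it with `sbatch my_job_file.sh` and monitor its state with `squeue -u $USER`.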
### Parallel Jobs
**MPI jobs:** For MPI jobs, typically one core is allocated per task. Several nodes can be
allocated if necessary. The batch system [Slurm](slurm.md) will automatically find suitable
hardware.

**OpenMP jobs:** SMP-parallel applications can only run **within a node**, so it is necessary to
include the [batch system](slurm.md) options `-N 1` and `-n 1`. Using `--cpus-per-task N`, Slurm
will start one task and you will have `N` CPUs available. The maximum number of processors for an
SMP-parallel program is 896 on `julia` (be aware that the application has to be developed with
that large number of threads in mind).
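Two hedged sketches (binary names and all values are placeholders): an MPI job spread over two
nodes, and an OpenMP job confined to a single node.

```console
marie@login$ srun --nodes=2 --ntasks=8 ./my_mpi_binary
marie@login$ OMP_NUM_THREADS=16 srun -N 1 -n 1 --cpus-per-task=16 ./my_openmp_binary
```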
Clusters with GPUs are best suited for **repetitive** and **highly-parallel** computing tasks. If
you have a task with potential [data parallelism](../software/gpu_programming.md), most likely you
will benefit from the GPUs. Beyond video rendering, GPUs excel in tasks such as machine learning,
financial simulations and risk modeling. Use the cluster `power` only if you need GPUs! Otherwise,
using the x86-based clusters most likely would be more beneficial.
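A hedged sketch of requesting a single GPU with the generic Slurm `--gres` option (all values are
placeholders; check the cluster-specific documentation for the exact GPU configuration):

```console
marie@login$ srun --ntasks=1 --cpus-per-task=4 --gres=gpu:1 --time=01:00:00 ./my_gpu_binary
```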
### Multithreading
Some clusters/nodes have Simultaneous Multithreading (SMT) enabled, e.g.
[`alpha`](../jobs_and_resources/alpha_centauri.md). You request these additional threads using the
Slurm option `--hint=multithread` or by setting the environment variable `SLURM_HINT=multithread`.
Besides using the threads to speed up the computations, the memory of the other threads is
allocated implicitly, too, and you will always get `Memory per Core` * `number of threads` as
memory pledge.
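Both variants as a brief sketch (values and binary name are placeholders):

```console
marie@login$ srun --hint=multithread --ntasks=1 --cpus-per-task=8 ./my_binary
marie@login$ SLURM_HINT=multithread srun --ntasks=1 --cpus-per-task=8 ./my_binary
```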
### What do I need, a CPU or GPU?
...

by a significant factor then this might be the obvious choice.

...

a single GPU's core can handle is small), GPUs are not as versatile as CPUs.
### How much time do I need?

#### Runtime Limits

!!! warning "Runtime limits on login nodes"

    There is a time limit of 600 seconds set for processes on login nodes. Each process running
    longer than this time limit is automatically killed. The login nodes are shared resources
    between all users of ZIH systems and thus, need to be available and cannot be used for
    productive runs. Processes running into this limit are terminated with the message:

    ```
    CPU time limit exceeded
    ```

    Please submit extensive application runs to the compute nodes using the
    [batch system](slurm.md).
!!! note "Runtime limits are enforced."

    A job is canceled as soon as it exceeds its requested limit. Currently, the maximum run time
    limit is 7 days.
Shorter jobs come with multiple advantages:
- lower risk of loss of computing time,
- shorter waiting time for scheduling,
- higher job fluctuation; thus, jobs with high priorities may start faster.
To bring down the percentage of long-running jobs we restrict the number of cores with jobs longer
than 2 days to approximately 50% and with jobs longer than 24 hours to 75% of the total number of
cores. (These numbers are subject to change.) As a best practice, we advise a run time of about
8 hours.
!!! hint "Please always try to make a good estimation of your needed time limit."

    For this, you can use a command line like this to compare the requested time limit with the
    elapsed time for your completed jobs that started after a given date:

    ```console
    marie@login$ sacct -X -S 2021-01-01 -E now --format=start,JobID,jobname,elapsed,timelimit -s COMPLETED
    ```
Instead of running one long job, you should split it up into a chain job. Even applications that are
not capable of checkpoint/restart can be adapted. Please refer to the section
[Checkpoint/Restart](../jobs_and_resources/checkpoint_restart.md) for further documentation.
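If your workflow can be split into consecutive steps, a simple chain can be built with Slurm job
dependencies. A hedged sketch (the script names and the job ID are placeholders):

```console
marie@login$ sbatch job_step_1.sh
Submitted batch job 123456
marie@login$ sbatch --dependency=afterok:123456 job_step_2.sh
```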
### How many cores do I need?
ZIH systems are focused on data-intensive computing. They are meant to be used for highly
parallelized code. Please take that into account when migrating sequential code from a local
machine to our HPC systems. To estimate your execution time when executing your previously
sequential program in parallel, you can use [Amdahl's law](https://en.wikipedia.org/wiki/Amdahl%27s_law).
Think in advance about the parallelization strategy for your project and how to effectively use HPC resources.
However, this highly depends on the software used; investigate whether your application supports
parallel execution at all.
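As an illustration of Amdahl's law with purely hypothetical numbers: if a fraction `p = 0.95` of
the runtime can be parallelized and the job runs on `n = 64` cores, the maximum expected speedup is

```
S(n) = 1 / ((1 - p) + p / n) = 1 / (0.05 + 0.95 / 64) ≈ 15
```

i.e., far below the factor 64 one might naively expect, because the sequential fraction limits the
number of cores that can be used efficiently.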
### How much memory do I need?
#### Memory Limits
!!! note "Memory limits are enforced."

    Jobs which exceed their per-node memory limit are killed automatically by the batch system.
Memory requirements for your job can be specified via the `sbatch/srun` parameters:
`--mem-per-cpu=<MB>` or `--mem=<MB>` (which is "memory per node"). The **default limit** regardless
of the partition it runs on is quite low at **300 MB** per CPU. If you need more memory, you need
to request it.
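A hedged sketch of both variants (the values are placeholders, not recommendations): request
2000 MB per core, or alternatively 8000 MB for the whole node allocation:

```console
marie@login$ srun --ntasks=4 --mem-per-cpu=2000 ./my_binary
marie@login$ srun --ntasks=4 --mem=8000 ./my_binary
```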
ZIH systems comprise different sets of nodes with different amounts of installed memory, which
affects where your job may be run. To achieve the shortest possible waiting time for your jobs, you should
be aware of the limits shown in the
[Partitions and limits table](../jobs_and_resources/partitions_and_limits.md#slurm-partitions).
Follow the page [Slurm](slurm.md) for comprehensive documentation on using the batch system at
ZIH systems. There is also a page with an extensive set of [Slurm examples](slurm_examples.md).
### Which software is required?
#### Available software
Pre-installed software on our HPC systems is managed via [modules](../software/modules.md).
You can see the
[list of software that's already installed and accessible via modules](https://gauss-allianz.de/de/application?organizations%5B0%5D=1200).
However, there are many
different variants of these modules available. Each cluster has its own set of installed modules,
depending on its purpose.
Specific modules can be found with:
```console
marie@compute$ module spider <software_name>
```
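Once you have identified a suitable module (the module name and version below are placeholders),
load it and verify your environment:

```console
marie@compute$ module load <software_name>/<version>
marie@compute$ module list
```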
### Available Hardware

ZIH provides a broad variety of compute resources ranging from normal server CPUs of different
manufacturers, via large shared memory nodes and GPU-assisted nodes, up to highly specialized
resources for [Machine Learning](../software/machine_learning.md) and AI.
The page [ZIH Systems](hardware_overview.md) holds a comprehensive overview.
The desired hardware can be specified by the partition `-p, --partition` flag in Slurm.
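A minimal sketch (the partition name and job file are placeholders):

```console
marie@login$ sbatch --partition=<partition_name> my_job_file.sh
```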
## Barnard

The cluster **Barnard** is a general purpose cluster by Bull. It is based on Intel Sapphire Rapids
CPUs.

- 630 diskless nodes, each with