Commit 0599dbfd authored by Lars Jitschin's avatar Lars Jitschin
Merge branch 'issue-441' into 'preview'

Resolve issue #441

Closes #441

See merge request !758
Parents: 453a1133, d0464a1d
@@ -63,7 +63,8 @@ to request it.
ZIH systems comprise different sets of nodes with different amounts of installed memory, which affect
where your job may be run. To achieve the shortest possible waiting time for your jobs, you should
be aware of the limits shown in the
[Partitions and limits table](../jobs_and_resources/partitions_and_limits.md#slurm-partitions).
## Slurm Partitions
@@ -71,35 +72,37 @@ The available compute nodes are grouped into logical (possibly overlapping) sets
**partitions**. You can submit your job to a certain partition using the Slurm option
`--partition=<partition-name>`.
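A batch script pinned to one partition might look like the following sketch; the partition, resource values, and program name are placeholders, not recommendations:

```shell
#!/bin/bash
# Sketch of a Slurm batch script targeting a specific partition.
# All values below are placeholders; adjust them to your job.
#SBATCH --partition=haswell64     # partition name from the table below
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24
#SBATCH --time=01:00:00

srun ./my_program                 # my_program is a placeholder executable
```

Submit it with `sbatch <scriptname>`; omitting `--partition` falls back to the default partition.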
Some partitions/nodes have Simultaneous Multithreading (SMT) enabled. You can request these
additional threads using the Slurm option `--hint=multithread` or by setting the environment
variable `SLURM_HINT=multithread`. Besides using the threads to speed up the computations,
the memory of the other threads is allocated implicitly, too, so you will always get
`Memory per Core`*`number of threads` as memory pledge.
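As a worked example of this memory pledge, assume a node with an SMT factor of 4 and 1,443 MB per core (the values listed for the `ml` partition in the table below):

```shell
# Memory pledged per physical core when all SMT threads are allocated.
# Values taken from the ml partition row of the table below.
memory_per_core_mb=1443   # Memory per Core [in MB]
threads_per_core=4        # SMT factor (Threads per Core)
memory_pledge_mb=$((memory_per_core_mb * threads_per_core))
echo "${memory_pledge_mb}"   # 5772 MB per physical core
```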
Some partitions have an *interactive* counterpart for interactive jobs. The corresponding partitions
are suffixed with `-interactive` (e.g. `ml-interactive`) and have the same configuration.
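For interactive work, resources on such a partition can be requested directly via `srun`; this is a sketch with placeholder resource values:

```shell
# Request an interactive shell on an *-interactive partition (sketch;
# resource values are placeholders). --pty attaches a pseudo-terminal.
srun --partition=ml-interactive --ntasks=1 --cpus-per-task=4 \
     --time=00:30:00 --pty bash
```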
There is also a meta partition `haswell`, which contains the partitions `haswell64`,
`haswell256` and `smp2`. `haswell` is also the default partition. If you specify no partition or
the partition `haswell`, a Slurm plugin will choose the partition which fits your memory requirements.
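Under this scheme you normally state only the memory you need and let the plugin pick among the sub-partitions; a sketch with placeholder values (the chosen sub-partition is an assumption based on the per-core limits in the table below):

```shell
#!/bin/bash
# Sketch: no explicit sub-partition; the requested memory decides where the
# job lands within the haswell meta partition (values are placeholders).
#SBATCH --partition=haswell
#SBATCH --mem-per-cpu=5000    # MB; exceeds haswell64's 2,541 MB per core,
                              # so a larger-memory sub-partition would be used
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

srun ./my_program             # my_program is a placeholder executable
```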
There are some other partitions, which are not specified in the table above, but those partitions
should not be used directly.
<!-- partitions_and_limits_table -->
| Partition | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per Core [in MB] | GPUs per Node |
|:----------|:------|--------:|---------------:|-----------------:|------------------------:|------------------------:|--------------:|
| gpu2 | taurusi[2045-2103] | 59 | 24 | 1 | 62,000 | 2,583 | 4 |
| gpu2-interactive | taurusi[2045-2103] | 59 | 24 | 1 | 62,000 | 2,583 | 4 |
| haswell | taurusi[6001-6604],taurussmp[3-7] | 609 | | | | | |
| haswell64 | taurusi[6001-6540,6559-6604] | 586 | 24 | 1 | 61,000 | 2,541 | |
| haswell256 | taurusi[6541-6558] | 18 | 24 | 1 | 254,000 | 10,583 | |
| interactive | taurusi[6605-6612] | 8 | 24 | 1 | 61,000 | 2,541 | |
| smp2 | taurussmp[3-7] | 5 | 56 | 1 | 2,044,000 | 36,500 | |
| hpdlf | taurusa[3-16] | 14 | 12 | 1 | 95,000 | 7,916 | 3 |
| ml | taurusml[3-32] | 30 | 44 | 4 | 254,000 | 1,443 | 6 |
| ml-interactive | taurusml[1-2] | 2 | 44 | 4 | 254,000 | 1,443 | 6 |
| romeo | taurusi[7003-7192] | 190 | 128 | 2 | 505,000 | 1,972 | |
| romeo-interactive | taurusi[7001-7002] | 2 | 128 | 2 | 505,000 | 1,972 | |
| julia | taurussmp8 | 1 | 896 | 1 | 48,390,000 | 54,006 | |
| alpha | taurusi[8003-8034] | 32 | 48 | 2 | 990,000 | 10,312 | 8 |
| alpha-interactive | taurusi[8001-8002] | 2 | 48 | 2 | 990,000 | 10,312 | 8 |
{: summary="Partitions and limits table" align="bottom"}