Commit 135c9953 authored by Danny Marc Rotscher

Merge branch 'issue-432' into 'preview'

WIP: Move partitions table into separate subsection

See merge request !736
parents e64ef901 a85c341c
@@ -11,13 +11,13 @@ users and the ZIH.

## Login Nodes

- Login-Nodes (`tauruslogin[3-6].hrsk.tu-dresden.de`)
    - each with 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
      @ 2.50GHz, Multithreading disabled, 64 GB RAM, 128 GB SSD local disk
    - IPs: 141.30.73.\[102-105\]
- Transfer-Nodes (`taurusexport[3-4].hrsk.tu-dresden.de`, DNS alias
  `taurusexport.hrsk.tu-dresden.de`)
    - 2 servers without interactive login, only available via file transfer protocols (`rsync`, `ftp`)
    - IPs: 141.30.73.82/83
- Direct access to these nodes is granted via IP whitelisting (contact
  hpcsupport@zih.tu-dresden.de) - otherwise use TU Dresden VPN.
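
File transfers go through the export nodes via the DNS alias above. A minimal `rsync` sketch; the
local directory, the target path, and `<zih-login>` are hypothetical placeholders:

```bash
# Copy a local data set to the ZIH filesystems through the export nodes.
# Source directory, project path, and login name are placeholders.
rsync -avz --progress ./my_dataset/ \
    <zih-login>@taurusexport.hrsk.tu-dresden.de:/projects/p_my_project/my_dataset/
```
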
@@ -27,11 +27,11 @@ users and the ZIH.

## AMD Rome CPUs + NVIDIA A100

- 34 nodes, each with
    - 8 x NVIDIA A100-SXM4
    - 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, Multithreading disabled
    - 1 TB RAM
    - 3.5 TB local storage on an NVMe device at `/tmp`
- Hostnames: `taurusi[8001-8034]`
- Slurm partition `alpha`
- Dedicated mostly for ScaDS-AI
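
A hedged sketch of a job script for these nodes; the resource numbers, the time limit, and
`train.py` are illustrative assumptions, not prescribed values:

```bash
#!/bin/bash
#SBATCH --partition=alpha        # A100 nodes
#SBATCH --nodes=1
#SBATCH --gres=gpu:2             # 2 of the 8 A100 GPUs of one node (example value)
#SBATCH --cpus-per-task=12
#SBATCH --mem=120G
#SBATCH --time=04:00:00

# train.py is a placeholder for your own application.
srun python train.py
```
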
@@ -39,20 +39,21 @@ users and the ZIH.

## Island 7 - AMD Rome CPUs

- 192 nodes, each with
    - 2x AMD EPYC CPU 7702 (64 cores) @ 2.0GHz, Multithreading enabled
    - 512 GB RAM
    - 200 GB `/tmp` on local SSD
- Hostnames: `taurusi[7001-7192]`
- Slurm partition `romeo`
- More information under [Rome Nodes](rome_nodes.md)
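
For illustration, a direct `srun` launch of an MPI application across two full Rome nodes; the
binary name and the time limit are assumptions:

```bash
# Two full Rome nodes with 128 physical cores each; ./my_mpi_app is a placeholder.
srun --partition=romeo --nodes=2 --ntasks-per-node=128 --time=01:00:00 ./my_mpi_app
```
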
## Large SMP System HPE Superdome Flex

- 1 node, with
    - 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores)
    - 47 TB RAM
- Currently configured as one single node
- Hostname: `taurussmp8`
- Slurm partition `julia`
- More information under [HPE SD Flex](sd_flex.md)
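
A sketch of a single large-memory job on this system; the 2 TB request, the core count, and the
program name are illustrative assumptions:

```bash
# One process with a very large memory request on the Superdome Flex (partition julia).
# 2 TB and ./analyze_large_graph are example values only.
srun --partition=julia --ntasks=1 --cpus-per-task=56 --mem=2T --time=08:00:00 ./analyze_large_graph
```
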
@@ -60,27 +61,26 @@ users and the ZIH.

For machine learning, we have 32 IBM AC922 nodes installed with this configuration:

- 32 nodes, each with
    - 2 x IBM Power9 CPU (2.80 GHz, 3.10 GHz boost, 22 cores)
    - 256 GB RAM DDR4 2666MHz
    - 6x NVIDIA VOLTA V100 with 32GB HBM2
    - NVLINK bandwidth 150 GB/s between GPUs and host
- Hostnames: `taurusml[1-32]`
- Slurm partition `ml`
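
A hedged sketch of requesting an interactive shell with one GPU on these nodes via `srun`; all
resource values are illustrative, and an interactive counterpart partition may be preferred (see
the partition table further below):

```bash
# Interactive shell with one V100 GPU on a Power9 node; values are examples only.
srun --partition=ml --gres=gpu:1 --ntasks=1 --cpus-per-task=4 --time=01:00:00 --pty bash
```
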
## Island 6 - Intel Haswell CPUs

- 612 nodes, each with
    - 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
      @ 2.50GHz, Multithreading disabled, 128 GB SSD local disk
    - Varying amounts of main memory (selected automatically by the batch
      system for you according to your job requirements)
        - 594 nodes with 2.67 GB RAM per core (64 GB total):
          `taurusi[6001-6540,6559-6612]`
        - 18 nodes with 10.67 GB RAM per core (256 GB total):
          `taurusi[6541-6558]`
- Hostnames: `taurusi[6001-6612]`
- Slurm Partition `haswell`

??? hint "Node topology"
@@ -88,29 +88,26 @@ For machine learning, we have 32 IBM AC922 nodes installed with this configuration:

    ![Node topology](misc/i4000.png)
    {: align=center}
## Island 2 Phase 2 - Intel Haswell CPUs + NVIDIA K80 GPUs

- 64 nodes, each with
    - 2x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores)
      @ 2.50GHz, Multithreading disabled
    - 64 GB RAM (2.67 GB per core)
    - 128 GB SSD local disk
    - 4x NVIDIA Tesla K80 (12 GB GDDR RAM) GPUs
- Hostnames: `taurusi[2045-2108]`
- Slurm Partition `gpu2`
- Node topology, same as [Island 6](#island-6-intel-haswell-cpus)
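
To check how a partition is actually configured in Slurm (node list, limits, defaults), you can
query the scheduler directly; shown here for `gpu2` as an example:

```bash
# Show the Slurm configuration of the gpu2 partition (nodes, limits, defaults).
scontrol show partition gpu2
```
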
## SMP Nodes - up to 2 TB RAM

- 5 nodes, each with
    - 4x Intel(R) Xeon(R) CPU E7-4850 v3 (14 cores) @ 2.20GHz, Multithreading disabled
    - 2 TB RAM
- Hostnames: `taurussmp[3-7]`
- Slurm partition `smp2`
??? hint "Node topology"
@@ -10,19 +10,21 @@ smaller jobs. Thus, restrictions w.r.t. [memory](#memory-limits) and
!!! warning "Runtime limits on login nodes"

    There is a time limit of 600 seconds set for processes on the login nodes. Each process
    running longer than this limit is automatically killed with the message

    ```
    CPU time limit exceeded
    ```

    The login nodes are shared resources for all users of the ZIH systems and thus need to stay
    available; they cannot be used for production runs. Please submit extensive application runs
    to the compute nodes using the [batch system](slurm.md).
!!! note "Runtime limits are enforced."

    A job is canceled as soon as it exceeds its requested limit. Currently, the maximum run time
    limit is 7 days.
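
The requested limit is set per job, e.g. with the `--time` option; the two-day value below is only
an example and must stay within the 7-day maximum, and `job_script.sh` is a placeholder:

```bash
# Request a wall time of 2 days (format days-hours:minutes:seconds).
sbatch --time=2-00:00:00 job_script.sh
```
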
Shorter jobs come with multiple advantages:
@@ -47,9 +49,6 @@ Instead of running one long job, you should split it up into a chain job. Even a

not capable of checkpoint/restart can be adapted. Please refer to the section
[Checkpoint/Restart](../jobs_and_resources/checkpoint_restart.md) for further documentation.
## Memory Limits
!!! note "Memory limits are enforced."
@@ -64,36 +63,44 @@ to request it.
ZIH systems comprise different sets of nodes with different amounts of installed memory, which
affects where your job may run. To achieve the shortest possible waiting time for your jobs, you
should be aware of the limits shown in the
[Partitions and limits table](../jobs_and_resources/partitions_and_limits.md#slurm-partitions).
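
Memory is requested per job, either per core or per node; the values below are examples only and
`job_script.sh` is a placeholder:

```bash
# Per-core memory request: 4 tasks with 2000 MB each.
sbatch --ntasks=4 --mem-per-cpu=2000M job_script.sh

# Per-node memory request: one task with 16 GB on its node.
sbatch --ntasks=1 --mem=16G job_script.sh
```
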
## Slurm Partitions

The available compute nodes are grouped into logical (possibly overlapping) sets, the so-called
**partitions**. You can submit your job to a certain partition using the Slurm option
`--partition=<partition-name>`.

Some nodes have Multithreading (SMT) enabled, so for every physical core allocated
(e.g., with `SLURM_HINT=nomultithread`), you will always get `MB per Core` * `number of threads`,
because the memory of the other threads is allocated implicitly, too.

Some partitions have an *interactive* counterpart for interactive jobs. The corresponding
partition is suffixed with `-interactive` (e.g. `ml-interactive`) and has the same configuration.

There is also a meta partition `haswell`, which contains the partitions `haswell64`, `haswell256`,
and `smp2`; it is also the default partition. If you specify no partition or the partition
`haswell`, a Slurm plugin will choose the partition that fits your memory requirements. There are
some other partitions that are not listed in the table below; those partitions should not be used
directly.

<!-- partitions_and_limits_table -->

| Partition | Nodes | # Nodes | Cores per Node (SMT) | MB per Core (SMT) | MB per Node | GPUs per Node |
|:--------|:------|--------:|---------------:|------------:|------------:|--------------:|
| gpu2 | taurusi[2045-2103] | 59 | 24 | 2583 | 62000 | gpu:4 |
| gpu2-interactive | taurusi[2045-2103] | 59 | 24 | 2583 | 62000 | gpu:4 |
| haswell | taurusi[6001-6604],taurussmp[3-7] | 609 | 56 | 36500 | 2044000 | none |
| haswell64 | taurusi[6001-6540,6559-6604] | 586 | 24 | 2541 | 61000 | none |
| haswell256 | taurusi[6541-6558] | 18 | 24 | 10583 | 254000 | none |
| interactive | taurusi[6605-6612] | 8 | 24 | 2541 | 61000 | none |
| smp2 | taurussmp[3-7] | 5 | 56 | 36500 | 2044000 | none |
| ifm | taurusa2 | 1 | 16 (HT: 32) | 12000 | 384000 | gpu:1 |
| hpdlf | taurusa[3-16] | 14 | 12 | 7916 | 95000 | gpu:3 |
| ml | taurusml[3-32] | 30 | 44 (HT: 176) | 1443 | 254000 | gpu:6 |
| ml-interactive | taurusml[1-2] | 2 | 44 (HT: 176) | 1443 | 254000 | gpu:6 |
| romeo | taurusi[7003-7192] | 190 | 128 (HT: 256) | 1972 | 505000 | none |
| romeo-interactive | taurusi[7001-7002] | 2 | 128 (HT: 256) | 1972 | 505000 | none |
| julia | taurussmp8 | 1 | 896 | 5400 | 4839000 | none |
| alpha | taurusi[8003-8034] | 32 | 48 (HT: 96) | 10312 | 990000 | gpu:8 |
| alpha-interactive | taurusi[8001-8002] | 2 | 48 (HT: 96) | 10312 | 990000 | gpu:8 |
{: summary="Partitions and limits table" align="bottom"}
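
How partition selection plays together with the table above, as a hedged sketch; `job_script.sh`
and all resource values are placeholders:

```bash
# Explicitly select a partition from the table.
sbatch --partition=romeo job_script.sh

# Use the interactive counterpart of a partition (suffix -interactive).
srun --partition=alpha-interactive --gres=gpu:1 --time=01:00:00 --pty bash

# No partition given: the default meta partition haswell is used and a Slurm plugin
# picks haswell64, haswell256, or smp2 according to the memory request.
sbatch --ntasks=24 --mem-per-cpu=10000M job_script.sh
```
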