Commit fa63b915 authored by Martin Schroschk

Refine table with cluster overview

parent 44eb1ef9
2 merge requests: !938 Automated merge from preview to main, !936 Update to Five-Cluster-Operation
# HPC Resources

<!--TODO: Update this introduction-->

HPC resources in ZIH systems comprise the *High Performance Computing and Storage Complex* and its
extension *High Performance Computing – Data Analytics*. In total it offers scientists
about 60,000 CPU cores and a peak performance of more than 1.5 quadrillion floating point
@@ -18,18 +17,18 @@ users and the ZIH.
will have five homogeneous clusters with their own Slurm instances and with cluster-specific
login nodes running on the same CPU.
With the installation and start of operation of the [new HPC system Barnard](#barnard),
the HPC system landscape at ZIH changes significantly. The former HPC system Taurus is
partly switched off and partly split up into separate clusters. In the end, from the users'
perspective, there will be **five separate clusters**:

| Name                                | Description           | Year of Installation | DNS                                      |
| ----------------------------------- | --------------------- | -------------------- | ---------------------------------------- |
| [`Barnard`](#barnard)               | CPU cluster           | 2023                 | `n[1001-1630].barnard.hpc.tu-dresden.de` |
| [`Alpha Centauri`](#alpha-centauri) | GPU cluster           | 2021                 | `i[8001-8037].alpha.hpc.tu-dresden.de`   |
| [`Julia`](#julia)                   | Single SMP system     | 2021                 | `smp8.julia.hpc.tu-dresden.de`           |
| [`Romeo`](#romeo)                   | CPU cluster           | 2020                 | `i[8001-8190].romeo.hpc.tu-dresden.de`   |
| [`Power`](#power9)                  | IBM Power/GPU cluster | 2018                 | `ml[1-29].power9.hpc.tu-dresden.de`      |

All clusters will run with their own [Slurm batch system](slurm.md) and job submission is possible
only from their respective login nodes.
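
A minimal sketch of this workflow, using Barnard as an example: the login hostname
`login1.barnard.hpc.tu-dresden.de` and the job script `my_job.sh` are illustrative assumptions,
not taken from this page.

```console
# Log in to the target cluster's own login node (hostname assumed for illustration)
$ ssh login1.barnard.hpc.tu-dresden.de

# Submit a batch job to this cluster's Slurm instance ...
$ sbatch my_job.sh

# ... and check its status; each cluster's Slurm instance only knows its own jobs
$ squeue -u $USER
```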
@@ -145,7 +144,7 @@ CPUs.

## Alpha Centauri

The cluster **Alpha Centauri** (short: **Alpha**) by NEC provides AMD Rome CPUs and NVIDIA A100 GPUs
and is designed for AI and ML tasks.

- 34 nodes, each with
    - 8 x NVIDIA A100-SXM4 Tensor Core-GPUs
...
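
Since each Alpha node hosts eight A100 GPUs, jobs typically request GPUs through Slurm's generic
resources. A minimal sketch, assuming GPU scheduling via `--gres`; partition and project options
depend on the local configuration and are omitted, and `train.py` is a placeholder:

```bash
#!/bin/bash
#SBATCH --job-name=a100-example
#SBATCH --nodes=1
#SBATCH --gres=gpu:2        # up to 8 A100 GPUs are available per Alpha node
#SBATCH --time=01:00:00

# Placeholder workload; replace with your actual AI/ML application
srun python train.py
```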