Commit 9beb9a61 authored by Martin Schroschk

Remove systems overview (is in hardware overview page) and fix links

parent bddf5025
2 merge requests: !938 Automated merge from preview to main, !936 Update to Five-Cluster-Operation
@@ -199,7 +199,7 @@ Pre-installed software on our HPC systems is managed via [modules](../software/m
You can see the
[list of software that's already installed and accessible via modules](https://gauss-allianz.de/de/application?organizations%5B0%5D=1200).
However, there are many different variants of these modules available. Each cluster has its own set
-of installed modules, [depending on their purpose](doc.zih.tu-dresden.de/docs/software/.md)
+of installed modules, [depending on their purpose](../software/software.md).
Specific modules can be found with:
@@ -207,80 +207,6 @@ Specific modules can be found with:

```console
marie@compute$ module spider <software_name>
```
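
As an illustration, searching for and loading a Python module might look like the following sketch. The module name and the fact that a default version is available are assumptions; the actual set of modules differs per cluster:

```console
marie@compute$ module spider Python    # list all Python modules and versions on this cluster
marie@compute$ module load Python      # load one of them; a specific version can be appended
```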
### Available Hardware
ZIH provides a broad variety of compute resources, ranging from normal server CPUs of different
manufacturers, through large shared memory nodes and GPU-assisted nodes, up to highly specialized
resources for [Machine Learning](../software/machine_learning.md) and AI.
## Barnard
The cluster **Barnard** is a general-purpose cluster by Bull. It is based on Intel Sapphire Rapids
CPUs.
- 630 diskless nodes, each with
    - 2 x Intel Xeon Platinum 8470 (52 cores) @ 2.00 GHz, Multithreading enabled
    - 512 GB RAM
- Hostnames: `n[1001-1630].barnard.hpc.tu-dresden.de`
- Login nodes: `login[1-4].barnard.hpc.tu-dresden.de`
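
For example, you reach Barnard from your local machine via SSH to one of the login nodes listed above (the user name `marie` is the usual placeholder in this documentation):

```console
marie@local$ ssh marie@login1.barnard.hpc.tu-dresden.de
```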
## Alpha Centauri
The cluster **Alpha Centauri** (short: **Alpha**) by NEC provides AMD Rome CPUs and NVIDIA A100 GPUs
and is designed for AI and ML tasks.
- 34 nodes, each with
    - 8 x NVIDIA A100-SXM4 Tensor Core-GPUs
    - 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, Multithreading available
    - 1 TB RAM
    - 3.5 TB local storage on NVMe device at `/tmp`
- Hostnames: `i[8001-8037].alpha.hpc.tu-dresden.de`
- Login nodes: `login[1-2].alpha.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [GPU Cluster Alpha Centauri](alpha_centauri.md)
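
A minimal GPU job file for Alpha could look like the following sketch. It assumes the job is submitted from an Alpha login node; the module name, resource numbers, script name, and the project `p_number_crunch` are placeholders and need to be adapted:

```bash
#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --nodes=1
#SBATCH --gres=gpu:1              # one of the eight A100 GPUs per node
#SBATCH --cpus-per-task=6
#SBATCH --mem=64G
#SBATCH --time=02:00:00
#SBATCH --account=p_number_crunch # placeholder project name

module load PyTorch               # illustrative; check `module spider PyTorch` first

srun python train.py              # train.py stands in for your own script
```

Submit it with `sbatch <job_file>` from a login node of the cluster.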
## Romeo
The cluster **Romeo** is a general-purpose cluster by NEC based on AMD Rome CPUs.
- 192 nodes, each with
    - 2 x AMD EPYC CPU 7702 (64 cores) @ 2.0 GHz, Multithreading available
    - 512 GB RAM
    - 200 GB local storage on SSD at `/tmp`
- Hostnames: `i[7001-7190].romeo.hpc.tu-dresden.de` (after
  [recabling phase](architecture_2023.md#migration-phase))
- Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [CPU Cluster Romeo](romeo.md)
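
For quick tests on Romeo, an interactive allocation is often sufficient. A sketch requesting all 128 physical cores of one node for an hour (prompt and flag values are illustrative):

```console
marie@login.romeo$ srun --nodes=1 --ntasks=1 --cpus-per-task=128 --time=01:00:00 --pty bash -l
```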
## Julia
The cluster **Julia** is a large SMP (shared memory parallel) system by HPE based on the Superdome
Flex architecture.
- 1 node, with
    - 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20 GHz (28 cores)
    - 47 TB RAM
- Configured as one single node
- 48 TB RAM (usable: 47 TB - one TB is used for cache coherence protocols)
- 370 TB of fast NVMe storage available at `/nvme/<projectname>`
- Hostname: `smp8.julia.hpc.tu-dresden.de` (after
  [recabling phase](architecture_2023.md#migration-phase))
- Further information on the usage is documented on the site [SMP System Julia](julia.md)
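
Julia is typically the system of choice when a job needs far more memory than the roughly 0.5 to 1 TB available on the regular nodes of the other clusters. A hedged sketch of requesting 1 TB of the shared memory interactively (prompt and values are illustrative):

```console
marie@login.julia$ srun --ntasks=1 --cpus-per-task=28 --mem=1T --time=04:00:00 --pty bash -l
```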
## Power
The cluster **Power** by IBM is based on Power9 CPUs and provides NVIDIA V100 GPUs.
**Power** is specifically designed for machine learning tasks.
- 32 nodes, each with
    - 2 x IBM Power9 CPU (2.80 GHz, 3.10 GHz boost, 22 cores)
    - 256 GB RAM DDR4 2666 MHz
    - 6 x NVIDIA VOLTA V100 with 32 GB HBM2
    - NVLINK bandwidth 150 GB/s between GPUs and host
- Hostnames: `ml[1-29].power9.hpc.tu-dresden.de` (after
  [recabling phase](architecture_2023.md#migration-phase))
- Login nodes: `login[1-2].power9.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [GPU Cluster Power9](power9.md)
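
Keep in mind that Power9 nodes use the `ppc64le` architecture rather than `x86_64`, so software built on the other clusters generally has to be rebuilt here. Once you have a shell on a Power9 node, a quick check (the prompt is illustrative):

```console
marie@ml1$ uname -m
ppc64le
```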
## Processing of Data for Input and Output
Pre-processing and post-processing of the data is a crucial part for the majority of data-dependent
...