Commit 5d465423 authored by Martin Schroschk

Fix links and minor cleanup

parent 9beb9a61
2 merge requests: !938 Automated merge from preview to main, !936 Update to Five-Cluster-Operation
@@ -100,7 +100,7 @@ storages or long-term archives.
For about one month, the new cluster Barnard and the old cluster Taurus
will run side-by-side - both with their respective filesystems. We provide a comprehensive
[description of the migration to Barnard](migration_to_barnard.md).
[description of the migration to Barnard](barnard.md).
<!--
The following figure provides a graphical overview of the overall process (red: user action
@@ -191,8 +191,7 @@ The cluster **Romeo** is a general purpose cluster by NEC based on AMD Rome CPUs
- 2 x AMD EPYC CPU 7702 (64 cores) @ 2.0 GHz, Multithreading available
- 512 GB RAM
- 200 GB local storage on SSD at `/tmp`
- Hostnames: `i[7001-7190].romeo.hpc.tu-dresden.de` (after
[recabling phase](architecture_2023.md#migration-phase)])
- Hostnames: `i[7001-7190].romeo.hpc.tu-dresden.de`
- Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [CPU Cluster Romeo](romeo.md)
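For a quick usage sketch (not part of the listed specs): logging in via the Romeo login nodes named above. The username `marie` is a placeholder, and an SSH key registered with ZIH is assumed; the same pattern applies to the login nodes of the other clusters.

```bash
# Minimal sketch: log in to Romeo via one of its login nodes
# (replace "marie" with your ZIH username; SSH key setup is assumed).
ssh marie@login1.romeo.hpc.tu-dresden.de
```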
@@ -229,8 +228,7 @@ architecture.
- Configured as one single node
- 48 TB RAM (usable: 47 TB - one TB is used for cache coherence protocols)
- 370 TB of fast NVME storage available at `/nvme/<projectname>`
- Hostname: `smp8.julia.hpc.tu-dresden.de` (after
[recabling phase](architecture_2023.md#migration-phase)])
- Hostname: `smp8.julia.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [SMP System Julia](julia.md)
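As a small illustration of the fast NVMe storage mentioned above (a sketch only; `<projectname>` is the placeholder from the list and must be replaced by your HPC project name):

```bash
# Minimal sketch: inspect the project's NVMe storage on Julia
# (replace <projectname> with your HPC project name).
ls -lh /nvme/<projectname>
df -h /nvme/<projectname>
```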
??? note "Maintenance from November 27 to December 12"
@@ -266,12 +264,11 @@ The cluster **Power9** by IBM is based on Power9 CPUs and provides NVIDIA V100 G
- 256 GB RAM DDR4 2666 MHz
- 6 x NVIDIA VOLTA V100 with 32 GB HBM2
- NVLINK bandwidth 150 GB/s between GPUs and host
- Hostnames: `ml[1-29].power9.hpc.tu-dresden.de` (after
[recabling phase](architecture_2023.md#migration-phase)])
- Hostnames: `ml[1-29].power9.hpc.tu-dresden.de` (after recabling phase; expected January '24)
- Login nodes: `login[1-2].power9.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [GPU Cluster Power9](power9.md)
??? note "Maintenance from November 27 to December 12"
??? note "Maintenance"
The recabling will take place from November 27 to December 12. After the maintenance, the Power9
system will reappear as a stand-alone cluster that can be reached via
@@ -283,6 +280,6 @@ The cluster **Power9** by IBM is based on Power9 CPUs and provides NVIDIA V100 G
`/home` on Barnard](barnard.md#data-management-and-data-transfer).
The old work filesystems `/lustre/scratch` and `/lustre/ssd` will be turned off on January 1
2024 for good (no data access afterwards!). The new work filesystem available on the Power9
system will be `/horse`. Please
2024 for good (no data access afterwards!). The only work filesystem available on the Power9
system will be `/beegfs`. Please
[migrate your working data to `/horse`](barnard.md#data-migration-to-new-filesystems).
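To illustrate the migration step referenced above, a minimal sketch using the datamover tools described in the data migration guide (`dtinfo` and `dtcp` are assumed to be available; the workspace paths are placeholders):

```bash
# Minimal sketch: copy a workspace from the old Lustre scratch to /horse
# via the datamover (paths are placeholders; see the data migration guide).
dtinfo                                   # list filesystems reachable by the datamover
dtcp -r /lustre/scratch/ws/marie-numbercrunch /horse/ws/marie-numbercrunch
```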