From 9bdb41c4d78eb70927beadc21af99fddefbf7b58 Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Wed, 6 Nov 2024 07:24:11 +0100
Subject: [PATCH] Correct hardware specification for Romeo and Alpha

- Alpha has 37 compute nodes (not 34)
- Fix hostnames and node count for Romeo
---
 .../docs/jobs_and_resources/hardware_overview.md          | 8 ++++----
 .../docs/jobs_and_resources/slurm_limits.md               | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
index a86917c2f..86f5fd8f6 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -16,7 +16,7 @@ HPC resources at ZIH comprise a total of **six systems**:
 | [`Barnard`](#barnard)               | CPU cluster           | 2023                 | `n[1001-1630].barnard.hpc.tu-dresden.de` |
 | [`Alpha Centauri`](#alpha-centauri) | GPU cluster           | 2021                 | `i[8001-8037].alpha.hpc.tu-dresden.de` |
 | [`Julia`](#julia)                   | Single SMP system     | 2021                 | `julia.hpc.tu-dresden.de` |
-| [`Romeo`](#romeo)                   | CPU cluster           | 2020                 | `i[8001-8190].romeo.hpc.tu-dresden.de` |
+| [`Romeo`](#romeo)                   | CPU cluster           | 2020                 | `i[7001-7186].romeo.hpc.tu-dresden.de` |
 | [`Power9`](#power9)                 | IBM Power/GPU cluster | 2018                 | `ml[1-29].power9.hpc.tu-dresden.de` |
 
 All clusters will run with their own [Slurm batch system](slurm.md) and job submission is possible
@@ -69,7 +69,7 @@ CPUs.
 The cluster `Alpha Centauri` (short: `Alpha`) by NEC provides AMD Rome CPUs and NVIDIA A100 GPUs
 and is designed for AI and ML tasks.
 
-- 34 nodes, each with
+- 37 nodes, each with
     - 8 x NVIDIA A100-SXM4 Tensor Core-GPUs
     - 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, Multithreading available
     - 1 TB RAM (16 x 32 GB DDR4-2933 MT/s per socket)
@@ -97,12 +97,12 @@ and is designed for AI and ML tasks.
 
 The cluster `Romeo` is a general purpose cluster by NEC based on AMD Rome CPUs.
 
-- 192 nodes, each with
+- 186 nodes, each with
     - 2 x AMD EPYC CPU 7702 (64 cores) @ 2.0 GHz, Multithreading available
     - 512 GB RAM (8 x 32 GB DDR4-3200 MT/s per socket)
     - 200 GB local storage on SSD at `/tmp`
 - Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
-- Hostnames: `i[7001-7190].romeo.hpc.tu-dresden.de`
+- Hostnames: `i[7001-7186].romeo.hpc.tu-dresden.de`
 - Operating system: Rocky Linux 8.9
 - Further information on the usage is documented on the site [CPU Cluster Romeo](romeo.md)
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
index 03a797906..1105dac4c 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
@@ -77,10 +77,10 @@ following table depicts the resource limits for [all our HPC systems](hardware_o
 | HPC System | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per (SMT) Core [in MB] | GPUs per Node | Cores per GPU | Job Max Time |
 |:-----------|:------|--------:|---------------:|-----------------:|------------------------:|------------------------------:|--------------:|--------------:|-------------:|
 | [`Barnard`](hardware_overview.md#barnard) | `n[1001-1630].barnard` | 630 | 104 | 2 | 515,000    | 4,951  | - | - | unlimited |
+| [`Alpha Centauri`](alpha_centauri.md)     | `i[8001-8037].alpha`   | 37  | 48  | 2 | 990,000    | 10,312 | 8 | 6 | unlimited |
+| [`Julia`](julia.md)                       | `julia`                | 1   | 896 | 1 | 48,390,000 | 54,006 | - | - | unlimited |
+| [`Romeo`](romeo.md)                       | `i[7001-7186].romeo`   | 186 | 128 | 2 | 505,000    | 1,972  | - | - | unlimited |
 | [`Power9`](hardware_overview.md#power9)   | `ml[1-29].power9`      | 29  | 44  | 4 | 254,000    | 1,443  | 6 | - | unlimited |
-| [`Romeo`](romeo.md)                   | `i[8001-8190].romeo`   | 190 | 128 | 2 | 505,000    | 1,972  | - | - | unlimited |
-| [`Julia`](julia.md)                   | `julia`                | 1   | 896 | 1 | 48,390,000 | 54,006 | - | - | unlimited |
-| [`Alpha Centauri`](alpha_centauri.md) | `i[8001-8037].alpha`   | 37  | 48  | 2 | 990,000    | 10,312 | 8 | 6 | unlimited |
 {: summary="Slurm resource limits table" align="bottom"}
 
 All HPC systems have Simultaneous Multithreading (SMT) enabled. You request for this
-- 
GitLab
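
A quick sanity check of the node counts asserted above: the hostname ranges in the patch already encode them. The sketch below expands a simple Slurm-style hostlist expression and counts the hosts; `hostlist_count` is a hypothetical helper written for this note, not part of Slurm (on a real system, `scontrol show hostnames` performs the expansion).

```python
import re

def hostlist_count(expr: str) -> int:
    """Count hosts in a simple Slurm-style hostlist such as
    'i[7001-7186].romeo.hpc.tu-dresden.de'. Handles one bracketed
    numeric range; a plain hostname counts as a single host."""
    m = re.search(r"\[(\d+)-(\d+)\]", expr)
    if m:
        lo, hi = int(m.group(1)), int(m.group(2))
        return hi - lo + 1
    return 1

# Node counts implied by the hostname ranges used in this patch:
print(hostlist_count("i[7001-7186].romeo.hpc.tu-dresden.de"))  # 186 (Romeo)
print(hostlist_count("i[8001-8037].alpha.hpc.tu-dresden.de"))  # 37 (Alpha)
print(hostlist_count("julia.hpc.tu-dresden.de"))               # 1 (Julia)
```

These values match the corrected `# Nodes` column in the slurm_limits table, which is the consistency the patch restores.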