From 5e78b7c589cf193ec8e72640e2528de53c2357ac Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Mon, 6 Nov 2023 12:54:35 +0100
Subject: [PATCH] Add note on recabling phase

---
 .../docs/jobs_and_resources/hardware_overview_2023.md | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
index cc09c236c..68cc84815 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
@@ -1,4 +1,4 @@
-# Overview 2023
+# HPC Resources Overview 2023
 
 With the installation and start of operation of the [new HPC system Barnard](#barnard-intel-sapphire-rapids-cpus),
 significant changes to the HPC system landscape at ZIH follow. The former HPC system Taurus is
@@ -49,7 +49,8 @@ All clusters will have access to these shared parallel filesystems:
     - 2 x AMD EPYC CPU 7702 (64 cores) @ 2.0 GHz, Multithreading available
     - 512 GB RAM
     - 200 GB local storage on SSD at `/tmp`
-- Hostnames: `taurusi[7001-7192]` -> `i[7001-7190].romeo.hpc.tu-dresden.de`
+- Hostnames: `taurusi[7001-7192]` -> `i[7001-7190].romeo.hpc.tu-dresden.de` (after
+  [recabling phase](architecture_2023.md#migration-phase))
 - Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
 - Further information on the usage is documented on the site [AMD Rome Nodes](rome_nodes.md)
 
@@ -61,7 +62,8 @@ All clusters will have access to these shared parallel filesystems:
 - Configured as one single node
 - 48 TB RAM (usable: 47 TB - one TB is used for cache coherence protocols)
 - 370 TB of fast NVME storage available at `/nvme/<projectname>`
-- Hostname: `taurussmp8` -> `smp8.julia.hpc.tu-dresden.de`
+- Hostname: `taurussmp8` -> `smp8.julia.hpc.tu-dresden.de` (after
+  [recabling phase](architecture_2023.md#migration-phase))
 - Further information on the usage is documented on the site [HPE Superdome Flex](sd_flex.md)
 
 ## IBM Power9 Nodes for Machine Learning
@@ -73,5 +75,6 @@ For machine learning, we have IBM AC922 nodes installed with this configuration:
     - 256 GB RAM DDR4 2666 MHz
     - 6 x NVIDIA VOLTA V100 with 32 GB HBM2
     - NVLINK bandwidth 150 GB/s between GPUs and host
-- Hostnames: `taurusml[1-32]` -> `ml[1-29].power9.hpc.tu-dresden.de`
+- Hostnames: `taurusml[1-32]` -> `ml[1-29].power9.hpc.tu-dresden.de` (after
+  [recabling phase](architecture_2023.md#migration-phase))
 - Login nodes: `login[1-2].power9.hpc.tu-dresden.de`
-- 
GitLab