diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
index 3dc238fe601046bebafcd75e5eb7c7b27fbd357f..cc09c236cd8ae47fb1f24f3011251b011a2e8fdd 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
@@ -32,7 +32,7 @@ All clusters will have access to these shared parallel filesystems:
 - Hostnames: `n[1001-1630].barnard.hpc.tu-dresden.de`
 - Login nodes: `login[1-4].barnard.hpc.tu-dresden.de`
 
-## AMD Rome CPUs + NVIDIA A100
+## Alpha Centauri - AMD Rome CPUs + NVIDIA A100
 
 - 34 nodes, each with
     - 8 x NVIDIA A100-SXM4 Tensor Core GPUs
@@ -43,7 +43,7 @@ All clusters will have access to these shared parallel filesystems:
 - Login nodes: `login[1-2].alpha.hpc.tu-dresden.de`
 - Further information on usage is documented on the page [Alpha Centauri Nodes](alpha_centauri.md)
 
-## Island 7 - AMD Rome CPUs
+## Romeo - AMD Rome CPUs
 
 - 192 nodes, each with
     - 2 x AMD EPYC CPU 7702 (64 cores) @ 2.0 GHz, Multithreading available
@@ -53,7 +53,7 @@ All clusters will have access to these shared parallel filesystems:
 - Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
 - Further information on usage is documented on the page [AMD Rome Nodes](rome_nodes.md)
 
-## Large SMP System HPE Superdome Flex
+## Julia - Large SMP System HPE Superdome Flex
 
 - 1 node, with
     - 32 x Intel Xeon Platinum 8276M CPU @ 2.20 GHz (28 cores)
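
The bracketed hostname ranges above (e.g. `login[1-4].barnard.hpc.tu-dresden.de`, `n[1001-1630].barnard.hpc.tu-dresden.de`) use the common Slurm-style nodeset shorthand. As an illustrative sketch only (not part of this patch; the helper name `expand_hostnames` is hypothetical and handles a single numeric bracket per pattern), such a range expands like this in Python:

```python
import re

def expand_hostnames(pattern: str) -> list[str]:
    """Expand a hostname pattern with one numeric bracket range,
    e.g. 'login[1-4].barnard.hpc.tu-dresden.de', into individual names."""
    match = re.match(r"^(.*)\[(\d+)-(\d+)\](.*)$", pattern)
    if not match:
        return [pattern]          # no bracket range: already a single hostname
    prefix, start, end, suffix = match.groups()
    width = len(start)            # preserve zero padding, e.g. n1001 keeps four digits
    return [f"{prefix}{i:0{width}d}{suffix}" for i in range(int(start), int(end) + 1)]

# The four Barnard login nodes:
print(expand_hostnames("login[1-4].barnard.hpc.tu-dresden.de"))
# ['login1.barnard.hpc.tu-dresden.de', ..., 'login4.barnard.hpc.tu-dresden.de']
```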