diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
index 9391c0eb93e9c035108e11fab0d1d649b5a8b957..eae6e2e7b1b2d7cd24468cd26a13b0d2dafae557 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -62,7 +62,7 @@ CPUs.
 - All other nodes are diskless and have no or very limited local storage (i.e. `/tmp`)
 - Login nodes: `login[1-4].barnard.hpc.tu-dresden.de`
 - Hostnames: `n[1001-1630].barnard.hpc.tu-dresden.de`
-- Operating system: Red Hat Enterpise Linux 8.7
+- Operating system: Red Hat Enterprise Linux 8.9
 - Further information on the usage is documented on the site [CPU Cluster Barnard](barnard.md)
 
 ## Alpha Centauri
@@ -77,7 +77,7 @@ and is designed for AI and ML tasks.
 - 3.5 TB local storage on NVMe device at `/tmp`
 - Login nodes: `login[1-2].alpha.hpc.tu-dresden.de`
 - Hostnames: `i[8001-8037].alpha.hpc.tu-dresden.de`
-- Operating system: Rocky Linux 8.7
+- Operating system: Rocky Linux 8.9
 - Further information on the usage is documented on the site [GPU Cluster Alpha Centauri](alpha_centauri.md)
 
 ## Capella
@@ -104,7 +104,7 @@ The cluster `Romeo` is a general purpose cluster by NEC based on AMD Rome CPUs.
 - 200 GB local storage on SSD at `/tmp`
 - Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
 - Hostnames: `i[7001-7190].romeo.hpc.tu-dresden.de`
-- Operating system: Rocky Linux 8.7
+- Operating system: Rocky Linux 8.9
 - Further information on the usage is documented on the site [CPU Cluster Romeo](romeo.md)
 
 ## Julia
@@ -120,6 +120,7 @@ architecture.
 - 370 TB of fast NVME storage available at `/nvme/<projectname>`
 - Login node: `julia.hpc.tu-dresden.de`
 - Hostname: `julia.hpc.tu-dresden.de`
+- Operating system: Rocky Linux 8.7
 - Further information on the usage is documented on the site [SMP System Julia](julia.md)
 
 ## Power9
@@ -134,4 +135,5 @@ The cluster `Power9` by IBM is based on Power9 CPUs and provides NVIDIA V100 GPU
 - NVLINK bandwidth 150 GB/s between GPUs and host
 - Login nodes: `login[1-2].power9.hpc.tu-dresden.de`
 - Hostnames: `ml[1-29].power9.hpc.tu-dresden.de`
+- Operating system: Alma Linux 8.7
 - Further information on the usage is documented on the site [GPU Cluster Power9](power9.md)