From 3746bc3658035063317bcea26bbf13eeb655af8c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Sebastian=20D=C3=B6bel?= <sebastian.doebel@tu-dresden.de>
Date: Thu, 7 Nov 2024 10:22:44 +0100
Subject: [PATCH] Apply 3 suggestion(s) to 3 file(s)

Co-authored-by: Bert Wesarg <bert.wesarg@tu-dresden.de>
---
 doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md      | 2 +-
 doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md       | 2 +-
 doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
index 8290e03f0..0748ef8e7 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
@@ -38,7 +38,7 @@ Although all other [filesystems](../data_lifecycle/workspaces.md)
 ### Modules
 
 The easiest way using software is using the [module system](../software/modules.md).
-All software available from the module system has been deliberately build for the cluster `Capella`
+All software available from the module system has been specifically build for the cluster `Capella`
 i.e., with optimization for Zen4 (Genoa) microarchitecture and CUDA-support enabled.
 
 ### Python Virtual Environments
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
index cfa999dfe..c08d9936f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
@@ -28,7 +28,7 @@ cluster `Power9` has six Tesla V100 GPUs. You can find a detailed specification
 
 !!! note
 
-    The cluster `power` is based on the Power9 architecture, which means that the software built
+    The cluster `Power9` is based on the PPC64 architecture, which means that the software built
     for x86_64 will not work on this cluster.
 
 ### Power AI
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
index 338ce78b8..8f77e9908 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
@@ -72,7 +72,7 @@ The physical installed memory might differ from the amount available for Slurm j
 so-called diskless compute nodes, i.e., nodes without additional local drive. At these nodes, the
 operating system and other components reside in the main memory, lowering the available memory for
 jobs. The reserved amount of memory for the system operation might vary slightly over time. The
-following table depics the resource limits for [all our HPC systems](hardware_overview.md).
+following table depicts the resource limits for [all our HPC systems](hardware_overview.md).
 
 | HPC System | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per (SMT) Core [in MB] | GPUs per Node | Cores per GPU | Job Max Time [in days] |
 |:-----------|:------|--------:|---------------:|-----------------:|------------------------:|------------------------------:|--------------:|--------------:|-------------:|
-- 
GitLab