diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
index 8290e03f05fe9693d92a809c8b2ac113f721402b..0748ef8e7a2c6cb59f3a8c321214190d18961c50 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
@@ -38,7 +38,7 @@ Although all other [filesystems](../data_lifecycle/workspaces.md)
 ### Modules
 
 The easiest way using software is using the [module system](../software/modules.md).
-All software available from the module system has been deliberately build for the cluster `Capella`
+All software available from the module system has been specifically built for the cluster `Capella`
 i.e., with optimization for Zen4 (Genoa) microarchitecture and CUDA-support enabled.
 
 ### Python Virtual Environments
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
index cfa999dfeb52266f5799cfd215460b106c1e602d..c08d9936f0fd4b0abf72e7e42fa678504186b9cb 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
@@ -28,7 +28,7 @@ cluster `Power9` has six Tesla V100 GPUs. You can find a detailed specification
 
 !!! note
 
-    The cluster `power` is based on the Power9 architecture, which means that the software built
+    The cluster `Power9` is based on the PPC64 architecture, which means that the software built
     for x86_64 will not work on this cluster.
 
 ### Power AI
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
index 338ce78b8dc3f7347f4f9a68c9f6e61cc45fb048..8f77e9908a61a2c6ba595b80bf5debd93c80e84f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
@@ -72,7 +72,7 @@ The physical installed memory might differ from the amount available for Slurm j
 so-called diskless compute nodes, i.e., nodes without additional local drive. At these nodes, the
 operating system and other components reside in the main memory, lowering the available memory for
 jobs. The reserved amount of memory for the system operation might vary slightly over time. The
-following table depics the resource limits for [all our HPC systems](hardware_overview.md).
+following table depicts the resource limits for [all our HPC systems](hardware_overview.md).
 
 | HPC System | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per (SMT) Core [in MB] | GPUs per Node | Cores per GPU | Job Max Time [in days] |
 |:-----------|:------|--------:|---------------:|-----------------:|------------------------:|------------------------------:|--------------:|--------------:|-------------:|
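
The hunks above adjust documentation about the CUDA-enabled module stack on `Capella` and the Slurm memory limits table. As a hedged illustration only (not part of the patch), and assuming the standard Lmod `module` command and Slurm client tools on the login nodes, the documented facts could be inspected roughly as follows; the prompt and the `CUDA` search term are placeholders:

```console
marie@capella$ module avail CUDA       # list CUDA-related builds provided via the module system
marie@capella$ module load CUDA        # load the default version of such a build
marie@capella$ sinfo --Node --format="%N %c %m %G"   # per node: name, cores, memory Slurm offers (MB), GPUs
```

`module avail`/`module load` and the `sinfo` format specifiers (`%N` node name, `%c` cores, `%m` memory in MB, `%G` generic resources such as GPUs) are generic Lmod and Slurm functionality; the memory values reported by `sinfo` should correspond to the limits tabulated in `slurm_limits.md`, not to the physically installed amounts.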