diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
index 4377658eda7e723d8b78257fcd2c981f0f1de4e9..59a17f1faee331569532500930c091a8d2ffabbb 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
@@ -15,6 +15,7 @@ The HPC system, redesigned in December 2023, consists of five homogeneous cluste
 components.
 ## Selection of Suitable Hardware
+
 The five clusters
 [`barnard`](barnard.md),
 [`alpha`](alpha_centauri.md),
 [`romeo`](romeo.md),
@@ -89,17 +90,19 @@ simulations and risk modeling. Use the cluster `power` only if you need GPUs! Ot
 using the x86-based partitions most likely would be more beneficial.
 
 ### Multithreading
-Some cluster/nodes have Simultaneous Multithreading (SMT) enabled, e.g [`alpha`](slurm.md) You request for this
-additional threads using the Slurm option `--hint=multithread` or by setting the environment
-variable `SLURM_HINT=multithread`. Besides the usage of the threads to speed up the computations,
-the memory of the other threads is allocated implicitly, too, and you will always get
+
+Some clusters/nodes have Simultaneous Multithreading (SMT) enabled, e.g. [`alpha`](slurm.md). You
+can request these additional threads with the Slurm option `--hint=multithread` or by setting the
+environment variable `SLURM_HINT=multithread`. Besides using the threads to speed up computations,
+the memory of the other threads is allocated implicitly as well, and you will always get
 `Memory per Core`*`number of threads` as memory pledge.
 
 ### What do I need, a CPU or GPU?
 
 If an application is designed to run on GPUs this is normally announced unmistakable since the
 efforts of adapting an existing software to make use of a GPU can be overwhelming.
-And even if the software was listed in [NVIDIA's list of GPU-Accelerated Applications](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/gpu-applications-catalog.pdf)
+And even if the software is listed in
+[NVIDIA's list of GPU-Accelerated Applications](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/gpu-applications-catalog.pdf)
 only certain parts of the computations may run on the GPU.
 
 To answer the question: The easiest way is to compare a typical computation
@@ -116,6 +119,7 @@ by a significant factor then this might be the obvious choice.
 a single GPU's core can handle is small), GPUs are not as versatile as CPUs.
 
 ### How much time do I need?
+
 #### Runtime limits
 
 !!! warning "Runtime limits on login nodes"
@@ -172,6 +176,7 @@ However, this is highly depending on the used software, investigate if your appl
 parallel execution.
 
 ### How much memory do I need?
+
 #### Memory Limits
 
 !!! note "Memory limits are enforced."
@@ -193,6 +198,7 @@ Follow the page [Slurm](slurm.md) for comprehensive documentation using the batc
 ZIH systems. There is also a page with extensive set of [Slurm examples](slurm_examples.md).
 
 ### Which software is required?
+
 #### Available software
 
 Pre-installed software on our HPC systems is managed via [modules](../software/modules.md).
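
The reflowed Multithreading paragraph names the Slurm option `--hint=multithread` and the
environment variable `SLURM_HINT=multithread` without showing them in a job script. A minimal
sketch of both ways to request SMT threads follows; the CPU count, memory value, walltime, and
application name are placeholder assumptions, not values taken from the documentation above.

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4      # logical CPUs, i.e. hardware threads, when SMT is enabled
#SBATCH --hint=multithread     # let Slurm place the task on both hardware threads of a core
#SBATCH --mem-per-cpu=1972M    # placeholder value; the pledge is granted per logical CPU
#SBATCH --time=01:00:00        # placeholder walltime

# Equivalent to the --hint option above:
# export SLURM_HINT=multithread

# Because the memory of the sibling threads is allocated implicitly, the total pledge
# amounts to 'memory per core' * 'number of threads'.
srun ./my_application          # placeholder executable
```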
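
The last hunk states that pre-installed software is managed via modules. A short shell sketch of
the standard module commands is given below; the module name and version are hypothetical and not
guaranteed to exist on the clusters.

```bash
module avail            # list software available on the current cluster
module load GCC/12.2.0  # hypothetical module name and version
module list             # show the currently loaded modules
```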