Commit 69ea061a authored by GitLab Bot

Merge branch 'preview' into merge-preview-in-main

parents 3d144822 1e5fa05b
## Island 2 Phase 2 - Intel Haswell CPUs + NVIDIA K80 GPUs

- 64 nodes, each with
    - 2 x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores) @ 2.50 GHz, Multithreading disabled
    - 64 GB RAM (2.67 GB per core)
    - 128 GB local memory on SSD
    - 4 x NVIDIA Tesla K80 (12 GB GDDR RAM) GPUs
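To use these GPUs, a batch job has to request them as generic resources from Slurm. A minimal job script sketch might look like the following (the partition name is omitted on purpose and `./my_gpu_application` is a placeholder for your own binary):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:2        # request 2 of the 4 K80 GPUs of a node
#SBATCH --cpus-per-task=6
#SBATCH --mem=32G
#SBATCH --time=01:00:00

srun ./my_gpu_application
```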
The full hardware specifications of the GPU-compute nodes may be found in the
[HPC Resources](../jobs_and_resources/hardware_overview.md#hpc-resources) page.
Note that the clusters may have different [modules](modules.md#module-environments) available:
E.g., the available CUDA versions can be listed with:
```bash
marie@compute$ module spider CUDA
```
Note that some modules use a specific CUDA version which is visible in the module name,
e.g. `GDRCopy/2.1-CUDA-11.1.1` or `Horovod/0.28.1-CUDA-11.7.0-TensorFlow-2.11.0`.
This especially applies to the optimized CUDA libraries like `cuDNN`, `NCCL` and `magma`.
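For example, to use cuDNN together with a matching CUDA toolkit, both modules can be loaded explicitly (the version strings below are illustrative only; check `module spider` for what is actually installed):

```bash
marie@compute$ module load CUDA/11.7.0
marie@compute$ module load cuDNN/8.4.1.50-CUDA-11.7.0
```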
!!! important "CUDA-aware MPI"

    When running CUDA applications that use MPI for interprocess communication, you need to
    additionally load the modules that enable CUDA-aware MPI, which may provide improved performance.
    Those are `UCX-CUDA` and `UCC-CUDA`, which supplement the `UCX` and `UCC` modules, respectively.
    Some modules, like `NCCL`, load those automatically.
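A CUDA-aware MPI environment could thus be assembled as sketched below (the unversioned module names and the application name are placeholders; pick concrete versions from `module spider`):

```bash
marie@compute$ module load OpenMPI
marie@compute$ module load UCX-CUDA UCC-CUDA
marie@compute$ srun ./my_cuda_mpi_application
```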

## Using GPUs with Slurm