diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
index 2c00d715b97cb01e71940d2147bac7fb2a0ec599..bd317354d0775d3c692da0e69258b2acc96c9c9a 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
@@ -146,16 +146,15 @@ This is useful to document the cluster used or avoid accidentally using the wron
 
 The number of cores per node that are currently allowed to be allocated for GPU jobs is limited
 depending on how many GPUs are being requested.
-On Alpha Centauri you may only request up to 6 cores per requested GPU.
 This is because we do not wish that GPUs become unusable due to all cores on a node being used by
 a single job which does not, at the same time, request all GPUs.
 
 E.g., if you specify `--gres=gpu:2`, your total number of cores per node (meaning:
-`ntasks`*`cpus-per-task`) may not exceed 12 (on Alpha Centauri)
+`ntasks`*`cpus-per-task`) may not exceed 12 on [`Alpha Centauri`](alpha_centauri.md).
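+
+A minimal `sbatch` sketch that respects this limit (a hypothetical script; the
+6-cores-per-GPU ratio follows from the `--gres=gpu:2` example above):
+
+```bash
+#!/bin/bash
+#SBATCH --gres=gpu:2        # request 2 GPUs
+#SBATCH --ntasks=2
+#SBATCH --cpus-per-task=6   # 2 * 6 = 12 cores in total, within the limit for 2 GPUs
+
+srun ./your_gpu_application # placeholder for the actual application
+```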
 
 Note that this also has implications for the use of the `--exclusive` parameter.
-Since this sets the number of allocated cores to 48, you also **must** request all eight GPUs
-by specifying `--gres=gpu:8`, otherwise your job will not start.
+Since this sets the number of allocated cores to the maximum, you also **must** request all GPUs
+of the node; otherwise, your job will not start.
 In the case of `--exclusive`, it won't be denied on submission,
 because this is evaluated in a later scheduling step.
 Jobs that directly request too many cores per GPU will be denied with the error message:
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
index 717505309502ead9d80419c0b510dcfbd815899f..3c3c2726a77975d5638ae82ce50f1ff2f35ae4e5 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_limits.md
@@ -74,13 +74,13 @@ operating system and other components reside in the main memory, lowering the av
 jobs. The reserved amount of memory for the system operation might vary slightly over time. The
 following table depicts the resource limits for [all our HPC systems](hardware_overview.md).
 
-| HPC System | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per (SMT) Core [in MB] | GPUs per Node | Job Max Time |
-|:-----------|:------|--------:|---------------:|-----------------:|------------------------:|------------------------------:|--------------:|-------------:|
-| [`Barnard`](barnard.md)               | `n[1001-1630].barnard` | 630 | 104 | 2 | 515,000    | 4,951  | - | unlimited |
-| [`Power9`](power9.md)                 | `ml[1-29].power9`      | 29  | 44  | 4 | 254,000    | 1,443  | 6 | unlimited |
-| [`Romeo`](romeo.md)                   | `i[8001-8190].romeo`   | 190 | 128 | 2 | 505,000    | 1,972  | - | unlimited |
-| [`Julia`](julia.md)                   | `julia`                | 1   | 896 | 1 | 48,390,000 | 54,006 | - | unlimited |
-| [`Alpha Centauri`](alpha_centauri.md) | `i[8001-8037].alpha`   | 37  | 48  | 2 | 990,000    | 10,312 | 8 | unlimited |
+| HPC System | Nodes | # Nodes | Cores per Node | Threads per Core | Memory per Node [in MB] | Memory per (SMT) Core [in MB] | GPUs per Node | Cores per GPU | Job Max Time |
+|:-----------|:------|--------:|---------------:|-----------------:|------------------------:|------------------------------:|--------------:|--------------:|-------------:|
+| [`Barnard`](barnard.md)               | `n[1001-1630].barnard` | 630 | 104 | 2 | 515,000    | 4,951  | - | - | unlimited |
+| [`Power9`](power9.md)                 | `ml[1-29].power9`      | 29  | 44  | 4 | 254,000    | 1,443  | 6 | - | unlimited |
+| [`Romeo`](romeo.md)                   | `i[8001-8190].romeo`   | 190 | 128 | 2 | 505,000    | 1,972  | - | - | unlimited |
+| [`Julia`](julia.md)                   | `julia`                | 1   | 896 | 1 | 48,390,000 | 54,006 | - | - | unlimited |
+| [`Alpha Centauri`](alpha_centauri.md) | `i[8001-8037].alpha`   | 37  | 48  | 2 | 990,000    | 10,312 | 8 | 6 | unlimited |
 {: summary="Slurm resource limits table" align="bottom"}
 
 All HPC systems have Simultaneous Multithreading (SMT) enabled. You request for this