Commit 0431fdc3 authored by Martin Schroschk

Review section on exclusive jobs

- Tighten desc.
- Remove "partitions"
Parent: a507ec74
Merge requests: !1071 "Automated merge from preview to main", !1061 "Spring clean: Remove partition; use cluster"
## Exclusive Jobs for Benchmarking
Jobs on ZIH systems run, by default, in shared-mode, meaning that multiple jobs (from different
users) can run at the same time on the same compute node. Sometimes, this behavior is not desired
(e.g. for benchmarking purposes). You can request exclusive usage of resources using the Slurm
parameter `--exclusive`.
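
The flag can be set on the command line as well as inside a job file. A minimal sketch (the job
file name `my_jobfile.sh` and the resource values are placeholders, not recommendations):

```Bash
# submit a job whose nodes are allocated exclusively; no other jobs will share them
sbatch --exclusive --nodes=1 --ntasks=4 --time=00:05:00 my_jobfile.sh
```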
!!! note "Exclusive does not allocate all available resources"

    Setting `--exclusive` **only** makes sure that there will be **no other jobs running on your
    nodes**. It does not, however, mean that you automatically get access to all the resources
    which the node might provide without explicitly requesting them.

    For example, you still have to request a GPU via the generic resources parameter (`gres`) on
    the GPU cluster. Likewise, you have to request all cores of a node if you need them.

    CPU cores can either be used for a task (`--ntasks`) or for multi-threading within the same
    task (`--cpus-per-task`). Since those two options are semantically different (e.g., the former
    influences how many MPI processes will be spawned by `srun` whereas the latter does not),
    Slurm cannot determine automatically which of the two you might want to use. Since we use
    cgroups for separation of jobs, your job is not allowed to use more resources than requested.
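
To illustrate the difference between the two options, the following sketch requests the same 24
cores of a node in two semantically different ways (the core count, GPU count, and program names
are placeholders, not cluster-specific values):

```Bash
# 24 MPI processes with one core each: srun spawns 24 tasks
srun --exclusive --nodes=1 --ntasks=24 --cpus-per-task=1 ./my_benchmark

# one process that may use 24 cores for multi-threading (e.g. OpenMP):
# srun spawns a single task
srun --exclusive --nodes=1 --ntasks=1 --cpus-per-task=24 ./my_benchmark

# on a GPU cluster, a GPU still has to be requested explicitly,
# e.g. one GPU via the generic resources parameter
srun --exclusive --nodes=1 --ntasks=1 --gres=gpu:1 ./my_gpu_benchmark
```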
Here is a short example to ensure that a benchmark is not spoiled by other jobs, even if it doesn't
use up all resources of the nodes:
!!! example "Exclusive resources"
!!! example "Job file with exclusive resources"
    ```Bash
    #!/bin/bash

    #SBATCH --partition=haswell
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=2
    #SBATCH --cpus-per-task=8
    #SBATCH --exclusive            # ensure that nobody spoils my measurement on 2 x 2 x 8 cores
    #SBATCH --time=00:10:00
    #SBATCH --job-name=benchmark
    #SBATCH --mail-type=start,end
    #SBATCH --mail-user=<your.email>@tu-dresden.de

    srun ./my_benchmark
    ```
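
Assuming the job file above is saved as `benchmark.sh` (a name chosen here for illustration), it
can be submitted and the exclusive allocation inspected as follows; the exact `scontrol` field
names may differ between Slurm versions:

```Bash
sbatch benchmark.sh                # prints the job id on success
# inspect the allocation; exclusive jobs typically show OverSubscribe=NO
scontrol show job <jobid> | grep -i -E "oversubscribe|shared"
```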