Spring clean: Remove partition; use cluster
All threads resolved!
@@ -8,8 +8,8 @@ depend on the type of parallelization and architecture.
An SMP-parallel job can only run within a single node, so it is necessary to include the option `--nodes=1`
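A minimal sketch of such a job file might look as follows (the thread count, time limit, and program name are placeholder values, not taken from this merge request):

```shell
#!/bin/bash
#SBATCH --nodes=1               # SMP job: all threads must stay on one node
#SBATCH --ntasks=1              # a single task that spawns the threads
#SBATCH --cpus-per-task=8      # number of threads (assumed example value)
#SBATCH --time=01:00:00         # assumed walltime

# Let the OpenMP runtime use exactly the allocated cores
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./my_smp_program           # placeholder binary name
```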
@@ -22,8 +22,7 @@ An example job file would look like:
@@ -131,10 +130,6 @@ where `NUM_PER_NODE` is the number of GPUs **per node** that will be used for th
With the transition to the sub-clusters, it is no longer required to specify the partition with `-p, --partition`.
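In practice this change looks roughly like the following (partition name and job script are illustrative only):

```shell
# Before: the partition had to be selected explicitly at submission time
sbatch --partition=haswell job.sh

# After the transition to sub-clusters: submit from the target cluster's
# own Slurm instance; no -p/--partition flag is needed
sbatch job.sh
```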
@@ -209,10 +204,10 @@ three things:
1. Start job steps with srun as background processes. This is achieved by adding an ampersand at the end of the line.
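A sketch of this pattern, with placeholder binaries and resource counts, might be:

```shell
#!/bin/bash
#SBATCH --ntasks=2              # enough tasks for two concurrent job steps
#SBATCH --cpus-per-task=1

# The trailing ampersand starts each srun job step in the background,
# so the steps run concurrently instead of one after the other
srun --ntasks=1 --exact ./task_a &   # placeholder program names
srun --ntasks=1 --exact ./task_b &

# Block until all background job steps have finished, otherwise the
# batch script would exit and terminate the running steps
wait
```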
@@ -254,40 +249,40 @@ enough resources in total were specified in the header of the batch script.
Setting `--exclusive` **only** makes sure that there will be **no other jobs running on your nodes**.
If you just want to use all available cores in a node, you have to specify how Slurm should organize
them, e.g. with `--partition=haswell --cpus-per-task=24` or `--partition=haswell --ntasks-per-node=24`.
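Put together, a job header that actually uses all cores of an exclusively allocated node could look like this (24 cores is the haswell example from the text; adjust to the node type):

```shell
#SBATCH --exclusive              # keep other jobs off the node, but this
                                 # alone does not allocate its cores to you
#SBATCH --partition=haswell
#SBATCH --ntasks-per-node=24     # explicitly claim all 24 cores per node
```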