Commit 5860bc1f authored by Etienne Keller

Merge branch 'issue-587' into 'preview'

Remove partition interactive

See merge request !1065
parents 2fdc6312 3d5a5a21
2 merge requests: !1086 Automated merge from preview to main, !1065 Remove partition interactive
@@ -165,30 +165,44 @@ allocation with desired switch count or the time limit expires. Acceptable time
## Interactive Jobs
Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
-the login nodes. For longer interactive sessions, you can allocate cores on the compute node with
-the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
+the login nodes. For longer interactive sessions, you can allocate resources on the compute node
+with the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
`salloc` returns a new shell on the node where you submitted the job. You need to use the command
`srun` in front of the following commands to have these commands executed on the allocated
-resources. If you allocate more than one task, please be aware that `srun` will run the command on
-each allocated task by default! To release the allocated resources, invoke the command `exit` or
+resources. If you request more than one task, please be aware that `srun` will run the command
+on each allocated task by default! To release the allocated resources, invoke the command `exit` or
`scancel <jobid>`.
-```console
-marie@login$ salloc --nodes=2
-salloc: Pending job allocation 27410653
-salloc: job 27410653 queued and waiting for resources
-salloc: job 27410653 has been allocated resources
-salloc: Granted job allocation 27410653
-salloc: Waiting for resource configuration
-salloc: Nodes taurusi[6603-6604] are ready for job
-marie@login$ hostname
-tauruslogin5.taurus.hrsk.tu-dresden.de
-marie@login$ srun hostname
-taurusi6604.taurus.hrsk.tu-dresden.de
-taurusi6603.taurus.hrsk.tu-dresden.de
-marie@login$ exit # ending the resource allocation
-```
+!!! example "Example: Interactive allocation using `salloc`"
+    The following code listing depicts the allocation of two nodes with two tasks on each node with a
+    time limit of one hour on the cluster `Barnard` for interactive usage.
+    ```console linenums="1"
+    marie@login.barnard$ salloc --nodes=2 --ntasks-per-node=2 --time=01:00:00
+    salloc: Pending job allocation 1234567
+    salloc: job 1234567 queued and waiting for resources
+    salloc: job 1234567 has been allocated resources
+    salloc: Granted job allocation 1234567
+    salloc: Waiting for resource configuration
+    salloc: Nodes n[1184,1223] are ready for job
+    [...]
+    marie@login.barnard$ hostname
+    login1
+    marie@login.barnard$ srun hostname
+    n1184
+    n1184
+    n1223
+    n1223
+    marie@login.barnard$ exit # ending the resource allocation
+    ```
+After Slurm successfully allocated resources for the job, a new shell is created on the submit
+host (cf. lines 9-10).
+In order to use the allocated resources, you need to invoke your commands with `srun` (cf. lines
+11 ff).
+The command `srun` also creates an allocation, if it is running outside any `sbatch` or `salloc`
+allocation.
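The last sentence can be seen in action by requesting a one-off interactive shell directly with `srun`; the following is a minimal sketch, not part of the documentation above, and the resource values are only placeholders:

```console
# Placeholder resources: one task for 30 minutes; `--pty bash -l` opens an interactive login shell
marie@login.barnard$ srun --ntasks=1 --time=00:30:00 --pty bash -l
```

Leaving this shell with `exit` also ends the implicit allocation that `srun` created.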
@@ -218,13 +232,6 @@ taurusi6604.taurus.hrsk.tu-dresden.de
shell, as shown in the example above. If you missed adding `-l` when submitting the interactive
session, no worry, you can source these files later on manually (`source /etc/profile`).
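As a quick illustration of that remedy, a minimal sketch (the compute-node prompt is only illustrative):

```console
# Inside an interactive shell that was started without -l: load the system profile by hand
marie@compute$ source /etc/profile
```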
-!!! note "Partition `interactive`"
-    A dedicated partition `interactive` is reserved for short jobs (< 8h) with no more than one job
-    per user. An interactive partition is available for every regular partition, e.g.
-    `alpha-interactive` for `alpha`. Please check the availability of nodes there with
-    `sinfo |grep 'interactive\|AVAIL' |less`
### Interactive X11/GUI Jobs
Slurm will forward your X11 credentials to the first (or even all) node for a job with the
[...]
@@ -158,7 +158,7 @@ processes.
```console
marie@login$ module load ParaView/5.7.0-osmesa
-marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --partition=interactive --pty pvserver --force-offscreen-rendering
+marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --pty pvserver --force-offscreen-rendering
srun: job 2744818 queued and waiting for resources
srun: job 2744818 has been allocated resources
Waiting for client...
@@ -254,5 +254,5 @@ it into thinking your provided GL rendering version is higher than what it actually
marie@login$ export MESA_GL_VERSION_OVERRIDE=3.2
# 3rd, start the ParaView GUI inside an interactive job. Don't forget the --x11 parameter for X forwarding:
-marie@login$ srun --ntasks=1 --cpus-per-task=1 --partition=interactive --mem-per-cpu=2500 --pty --x11=first paraview
+marie@login$ srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=2500 --pty --x11=first paraview
```