Commit 5860bc1f authored by Etienne Keller

Merge branch 'issue-587' into 'preview'

Remove partition interactive

See merge request !1065
parents 2fdc6312 3d5a5a21
2 merge requests: !1086 Automated merge from preview to main, !1065 Remove partition interactive
@@ -165,30 +165,44 @@ allocation with desired switch count or the time limit expires. Acceptable time
 ## Interactive Jobs

 Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
-the login nodes. For longer interactive sessions, you can allocate cores on the compute node with
-the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
+the login nodes. For longer interactive sessions, you can allocate resources on the compute node
+with the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
 `salloc` returns a new shell on the node where you submitted the job. You need to use the command
 `srun` in front of the following commands to have these commands executed on the allocated
-resources. If you allocate more than one task, please be aware that `srun` will run the command on
-each allocated task by default! To release the allocated resources, invoke the command `exit` or
+resources. If you request more than one task, please be aware that `srun` will run the command
+on each allocated task by default! To release the allocated resources, invoke the command `exit` or
 `scancel <jobid>`.

-```console
-marie@login$ salloc --nodes=2
-salloc: Pending job allocation 27410653
-salloc: job 27410653 queued and waiting for resources
-salloc: job 27410653 has been allocated resources
-salloc: Granted job allocation 27410653
-salloc: Waiting for resource configuration
-salloc: Nodes taurusi[6603-6604] are ready for job
-marie@login$ hostname
-tauruslogin5.taurus.hrsk.tu-dresden.de
-marie@login$ srun hostname
-taurusi6604.taurus.hrsk.tu-dresden.de
-taurusi6603.taurus.hrsk.tu-dresden.de
-marie@login$ exit # ending the resource allocation
-```
+!!! example "Example: Interactive allocation using `salloc`"
+
+    The following code listing depicts the allocation of two nodes with two tasks on each node
+    with a time limit of one hour on the cluster `Barnard` for interactive usage.
+
+    ```console linenums="1"
+    marie@login.barnard$ salloc --nodes=2 --ntasks-per-node=2 --time=01:00:00
+    salloc: Pending job allocation 1234567
+    salloc: job 1234567 queued and waiting for resources
+    salloc: job 1234567 has been allocated resources
+    salloc: Granted job allocation 1234567
+    salloc: Waiting for resource configuration
+    salloc: Nodes n[1184,1223] are ready for job
+    [...]
+    marie@login.barnard$ hostname
+    login1
+    marie@login.barnard$ srun hostname
+    n1184
+    n1184
+    n1223
+    n1223
+    marie@login.barnard$ exit # ending the resource allocation
+    ```
+
+    After Slurm successfully allocated resources for the job, a new shell is created on the submit
+    host (cf. lines 9-10).
+    In order to use the allocated resources, you need to invoke your commands with `srun`
+    (cf. lines 11 ff).

 The command `srun` also creates an allocation, if it is running outside any `sbatch` or `salloc`
 allocation.
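When `srun` is used outside of an existing allocation, as noted in the last two context lines above, it requests the resources itself and releases them once the command finishes. A minimal sketch, assuming an interactive `bash` is started via `srun` (the job id, the node name, and the chosen resource options are purely illustrative):

```console
marie@login.barnard$ srun --nodes=1 --ntasks=1 --time=00:30:00 --pty bash
srun: job 1234568 queued and waiting for resources
srun: job 1234568 has been allocated resources
marie@n1184$ hostname   # the shell now runs on the allocated compute node
n1184
marie@n1184$ exit       # releases the allocation created by srun
```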
@@ -218,13 +232,6 @@ taurusi6604.taurus.hrsk.tu-dresden.de
 shell, as shown in the example above. If you missed adding `-l` when submitting the interactive
 session, no worries, you can source these files later on manually (`source /etc/profile`).
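A hedged sketch of these two options (the node name and resource options are illustrative): either request a login shell directly with `-l`, or source the profile inside an already running interactive shell.

```console
# request a login shell ('-l') so that /etc/profile is sourced automatically
marie@login$ srun --ntasks=1 --time=00:30:00 --pty bash -l
# or source the profile manually inside an already running interactive shell
marie@n1185$ source /etc/profile
```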

-!!! note "Partition `interactive`"
-
-    A dedicated partition `interactive` is reserved for short jobs (< 8h) with no more than one job
-    per user. An interactive partition is available for every regular partition, e.g.
-    `alpha-interactive` for `alpha`. Please check the availability of nodes there with
-    `sinfo |grep 'interactive\|AVAIL' |less`
-
 ### Interactive X11/GUI Jobs

 Slurm will forward your X11 credentials to the first (or even all) node for a job with the
...
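Such X11 forwarding is requested with the `--x11` option of `srun`, as also used in the ParaView example below. A minimal sketch, assuming some X application is available on the node (the application `xeyes` and the resource options are only illustrative):

```console
marie@login$ srun --ntasks=1 --time=00:10:00 --pty --x11=first xeyes
```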
@@ -158,7 +158,7 @@ processes.
 ```console
 marie@login$ module load ParaView/5.7.0-osmesa
-marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --partition=interactive --pty pvserver --force-offscreen-rendering
+marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --pty pvserver --force-offscreen-rendering
 srun: job 2744818 queued and waiting for resources
 srun: job 2744818 has been allocated resources
 Waiting for client...
@@ -254,5 +254,5 @@ it into thinking your provided GL rendering version is higher than what it actua
 marie@login$ export MESA_GL_VERSION_OVERRIDE=3.2

 # 3rd, start the ParaView GUI inside an interactive job. Don't forget the --x11 parameter for X forwarding:
-marie@login$ srun --ntasks=1 --cpus-per-task=1 --partition=interactive --mem-per-cpu=2500 --pty --x11=first paraview
+marie@login$ srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=2500 --pty --x11=first paraview
 ```