Commit 5860bc1f
Authored 10 months ago by Etienne Keller

Merge branch 'issue-587' into 'preview'

Remove partition interactive

See merge request !1065

Parents: 2fdc6312, 3d5a5a21
Related merge requests: !1086 "Automated merge from preview to main", !1065 "Remove partition interactive"
Showing 2 changed files with 35 additions and 28 deletions:

- doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md (+33, −26)
- doc.zih.tu-dresden.de/docs/software/visualization.md (+2, −2)
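The same overview can be reproduced locally with plain git once the repository is cloned (a sketch; the `marie@local$` prompt is illustrative):

```console
marie@local$ git log -1 5860bc1f                  # commit message, author, and parents
marie@local$ git diff --stat 2fdc6312 5860bc1f    # per-file changes relative to the first parent (preview)
```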
doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md (+33, −26)
````diff
@@ -165,30 +165,44 @@ allocation with desired switch count or the time limit expires. Acceptable time
 ## Interactive Jobs
 
 Interactive activities like editing, compiling, preparing experiments etc. are normally limited to
-the login nodes. For longer interactive sessions, you can allocate cores on the compute node with
-the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
+the login nodes. For longer interactive sessions, you can allocate resources on the compute node
+with the command `salloc`. It takes the same options as `sbatch` to specify the required resources.
 `salloc` returns a new shell on the node where you submitted the job. You need to use the command
 `srun` in front of the following commands to have these commands executed on the allocated
-resources. If you allocate more than one task, please be aware that `srun` will run the command on
-each allocated task by default! To release the allocated resources, invoke the command `exit` or
+resources. If you request for more than one task, please be aware that `srun` will run the command
+on each allocated task by default! To release the allocated resources, invoke the command `exit` or
 `scancel <jobid>`.
 
-```console
-marie@login$ salloc --nodes=2
-salloc: Pending job allocation 27410653
-salloc: job 27410653 queued and waiting for resources
-salloc: job 27410653 has been allocated resources
-salloc: Granted job allocation 27410653
-salloc: Waiting for resource configuration
-salloc: Nodes taurusi[6603-6604] are ready for job
-marie@login$ hostname
-tauruslogin5.taurus.hrsk.tu-dresden.de
-marie@login$ srun hostname
-taurusi6604.taurus.hrsk.tu-dresden.de
-taurusi6603.taurus.hrsk.tu-dresden.de
-marie@login$ exit # ending the resource allocation
-```
+!!! example "Example: Interactive allocation using `salloc`"
+
+    The following code listing depicts the allocation of two nodes with two tasks on each node with a
+    time limit of one hour on the cluster `Barnard` for interactive usage.
+
+    ```console linenums="1"
+    marie@login.barnard$ salloc --nodes=2 --ntasks-per-node=2 --time=01:00:00
+    salloc: Pending job allocation 1234567
+    salloc: job 1234567 queued and waiting for resources
+    salloc: job 1234567 has been allocated resources
+    salloc: Granted job allocation 1234567
+    salloc: Waiting for resource configuration
+    salloc: Nodes n[1184,1223] are ready for job
+    [...]
+    marie@login.barnard$ hostname
+    login1
+    marie@login.barnard$ srun hostname
+    n1184
+    n1184
+    n1223
+    n1223
+    marie@login.barnard$ exit # ending the resource allocation
+    ```
+
+    After Slurm successfully allocated resources for the job, a new shell is created on the submit
+    host (cf. lines 9-10).
+    In order to use the allocated resources, you need to invoke your commands with `srun` (cf. lines
+    11 ff).
 
 The command `srun` also creates an allocation, if it is running outside any `sbatch` or `salloc`
 allocation.
````
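To make that last point concrete, a plain `srun` issued on a login node requests its own allocation first; a minimal sketch (the job ID and node name below are invented for illustration):

```console
marie@login.barnard$ srun --ntasks=1 --time=00:30:00 --pty bash -l
srun: job 7654321 queued and waiting for resources
srun: job 7654321 has been allocated resources
marie@n1184$ exit   # leaving the shell ends the job and frees the allocation
```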
````diff
@@ -218,13 +232,6 @@ taurusi6604.taurus.hrsk.tu-dresden.de
 shell, as shown in the example above. If you missed adding `-l` at submitting the interactive
 session, no worry, you can source this files also later on manually (`source /etc/profile`).
 
-!!! note "Partition `interactive`"
-
-    A dedicated partition `interactive` is reserved for short jobs (< 8h) with no more than one job
-    per user. An interactive partition is available for every regular partition, e.g.
-    `alpha-interactive` for `alpha`. Please check the availability of nodes there with
-    `sinfo |grep 'interactive\|AVAIL' |less`
-
 ### Interactive X11/GUI Jobs
 
 Slurm will forward your X11 credentials to the first (or even all) node for a job with the
````
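Since the dedicated `interactive` partitions referenced in the removed note no longer exist, the partitions that remain on a cluster can still be listed directly with `sinfo`; a generic sketch (output abbreviated, partition names depend on the cluster):

```console
marie@login$ sinfo --summarize
PARTITION  AVAIL  TIMELIMIT   NODES(A/I/O/T)  NODELIST
[...]
```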
doc.zih.tu-dresden.de/docs/software/visualization.md (+2, −2)
````diff
@@ -158,7 +158,7 @@ processes.
 
 ```console
 marie@login$ module load ParaView/5.7.0-osmesa
-marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --partition=interactive --pty pvserver --force-offscreen-rendering
+marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --pty pvserver --force-offscreen-rendering
 srun: job 2744818 queued and waiting for resources
 srun: job 2744818 has been allocated resources
 Waiting for client...
````
````diff
@@ -254,5 +254,5 @@ it into thinking your provided GL rendering version is higher than what it actua
 marie@login$ export MESA_GL_VERSION_OVERRIDE=3.2
 
 # 3rd, start the ParaView GUI inside an interactive job. Don't forget the --x11 parameter for X forwarding:
-marie@login$ srun --ntasks=1 --cpus-per-task=1 --partition=interactive --mem-per-cpu=2500 --pty --x11=first paraview
+marie@login$ srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=2500 --pty --x11=first paraview
 ```
````
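Note that `--x11=first` in the last command can only display the ParaView GUI if the SSH session to the login node itself forwards X11; a minimal sketch (`<login-node>` is a placeholder):

```console
marie@local$ ssh -XC marie@<login-node>   # -X enables X11 forwarding, -C adds compression
```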