Project: ZIH / hpcsupport / hpc-compendium

Commit 321fbba8 — "Review content"
Authored 5 months ago by Martin Schroschk
Parent: aa6476bc
No related branches or tags found.
Part of 2 merge requests: !1138 "Automated merge from preview to main" and !1015 "updated visualization.md (issue #545)"

Changes: 1 changed file
doc.zih.tu-dresden.de/docs/software/visualization.md (+25 additions, −26 deletions)
@@ -10,16 +10,18 @@ ParaView can be used in [interactive mode](#interactive-mode) as well as in
 [batch mode](#batch-mode-pvbatch). Both modes are documented in more details in the following
 subsections.

-!!! note "WLOG ParaView module and Cluster"
+!!! warning "WLOG ParaView module and cluster"

     Without loss of generality, we stick to a certain ParaView module (from a certain module
-    release) in the following documentation and provided examples. Do not blind copy the examples.
+    release) in the following documentation and provided examples. **Do not blind copy the
+    examples.**
     Furthermore, please **adopt the commands to your needs**, e.g., the concrete ParaView module you
     want to use.

-    The same holds for the cluster. The documentation refers to
-    [`Barnard`](../jobs_and_resources/hardware_overview.md#barnard). If you need to use ParaView on one of the other
-    cluster, this documentation should hold too.
+    The same holds for the cluster used in the documentation and examples. The documentation refers
+    to the cluster [`Barnard`](../jobs_and_resources/hardware_overview.md#barnard). If you want to
+    use ParaView on [one of the other clusters](../jobs_and_resources/hardware_overview.md), this
+    documentation should hold too.

 ### ParaView Modules
@@ -42,7 +44,7 @@ The command `module spider <module-name>` will show you, how to load a certain P

 ??? example "Example on how to load a ParaView module"

-    For example, to obtain information on how to properly load the `ParaView/5.10.1-mpi` module, you
+    For example, to obtain information on how to properly load the module `ParaView/5.10.1-mpi`, you
     need to invoke the `module spider` command as follows:

     ```console
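The console block above is cut off by this hunk. Judging from the preceding sentence, the invocation inside it is the `module spider` query for the named module; a minimal sketch:

```console
marie@login$ module spider ParaView/5.10.1-mpi
```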
@@ -55,17 +57,18 @@ The command `module spider <module-name>` will show you, how to load a certain P
         [...]
     ```

-    Obvisouly, the `ParaView/5.10.1-mpi` module is available within two releases and depends in
+    Obviously, the `ParaView/5.10.1-mpi` module is available within two releases and depends in
     both cases on the two modules `GCC/11.3.0` and `OpenMPI/4.1.4`. Without loss of generality, a
     valid command to load `ParaView/5.10.1-mpi` is

     ```console
-    marie@login$ module load release/23.10  GCC/11.3.0  OpenMPI/4.1.4
+    marie@login$ module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4
     ```

 ### Interactive Mode

-There are three different ways of using ParaView interactively on ZIH systems:
+There are three different ways of using ParaView interactively on ZIH systems, which are described
+in more details in the following subsections:

 - [GUI via NICE DCV](#using-the-gui-via-nice-dcv)
 - [Client-Server mode with MPI-parallel off-screen-rendering](#using-client-server-mode-with-mpi-parallel-offscreen-rendering)
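The command in the hunk above loads only the release and the two prerequisite modules. Following standard Lmod behavior, the ParaView module itself would be loaded on top of them; a minimal sketch of the complete sequence (module names and versions taken from this commit, the exact order being an assumption):

```console
# Load the release and the toolchain modules the ParaView module depends on
marie@login$ module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4

# Then load ParaView itself and verify what is loaded (standard Lmod commands)
marie@login$ module load ParaView/5.10.1-mpi
marie@login$ module list
```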
@@ -79,7 +82,7 @@ handling. First, you need to open a DCV session on the Visualization cluster (us
 profiles, then click on the *DCV* tile in the lower section named *Other*). Please
 find further instructions on how to start DCV on the [virtual desktops page](virtual_desktops.md).

 In your virtual desktop session, start a terminal (right-click on desktop ->
-Terminal), then load the ParaView module as usual and start the GUI:
+Terminal or *Activities -> Terminal*), then load the ParaView module as usual and start the GUI:

 ```console
 marie@dcv$ module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4 ParaView/5.11.1-mpi
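The console block continues beyond this hunk. Since the text says the GUI is started right after loading the module, the continuation is presumably just launching the `paraview` executable from the DCV terminal; a sketch under that assumption:

```console
# Assumed continuation: launch the ParaView GUI inside the DCV session
marie@dcv$ paraview &
```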
@@ -107,13 +110,13 @@ The *pvserver* can be run in parallel using MPI. To do so, load the desired Para
 start the `pvserver` executable in offscreen rendering mode within an interactive allocation via
 `srun`.

-???+ example "Start pvserver"
+???+ example "Start `pvserver`"

     Here, we ask for 8 MPI tasks on one node for 4 hours within an interactive allocation. Please
     adopt the time limit and ressources to your needs.

     ```console
-    marie@login$ module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4 ParaView/5.10.1-mpi
+    marie@login$ module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4 ParaView/5.11.1-mpi
     marie@login$ srun --nodes=1 --ntasks=8 --mem-per-cpu=2500 --time=04:00:00 --pty pvserver --force-offscreen-rendering
     srun: job 1730359 queued and waiting for resources
     srun: job 1730359 has been allocated resources
@@ -128,16 +131,16 @@ are printed.

 !!! tip "Custom port"

     If the default port `11111` is already in use, an alternative or custom port can be specified
-    via the commandline option `-sp=port` to `pvserver`.
+    via the commandline option `-sp=<PORT>` to `pvserver`.

 The output from `pvserver` contains the node name which your job and server runs on. However, since
 the node names of the cluster are not present in the public domain name system (only
 cluster-internally), you cannot just use this line as-is for connection with your client. Instead,
-you need to establish a so-called forward SSH tunnel to localhost. You first have to resolve the
-name to an IP address on ZIH systems using `host` in another SSH session. Then, the SSH tunnel
-can be created from your workstation. The following example will depict both steps: Resolve the IP
-of the compute node and finaly create a forward SSH tunnel to localhost on port 22222 (or what ever
-port is preferred).
+you need to establish a so-called forward SSH tunnel to your localhost. You first have to resolve
+the name to an IP address on ZIH systems using `host` in another SSH session. Then, the SSH tunnel
+can be created from your workstation. The following example will depict both steps: Resolve the IP
+of the compute node and finally create a forward SSH tunnel to localhost on port 22222 (or what
+ever port is preferred).

 ???+ example "SSH tunnel"
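The body of the "SSH tunnel" example lies outside this hunk. A minimal sketch of the two steps described above, assuming a hypothetical compute node name `n1234`, a placeholder `<IP_OF_NODE>` for the resolved address, a placeholder `<zih-login-node>` for the login node you actually use, and the default `pvserver` port `11111`:

```console
# Step 1 (on ZIH systems): resolve the compute node name to an IP address
marie@login$ host n1234

# Step 2 (from your workstation): forward local port 22222 to port 11111 on that IP
marie@local$ ssh -L 22222:<IP_OF_NODE>:11111 marie@<zih-login-node>
```

With such a tunnel in place, the ParaView client on the workstation would connect to `localhost:22222` instead of the cluster-internal node name.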
@@ -221,7 +224,7 @@ it into thinking your provided GL rendering version is higher than what it actua
     marie@login$ srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=2500 --pty --x11=first paraview
     ```

-### Batch Mode - pvbatch
+### Batch Mode (`pvbatch`)

 ParaView can run in batch mode, i.e., without opening the ParaView GUI, executing a Python script.
 This way, common visualization tasks can be automated. There are two Python interfaces: `pvpython`
@@ -232,7 +235,7 @@ parallel, if it was built using MPI.

     ParaView is shipped with a prebuild MPI library and **pvbatch has to be
     invoked using this very mpiexec** command. Make sure to not use `srun`
-    or `mpiexec` from another MPI module, e.g., check what `mpiexec` is in
+    or `mpiexec` from another MPI module, i.e., check what `mpiexec` is in
     the path:

     ```console
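The console block that follows is truncated in this hunk. A common way to perform the check described above (an assumption, not necessarily the exact command in the file):

```console
# With the ParaView module loaded, confirm that mpiexec resolves to ParaView's bundled MPI
marie@login$ which mpiexec
```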
@@ -280,15 +283,13 @@ interactive allocation.
     salloc: Pending job allocation 336202
     salloc: job 336202 queued and waiting for resources
     salloc: job 336202 has been allocated resources
-    salloc: Granted job allocation 336202
-    salloc: Waiting for resource configuration
-    salloc: Nodes taurusi6605 are ready for job
     [...]

     # Make sure to only use ParaView
     marie@compute$ module purge
     marie@compute$ module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4 ParaView/5.11.1-mpi

-    # Go to working directory, e.g., workspace
+    # Go to working directory, e.g., your workspace
     marie@compute$ cd /path/to/workspace

     # Execute pvbatch using 16 MPI processes in parallel on allocated resources
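The actual `pvbatch` invocation sits outside this hunk. Based on the comment above and the invocation shown in the last hunk of this commit, it presumably looks roughly like the following (the process count and script name are illustrative):

```console
# Run the batch script with ParaView's bundled mpiexec on 16 MPI processes
marie@compute$ mpiexec -n 16 --bind-to core pvbatch --mpi --force-offscreen-rendering pvbatch-script.py
```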
@@ -316,7 +317,5 @@ and pass the option `--displays $CUDA_VISIBLE_DEVICES` to `pvbatch`.
     module purge
     module load release/23.10 GCC/11.3.0 OpenMPI/4.1.4 ParaView/5.11.1-mpi

     mpiexec -n $SLURM_CPUS_PER_TASK --bind-to core pvbatch --mpi --displays $CUDA_VISIBLE_DEVICES --force-offscreen-rendering pvbatch-script.py
-    #or
-    pvbatch --mpi --displays $CUDA_VISIBLE_DEVICES --force-offscreen-rendering pvbatch-script.py
     ```