ZIH / hpcsupport / hpc-compendium · Commits

Commit 89cf746b, authored 4 months ago by Sebastian Döbel

    Adjust capella page

Parent: 4e6d7ad5
No related branches or tags found.
Merge requests: !1164 "Automated merge from preview to main", !1152 "Add page for Capella"

Showing 1 changed file:
doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md (13 additions, 52 deletions)
@@ -20,69 +20,30 @@ HPC simulations.

Capella has a fast WEKAio file system mounted on `/data/cat`. It is only mounted on Capella and the
[Datamover nodes](../data_transfer/datamover.md).
It should be used as the main working file system on Capella and has to be used via
[workspaces](../data_lifecycle/workspaces.md).
Workspaces can only be created on Capella login and compute nodes, not on the other clusters.
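A minimal sketch of creating such a workspace with the workspace tools, assuming `/data/cat` is registered
under the filesystem name `cat` (the workspace name and duration below are placeholders; check `ws_list -l`
on a Capella node for the actual filesystem name):

```console
marie@login.capella$ ws_list -l                        # list file systems available for workspaces
marie@login.capella$ ws_allocate -F cat my_project 30  # allocate a workspace on /data/cat for 30 days
marie@login.capella$ ws_list                           # show your active workspaces and their paths
```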
All other [filesystems](../data_lifecycle/file_systems.md)
(`/home`, `/software`, `/data/horse`, `/data/walrus`, etc.) are also available.
We recommend to store your data on `/data/walrus` in an archive file and only move your hot data via the
[Datamover nodes](../data_transfer/datamover.md) into `/data/cat`, which should be used as a fast
staging area.
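A minimal sketch of that staging workflow, using the Datamover commands `dtcp` and `dtrsync` described on
the linked Datamover page (the workspace paths below are placeholders):

```console
marie@login.capella$ dtcp /data/walrus/ws/marie-archive/input.tar /data/cat/ws/marie-run/               # stage hot data to the fast file system
marie@login.capella$ dtrsync -a /data/cat/ws/marie-run/results/ /data/walrus/ws/marie-archive/results/  # move results back for long-term storage
```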
### Modules

The easiest way of using software is via the [module system](../software/modules.md).
All software available from the module system has been deliberately built for the cluster `Capella`,
i.e., with optimization for the Zen4 (Genoa) microarchitecture and CUDA support enabled.

To check the available modules for `Capella`, use the command

```console
marie@login.capella$ module spider <module_name>
```

??? example "Example: Searching and loading PyTorch"

    For example, to check which `PyTorch` versions are available you can invoke

    ```console
    marie@login.capella$ module spider PyTorch
    --------------------------------------------------------------------------------------------------
      PyTorch: PyTorch/2.1.2-CUDA-12.1.1
    --------------------------------------------------------------------------------------------------
        Description:
          Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a
          deep learning framework that puts Python first.

        You will need to load all module(s) on any one of the lines below before the
        "PyTorch/2.1.2-CUDA-12.1.1" module is available to load.

          release/24.04  GCC/12.3.0  OpenMPI/4.1.5

        Help:
          Description
          ===========
          Tensors and Dynamic neural networks in Python with strong GPU acceleration.
          PyTorch is a deep learning framework that puts Python first.

          More information
          ================
           - Homepage: https://pytorch.org/
    ```
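    The spider output above already names the required toolchain. A minimal sketch of loading it
    (module names taken from the output above; the exact chain may differ between releases):

    ```console
    marie@login.capella$ module load release/24.04 GCC/12.3.0 OpenMPI/4.1.5 PyTorch/2.1.2-CUDA-12.1.1
    ```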
    ```console
    marie@login.capella$ python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
    2.1.2
    True
    ```
### Python Virtual Environments

[Virtual environments](../software/python_virtual_environments.md) allow you to install
additional Python packages and create an isolated runtime environment. We recommend using
`venv` for this purpose.

An example of how to create a [Python virtual environment with the `torchvision` package](alpha_centauri.md#python-virtual-environments)
is described for the GPU cluster `Alpha Centauri` and is identical if you are using the Capella cluster.

!!! hint

    We recommend to use [workspaces](../data_lifecycle/workspaces.md) for your virtual environments.
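A rough sketch of that workflow on Capella, combining a workspace with `venv` (the workspace name, path
layout, and module chain below are placeholders, not taken from this page):

```console
marie@login.capella$ ws_allocate -F cat python_env 30                        # workspace for the environment
marie@login.capella$ module load release/24.04 GCC/12.3.0 Python             # placeholder module chain
marie@login.capella$ python -m venv /data/cat/ws/marie-python_env/env
marie@login.capella$ source /data/cat/ws/marie-python_env/env/bin/activate
(env) marie@login.capella$ pip install torchvision
```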