From 89cf746bbd22e3abec9a83bca54d692c9a0f4e80 Mon Sep 17 00:00:00 2001
From: Sebastian Doebel <sebastian.doebel@tu-dresden.de>
Date: Tue, 5 Nov 2024 14:36:37 +0100
Subject: [PATCH] Adjust capella page

---
 .../docs/jobs_and_resources/capella.md        | 65 ++++---------------
 1 file changed, 13 insertions(+), 52 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
index 6b82a999c..3347343e7 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/capella.md
@@ -20,69 +20,30 @@ HPC simulations.
 
 Capella has a fast WEKAio file system mounted on `/data/cat`. It is only mounted on Capella and the
 [Datamover nodes](../data_transfer/datamover.md).
-It should be used as the main working file system on Capella.
-Although all other [filesystems](../data_lifecycle/file_systems.md)
-(`/home`, `/software`, `/data/horse`, `/data/walrus`, etc.) are also available.
-
-### Modules
-
-The easiest way is using the [module system](../software/modules.md).
-All software available from the module system has been deliberately build for the cluster `Alpha`
-i.e., with optimization for Zen4 (Genoa) microarchitecture and CUDA-support enabled.
-
-To check the available modules for `Capella`, use the command
-
-```console
-marie@login.capella$ module spider <module_name>
-```
-
-??? example "Example: Searching and loading PyTorch"
+It should be used as the main working file system on Capella and has to be accessed via [workspaces](../data_lifecycle/workspaces.md).
+Workspaces can only be created on Capella login and compute nodes, not on the other clusters.
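+
+A workspace on `/data/cat` can be created with the usual workspace tools; the filesystem name
+`cat`, the workspace name, and the duration below are illustrative, check `ws_list -l` for the
+filesystem names actually configured:
+
+```console
+marie@login.capella$ ws_allocate --filesystem cat my_workspace 30
+```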
 
-    For example, to check which `PyTorch` versions are available you can invoke
-
-    ```console
-    marie@login.capella$ module spider PyTorch
-
-    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-     PyTorch: PyTorch/2.1.2-CUDA-12.1.1
-    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-    Description:
-      Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
+All other [filesystems](../data_lifecycle/file_systems.md)
+(`/home`, `/software`, `/data/horse`, `/data/walrus`, etc.) are also available.
 
+!!! hint
 
-    You will need to load all module(s) on any one of the lines below before the "PyTorch/2.1.2-CUDA-12.1.1" module is available to load.
+    We recommend storing your data on `/data/walrus` in an archive file and moving only your
+    hot data via the [Datamover nodes](../data_transfer/datamover.md) into `/data/cat`, which
+    should be used as fast staging storage.
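+
+    For example, staging an archive from `/data/walrus` into a `/data/cat` workspace with the
+    Datamover could look like this (the workspace paths are illustrative):
+
+    ```console
+    marie@login.capella$ dtcp /data/walrus/ws/marie-archive/input.tar.gz /data/cat/ws/marie-run/
+    ```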
 
-      release/24.04  GCC/12.3.0  OpenMPI/4.1.5
- 
-    Help:
-      Description
-      ===========
-      Tensors and Dynamic neural networks in Python with strong GPU acceleration.
-      PyTorch is a deep learning framework that puts Python first.
-      
-      
-      More information
-      ================
-       - Homepage: https://pytorch.org/
-    ```
+### Modules
 
-    ```console
-    marie@login.capella$ python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
-    2.1.12
-    True
-    ```
+The easiest way to use software is via the [module system](../software/modules.md).
+All software available from the module system has been deliberately built for the cluster `Capella`,
+i.e., with optimization for the Zen4 (Genoa) microarchitecture and with CUDA support enabled.
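+
+To check the available modules for `Capella`, use the command
+
+```console
+marie@login.capella$ module spider <module_name>
+```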
 
 ### Python Virtual Environments
 
 [Virtual environments](../software/python_virtual_environments.md) allow you to install
 additional Python packages and create an isolated runtime environment. We recommend using
-`virtualenv` for this purpose.
-
-An example how to create an [python virtual environment with `torchvision` package](alpha_centauri.md#python-virtual-environments) is
- described for the GPU alpha cluster and is identical if you are using the Capella cluster.
-
+`venv` for this purpose.
 
 !!! hint
 
     We recommend to use [workspaces](../data_lifecycle/workspaces.md) for your virtual environments.
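+
+    A minimal sketch of creating and activating a virtual environment inside a workspace (the
+    workspace path is illustrative):
+
+    ```console
+    marie@login.capella$ python -m venv /data/cat/ws/marie-python/env
+    marie@login.capella$ source /data/cat/ws/marie-python/env/bin/activate
+    ```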
-
-- 
GitLab