
fix number of extension

Merged Ghost User requested to merge workspaces.md into preview
Storage systems differ in terms of capacity, streaming bandwidth, IOPS rate, etc., and efficiency
constraints do not allow having it all in one system. That is why the fast parallel filesystems at
ZIH have restrictions with regard to the **age of files** and [quota](quotas.md). The mechanism of
workspaces enables users to better manage their HPC data.
<!--Workspaces are primarily login-related.-->
The concept of workspaces is common and used at a large number of HPC centers.
!!! note

    A **workspace** is a directory, with an associated expiration date, created on behalf of a user
    in a certain filesystem.
Once the workspace has reached its expiration date, it gets moved to a hidden directory and enters a
grace period. Once the grace period ends, the workspace is deleted permanently. The maximum lifetime
of a workspace depends on the storage system, and a workspace can only be extended a limited number
of times.
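As a rough illustration of these lifetimes, the following sketch computes when data would disappear
for good. The 90-day duration and the one-month grace period are example values, not guarantees
(the grace period for `scratch` is described further below):

```bash
# Example values only: a workspace allocated for 90 days, followed by a
# one-month (~30-day) grace period before permanent deletion.
duration_days=90
grace_days=30
expires=$(date -u -d "+${duration_days} days" +%Y-%m-%d)
removed=$(date -u -d "+$((duration_days + grace_days)) days" +%Y-%m-%d)
echo "expires on ${expires}, permanently deleted around ${removed}"
```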
To list all filesystems available for workspaces, use:
```console
marie@login$ ws_find -l
Available filesystems:
scratch
warm_archive
beegfs_global0
```
To list all workspaces you currently own, use:
```console
marie@login$ ws_list
id: test-workspace
workspace directory : /scratch/ws/0/marie-test-workspace
remaining time : 89 days 23 hours
```
To create a workspace in one of the listed filesystems, use `ws_allocate`. It is necessary to
specify a unique name and the duration of the workspace.
```console
marie@login$ ws_allocate [options] workspace_name duration
Options:
  -h [ --help ]          produce help message
  -u [ --username ] arg  username
  -g [ --group ]         group workspace
  -c [ --comment ] arg   comment
```
!!! example
    ```console
    marie@login$ ws_allocate -F scratch -r 7 -m marie.testuser@tu-dresden.de test-workspace 90
    Info: creating workspace.
    /scratch/ws/marie-test-workspace
    remaining extensions  : 10
    ```

This will create a workspace on the `scratch` filesystem for a duration of 90
days with an email reminder for 7 days before the expiration.
Setting the reminder to `7` means you will get a reminder email every day, starting 7 days prior
to the expiration date.
### Extension of a Workspace
The lifetime of a workspace is finite. Different filesystems (storage systems) have different
maximum durations. A workspace can be extended multiple times, depending on the filesystem.
| Filesystem (use with parameter `-F`) | Duration, days | Extensions | Remarks |
|:------------------------------------:|:--------------:|:----------:|:--------|
| `ssd` | 30 | 2 | High-IOPS filesystem (`/lustre/ssd`) on SSDs. |
| `beegfs` | 30 | 2 | High-IOPS filesystem (`/beegfs/global0`) on NVMes. |
| `scratch` | 100 | 10 | Scratch filesystem (`/scratch`) with high streaming bandwidth, based on spinning disks. |
| `warm_archive` | 365 | 2 | Capacity filesystem based on spinning disks. |
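To get a feeling for these numbers: under the assumption that every extension is requested just
before expiration and is again granted the full duration, the theoretical upper bound on a
workspace's total lifetime is `(extensions + 1) * duration`. For `scratch`:

```bash
# Illustrative arithmetic only, not a site guarantee.
duration=100    # days per allocation on scratch
extensions=10   # maximum number of extensions on scratch
max_lifetime_days=$(( (extensions + 1) * duration ))
echo "$max_lifetime_days"   # prints 1100
```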
To extend your workspace, use the following command:
```console
marie@login$ ws_extend -F scratch test-workspace 100
Info: extending workspace.
/scratch/ws/marie-test-workspace
remaining extensions : 1
remaining time in days: 100
```
With the `ws_extend` command, a new duration for the workspace is set. The new duration is not
added!
This means when you extend a workspace that expires in 90 days with the command
```console
marie@login$ ws_extend -F scratch my-workspace 40
```
it will now expire in 40 days **not** 130 days.
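In other words, the requested duration replaces the remaining time, as this small sketch of the
semantics shows:

```bash
# Sketch of the ws_extend semantics: the new duration replaces the
# remaining time instead of being added to it.
remaining_days=90
requested_days=40
remaining_days=$requested_days   # effect of the extension
echo "$remaining_days"           # prints 40, not 130
```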
### Deletion of a Workspace
To delete a workspace, use the `ws_release` command. It is mandatory to specify the name of the
workspace and the filesystem in which it is located:
```console
marie@login$ ws_release -F <filesystem> <workspace name>
```
### Restoring Expired Workspaces
At expiration time your workspace will be moved to a special, hidden directory. For a month (in
`warm_archive`: two months), you can still restore your data into an existing workspace.
!!! warning

    When you release a workspace **by hand**, it will not receive a grace period and will be
    **permanently deleted** the **next day**. The advantage of this design is that you can create
    and release workspaces inside jobs without swamping the filesystem with data no one needs
    anymore in the hidden directories (when workspaces are in the grace period).
Use
```console
marie@login$ ws_restore -l -F scratch
```
to get a list of your expired workspaces, and then restore a workspace into an existing, active
workspace, here `new_ws`:
```console
marie@login$ ws_restore -F scratch marie-test-workspace-1234567 new_ws
```
The expired workspace has to be specified by its full name as listed by `ws_restore -l`, including
the username prefix and the ID appended to the name, as shown in the example above.
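Putting the steps together, a full restore round-trip could look like this. This is a sketch
reusing the example names from above; the command output is omitted:

```console
marie@login$ ws_allocate -F scratch new_ws 30
marie@login$ ws_restore -l -F scratch
marie@login$ ws_restore -F scratch marie-test-workspace-1234567 new_ws
```

Afterwards, the restored files are available inside the active workspace `new_ws`.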
The command `ws_register DIR` creates and manages links to all workspaces within the directory
`DIR`. Calling this command will do the following:
- The directory `DIR` will be created if necessary.
- Links to all personal workspaces will be managed:
    - Create links to all available workspaces if not already present.
    - Remove links to released workspaces.
**Remark**: An automatic update of the workspace links can be invoked by putting the command
`ws_register DIR` in your personal shell configuration file (e.g., `.bashrc`).
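For example, such a line could look like the following; the target directory `~/workspaces` is an
arbitrary choice, not a fixed convention:

```bash
# ~/.bashrc (sketch): refresh links to all your workspaces at every login.
ws_register ~/workspaces
```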
A batch job needs a directory for temporary data. This can be deleted afterwards.

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24

module purge
module load modenv/hiera
module load Gaussian

COMPUTE_DIR=gaussian_$SLURM_JOB_ID
export GAUSS_SCRDIR=$(ws_allocate -F ssd $COMPUTE_DIR 7)

# ... run the actual computation here ...

# Release the temporary workspace once the job is done (see the warning on
# released workspaces above: the data is gone the next day).
ws_release -F ssd $COMPUTE_DIR
```

Likewise, other jobs can use temporary workspaces.
For a series of jobs or calculations that work on the same data, you should allocate a workspace
once, e.g., in `scratch` for 100 days:
```console
marie@login$ ws_allocate -F scratch my_scratchdata 100
Info: creating workspace.
/scratch/ws/marie-my_scratchdata
remaining extensions : 2
```

To share this workspace with your group, grant the corresponding permissions:

```console
marie@login$ chmod g+wrx /scratch/ws/marie-my_scratchdata
```
And verify it with:
```console
marie@login$ ls -la /scratch/ws/marie-my_scratchdata
total 8
drwxrwx--- 2 marie hpcsupport 4096 Jul 10 09:03 .
drwxr-xr-x 5 operator adm 4096 Jul 10 09:01 ..
```
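A follow-up job of such a series can then look up the shared workspace instead of allocating a new
one. The sketch below assumes that `ws_find` with a workspace name prints the workspace path (check
`ws_find --help` on your system); the program name and resource settings are placeholders:

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Locate the workspace that was allocated once beforehand.
DATADIR=$(ws_find my_scratchdata)
cd "$DATADIR"

srun ./my_program --input input.dat   # placeholder for the real computation
```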
For data that seldom changes but consumes a lot of space, the warm archive can be used.
This is mounted read-only on the compute nodes, so you cannot use it as a work directory for your
jobs!
```console
marie@login$ ws_allocate -F warm_archive my_inputdata 365
/warm_archive/ws/marie-my_inputdata
remaining extensions : 2
remaining time in days: 365
```
The warm archive is not built for billions of files. There
is a quota of 100,000 files per group. Please archive data.
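Before moving data into the warm archive, you can check how many files a directory tree contains
with plain shell tools; `WSDIR` below is an example variable, not part of the workspace tooling:

```bash
# Count regular files to compare against the 100,000-files-per-group quota.
WSDIR="${WSDIR:-.}"   # point this at your workspace path
file_count=$(find "$WSDIR" -type f | wc -l)
echo "${file_count} files under ${WSDIR}"
```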
To see your active quota, use
```console
marie@login$ qinfo quota /warm_archive/ws/
```
Note that the workspaces reside under the mountpoint `/warm_archive/ws/` and not `/warm_archive`.