Commit 128ed234 authored by Martin Schroschk

Merge branch 'local_disk' into 'preview'

Document the local_disk feature

See merge request !939
parents 2bf78780 fee96e85
@@ -106,7 +106,7 @@ of the old Taurus and new Barnard system. Do not use the datamovers from Taurus,
transfer need to be invoked from Barnard! Thus, the very first step is to
[login to Barnard](#login-to-barnard).
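For reference, logging in typically looks like the following (a hedged sketch; the `login[1-4].barnard.hpc.tu-dresden.de` hostnames are documented on the Barnard page, and `login1` here is just one of them):

```console
marie@local$ ssh login1.barnard.hpc.tu-dresden.de
```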
-The command `dtinfo` will provide you the mountpoints of the old filesystems
+The command `dtinfo` will provide you the mount points of the old filesystems
```console
marie@barnard$ dtinfo
@@ -171,7 +171,7 @@ your data to the new `/home` filesystem, as well as the working filesystems `/da
Please be aware that there is **no synchronisation process** between your home directories
at Taurus and Barnard. Thus, after the very first transfer, they will become divergent.
-Please follow this instructions for transferring you data from `ssd`, `beegfs` and `scratch` to the
+Please follow these instructions for transferring your data from `ssd`, `beegfs` and `scratch` to the
new filesystems. The instructions and examples are divided by the target not the source filesystem.
This migration task requires a preliminary step: You need to allocate workspaces on the
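The preliminary step referred to above is allocating a workspace on the target filesystem; a transfer can then be started with one of the datamover commands. A hedged sketch of the typical sequence (the workspace name, duration, and source path are purely illustrative):

```console
marie@barnard$ ws_allocate -F horse -r 7 -m marie@tu-dresden.de migration 30
marie@barnard$ dtrsync -a /data/old/lustre/scratch2/ws/0/marie-old-ws/ /data/horse/ws/marie-migration/
```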
@@ -318,9 +318,9 @@ target filesystems.
```
When the last compute system will have been migrated the old file systems will be
-set write-protected and we start a final synchronization (sratch+walrus).
+set write-protected and we start a final synchronization (scratch+walrus).
The target directories for synchronization `/data/horse/lustre/scratch2/ws` and
-`/data/walrus/warm_archive/ws/` will not be deleted automatically in the mean time.
+`/data/walrus/warm_archive/ws/` will not be deleted automatically in the meantime.
## Software
@@ -337,3 +337,7 @@ Please use `module spider` to identify the software modules you need to load.
* We are running the most recent Slurm version.
* You must not use the old partition names.
* Not all things are tested.
+Note that most nodes on Barnard don't have a local disk and space in `/tmp` is **very** limited.
+If you need a local disk, request it with the
+[Slurm feature](slurm.md#node-features-for-selective-job-submission) `--constraint=local_disk`.
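An illustrative interactive request for such a node (task count and time limit are placeholders):

```console
marie@barnard$ srun --constraint=local_disk --ntasks=1 --time=00:30:00 --pty bash -l
```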
@@ -555,8 +555,12 @@ constraints, please refer to the [Slurm documentation](https://slurm.schedmd.com
### Filesystem Features
-A feature `fs_*` is active if a certain filesystem is mounted and available on a node. Access to
-these filesystems are tested every few minutes on each node and the Slurm features are set accordingly.
+If you need a local disk (i.e. `/tmp`) on a diskless cluster (e.g. [Barnard](barnard.md))
+use the feature `local_disk`.
+A feature `fs_*` is active if a certain (global) filesystem is mounted and available on a node.
+Access to these filesystems is tested every few minutes on each node and the Slurm features are
+set accordingly.
| Feature | Description | [Workspace Name](../data_lifecycle/workspaces.md#extension-of-a-workspace) |
|:---------------------|:-------------------------------------------------------------------|:---------------------------------------------------------------------------|
......
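Illustrative use of such a feature in a batch submission (the feature names are placeholders for the names listed in the table above; Slurm combines multiple constraints with `&`):

```console
marie@barnard$ sbatch --constraint="fs_horse&fs_walrus" job_script.sh
```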
@@ -72,6 +72,7 @@ ddl
DDP
DDR
DFG
+diskless
distr
DistributedDataParallel
dmtcp
......