Commit e4fca74a authored by Martin Schroschk

Merge branch 'main' into preview

parents 83f5884a cb845efa
@@ -57,7 +57,7 @@ These `/data/horse` and `/data/walrus` can be accessed via workspaces. Please ref
!!! Warning

    All old filesystems will be shut down by the end of 2023.
    To work with your data from Taurus you might have to move/copy them to the new storages.
    For this, we have four new [datamover nodes](/data_transfer/datamover) that have mounted all storages
@@ -110,7 +110,7 @@ of the old and new system. (Do not use the datamovers from Taurus!)
??? "Migration from `/lustre/ssd` or `/beegfs`"

    **You** are entirely responsible for the transfer of these data to the new location.
    Start the `dtrsync` process as soon as possible. (And maybe repeat it at a later time.)
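As a sketch of such a transfer (the workspace paths here are hypothetical and mirror the pattern of the `/scratch` example; adapt them to your own workspace name and location):

```shell
# Hypothetical paths: source on the old filesystem under /data/old,
# target on the new /data/horse filesystem. Run on a datamover node.
dtrsync -a /data/old/lustre/ssd/ws/0/my-workspace/ /data/horse/lustre/ssd/ws/0/my-workspace/
```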
??? "Migration from `/lustre/scratch2` aka `/scratch`"
@@ -120,7 +120,7 @@ of the old and new system. (Do not use the datamovers from Taurus!)
    to `/data/walrus/warm_archive/ws`.
    In case you need to update this (Gigabytes, not Terabytes!) please run `dtrsync` like in
    `dtrsync -a /data/old/lustre/scratch2/ws/0/my-workspace/newest/ /data/horse/lustre/scratch2/ws/0/my-workspace/newest/`
??? "Migration from `/warm_archive`"
@@ -161,3 +161,4 @@ on Taurus.
* **ANSYS** now starts: please check if your specific use case works.
* **login1** is under construction, do not use it at the moment. Workspace creation does
  not work there.
@@ -28,7 +28,7 @@ or setting the option as argument, in case you invoke `mpirun` directly
mpirun --mca io ^ompio ...
```
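Alternatively, assuming the standard Open MPI convention that any MCA parameter can also be set through an `OMPI_MCA_<name>` environment variable, the option can be exported once in the shell or job script:

```shell
# Same effect as "mpirun --mca io ^ompio": exclude the ompio component
# for all subsequent mpirun invocations in this environment.
export OMPI_MCA_io=^ompio
echo "$OMPI_MCA_io"
```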
## Mpirun on partition `alpha` and `ml`
Using `mpirun` on partitions `alpha` and `ml` leads to wrong resource distribution when more than
one node is involved. This yields a strange distribution like e.g. `SLURM_NTASKS_PER_NODE=15,1`
...
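A common workaround for skewed task placement of this kind is to launch the MPI program through Slurm's own `srun` instead of `mpirun`, so that Slurm distributes the tasks evenly itself. A minimal batch-script sketch (partition, node and task counts are placeholders, and `./my_mpi_app` is a hypothetical binary):

```shell
#!/bin/bash
#SBATCH --partition=alpha
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8

# Let Slurm place the tasks: 8 per node on each of the 2 nodes,
# instead of a skewed split like 15,1.
srun ./my_mpi_app
```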