and workflows for production there. For general hints, please refer to these sites:
* [Details on architecture](/jobs_and_resources/architecture_2023),
* [Description of the migration](migration_2023.md).
!!! note "Feedback welcome"

    Please provide your feedback directly via our ticket system. For better processing,
    please add "Barnard:" as a prefix to the subject of the [support ticket](../support/support).
Essential staff from Bull and ZIH are on (well-earned) vacation. Major fixes or
adaptations might have to wait until mid-October.
Here you can find a few hints which might help you with your first steps.
## Login to Barnard
* All users and projects from Taurus can now work on Barnard.
* They can use `login[2-4].barnard.hpc.tu-dresden.de` to access the system
  from campus (or VPN); see the [fingerprints](/access/key_fingerprints/#barnard).
* All users have **new HOME** file systems; this means you have to do two things:
    1. Install your public SSH key on the system:
        - Please create a new ed25519 SSH keypair, secured with a passphrase.
          Please refer to this
          [page for instructions](../../access/ssh_login#before-your-first-connection).
        - After login, add the public key to your `.ssh/authorized_keys` file
          on Barnard (a minimal sketch follows right after this list).
    1. Transfer your data to the new HOME -- see below.
## Data Management
* The `/project` filesystem is the same on Taurus and Barnard
  (mounted read-only on the compute nodes).
* The **new work filesystem** is `/data/horse`.
* The slower `/data/walrus` replaces the old `/warm_archive` and is mounted **read-only** on
  the compute nodes. It can be used to store e.g. results.

These two filesystems (horse and walrus) can be accessed via workspaces. Please refer to the
[workspace page](../../data_lifecycle/workspaces/), if you are not familiar with workspaces. To list
all available workspace filesystems, invoke the command `ws_list -l`.
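For example, listing the filesystems and then allocating a workspace could look like the
following sketch; the workspace name and duration are placeholders, and `marie` stands
for your login:

```console
marie@login$ ws_list -l                       # list the filesystems available for workspaces
marie@login$ ws_allocate -F horse my_data 30  # allocate a 30-day workspace on /data/horse
```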
On the datamover nodes, the directories of the clusters are mounted as follows:

```
directory on datamover      mounting clusters      directory on cluster
--------------------------------------------------------------------------------
```
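As a hedged example of such a transfer with the datamover tools: the mount point
`/data/old/scratch` and both workspace paths below are assumptions for illustration,
so please check the table above (or `dtls`) for the real mount points:

```console
# Copy an old workspace from the datamover's mount of the old /scratch
# into a new workspace on /data/horse (all paths are placeholders).
marie@login$ dtcp -r /data/old/scratch/ws/marie-old_ws/ /data/horse/ws/marie-new_ws/
```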
* In May we copied all workspaces from `/scratch/ws` to
  `/data/horse/lustre/scratch2/ws`. This replication took a **few weeks**. Ideally, you
  can now just **move** the content to a newly created workspace (see the sketch below).
  A second synchronization started on **October 18** and is nearly done.
* Please manually copy the data you need from your `beegfs` or `ssd` workspaces. These
  old storages will be purged, probably by the end of November.
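A sketch of that move, assuming your old workspace was replicated to the path shown
below; all names are placeholders, and `ws_allocate` prints the real path of the new
workspace:

```console
marie@login$ ws_allocate -F horse new_ws 90   # create the target workspace on /data/horse
marie@login$ mv /data/horse/lustre/scratch2/ws/marie-old_ws/* /data/horse/ws/marie-new_ws/
```

Because the replicated data and the new workspace are both on `/data/horse`, the move
should be a cheap rename and not copy any data.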
on Taurus.
* We are running the most recent Slurm version.
* You must not use the old partition names (see the minimal job script below).
* Not everything has been tested yet.
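A minimal sketch of a batch script under these constraints: no partition is requested
explicitly, and the job name and resource values are examples only:

```bash
#!/bin/bash
#SBATCH --job-name=first_test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# Report which node the job ran on.
srun hostname
```

Submit it with `sbatch <script>` and check the output file for the node name.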
## Updates after your feedback
* A **second synchronization** from `/scratch` started on **October 18** and is nearly done.
* The **data transfer tools** now work fine.
* After fixing overly tight security restrictions, **all users can log in** now.
* **ANSYS/2023R1** now starts after initial problems; please check whether your specific use case works.
* **login1** is under observation; do not use it at the moment.