Commit be6cd3d2 in hpc-compendium, authored 1 year ago by Ulf Markwardt
Commit message: update
Parent: 384f0013
No related branches or tags found.
Part of 3 merge requests: !899 (Preview), !897 (Draft: Preview), !894 (Barnard update)
Changes: 1 changed file, doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md, with 31 additions and 14 deletions.
@@ -6,33 +6,40 @@ and workflows for production there. For general hints please refer to these site
* [Details on architecture](/jobs_and_resources/architecture_2023),
* [Description of the migration](migration_2023.md).
-Please provide your feedback directly via our ticket system. For better processing,
-please add "Barnard:" as a prefix to the subject of the
-[support ticket](../support/support).
+!!! "Feedback welcome"
+    Please provide your feedback directly via our ticket system. For better processing,
+    please add "Barnard:" as a prefix to the subject of the
+    [support ticket](../support/support).
Essential staff from Bull and ZIH is on (well-earned) vacation. Major fixes or
adaptations might have to wait until mid-October.
Here you can find a few hints which might help you with the first steps.

## Login to Barnard
* All users and projects from Taurus now can work on Barnard.
-* They can use `login[1-2].barnard.hpc.tu-dresden.de` to access the system
+* They can use `login[2-4].barnard.hpc.tu-dresden.de` to access the system
  from campus (or VPN). [Fingerprints](/access/key_fingerprints/#barnard)
-* All users have *new* home file systems, this means:
+* All users have **new HOME** file systems, this means you have to do two things:
1. Install your public ssh key on the system
    - Please create a new SSH keypair with ed25519 encryption, secured with
      a passphrase. Please refer to this
      [page for instructions](../../access/ssh_login#before-your-first-connection).
    - After login, add the public key to your `.ssh/authorized_keys` file
      on Barnard.
1. "Transfer your data for HOME" -- see below.
## Data Management

* The `/project` filesystem is the same on Taurus and Barnard
  (mounted read-only on the compute nodes).
-* The **new work filesystem** is `/data/horse`. The slower `/data/walrus` can be used
-  to store e.g. results. Both can be accessed via workspaces. Please refer to the
+* The **new work filesystem** is `/data/horse`.
+* The slower `/data/walrus` shall substitute the old `/warm_archive` - mounted **read-only** on
+  the compute nodes. It can be used to store e.g. results.
+  These two (horse + walrus) can be accessed via workspaces. Please refer to the
  [workspace page](../../data_lifecycle/workspaces/), if you are not familiar with workspaces. To list
  all available workspace filesystems, invoke the command `ws_list -l`.
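To make the workspace workflow above concrete, a minimal sketch; the workspace name and duration are arbitrary, and it assumes the new work filesystem is registered with the workspace tools under the name `horse`:

```bash
# List all available workspace filesystems and their limits.
ws_list -l

# Allocate a workspace on the new work filesystem, e.g. for 30 days.
ws_allocate -F horse my_results 30

# List your own workspaces; this prints the full path of my_results.
ws_list
```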
@@ -63,10 +70,11 @@ directory on datamover mounting clusters directory on cluster
--------------------------------------------------------------------------------
```
-* In May (!) we have copied all workspaces from `/scratch/ws` data to
+* In May we have copied all workspaces from `/scratch/ws` data to
  ` /data/horse/lustre/scratch2/ws`. This replication took a **few weeks**. Ideally you
-  can now just *move* their *content* to a newly created workspace. - Of course,
-  everything newer than May is not there.
+  can now just **move** the content to a newly created workspace.
+  A second synchronization has started on **October, 18** and is nearly done.
* Please manually copy your needed data from your `beegfs` or `ssd` workspaces. These
  old storages will be purged, probably by the end of November.
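Moving or copying that data is typically done with the datamover wrappers (the "data transfer tools" mentioned below); a minimal sketch, assuming the usual ZIH commands `dtinfo`, `dtmv` and `dtcp` are available, with all workspace paths as placeholders except the replicated `/data/horse/lustre/scratch2/ws` location named above:

```bash
# Show which filesystems the datamover mounts and where.
dtinfo

# Move the replicated content of an old /scratch workspace into a newly
# allocated workspace on /data/horse.
dtmv /data/horse/lustre/scratch2/ws/<old-workspace> <path-to-new-workspace>/

# Copy data out of an old beegfs or ssd workspace before those storages are purged.
dtcp -r <path-to-old-beegfs-or-ssd-workspace> <path-to-new-workspace>/
```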
@@ -126,3 +134,12 @@ on Taurus.
* We are running the most recent Slurm version.
* You must not use the old partition names.
* Not all things are tested.
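To illustrate the point about partition names, a minimal batch-script sketch; job name, resources and time limit are placeholders, the only point being that no old Taurus partition is requested via `--partition`:

```bash
#!/bin/bash
#SBATCH --job-name=barnard_test   # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
# Note: no --partition/-p line with one of the old Taurus partition names.

srun hostname
```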
## Updates after your feedback

* A **second synchronization** from `/scratch` has started on **October, 18** and is nearly done.
* The **data transfer tools** now work fine.
* After fixing too tight security restrictions, **all users can log in** now.
* **ANSYS/2023R1** now starts after problems: please check if your specific use case works.
* **login1** is under observation, do not use it at the moment.