Commit 9fc90fc2 authored by Martin Schroschk

WIP: Migration has finished. Remove corresponding documentation

parent 2eef4f6e
2 merge requests: !1008 Automated merge from preview to main, !1007 Migration end
@@ -3,11 +3,6 @@
All HPC users are cordially invited to migrate to our new HPC system **Barnard** and prepare your
software and workflows for production there.
!!! note "Migration Phase"
Please make sure to have read the details on the overall
[Architectural Re-Design 2023](hardware_overview.md) before further reading.
The migration from Taurus to Barnard comprises the following steps:
* [Prepare login to Barnard](#login-to-barnard)
......
@@ -9,14 +9,6 @@ analytics, and artificial intelligence methods with extensive capabilities for e
and performance monitoring provides ideal conditions to achieve the ambitious research goals of the
users and the ZIH.
!!! danger "HPC Systems Migration Phase"
**On December 11 2023 Taurus will be decommissioned for good**.
With our new HPC system Barnard comes a significant change in HPC system landscape at ZIH: We
will have five homogeneous clusters with their own Slurm instances and with cluster specific
login nodes running on the same CPU.
With the installation and start of operation of the [new HPC system Barnard](#barnard),
significant changes to the HPC system landscape at ZIH follow. The former HPC system Taurus is
partly switched off and partly split up into separate clusters. In the end, from the users'
@@ -95,26 +87,8 @@ with a high frequency of changing files is a bad idea.
Please use our data mover mechanisms to transfer valuable data to permanent
storage or long-term archives.
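
For illustration only, a minimal sketch of such a transfer using the datamover wrapper commands
(assuming the `dtcp` wrapper available on the ZIH systems; all paths and the project name
`p_number_crunch` are placeholders, not actual directories):

```console
# Copy a results directory from the old scratch filesystem to permanent project storage.
# Adapt source and target paths to your own data and project.
marie@login$ dtcp -r /lustre/scratch/ws/0/marie-results /projects/p_number_crunch/results
```
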
### Migration Phase
For about one month, the new cluster Barnard and the old cluster Taurus
will run side by side, both with their respective filesystems. We provide a comprehensive
[description of the migration to Barnard](barnard.md).

<!--
The following figure provides a graphical overview of the overall process (red: user action
required):

![Migration timeline 2023](../jobs_and_resources/misc/migration_2023.png)
{: align=center}
-->

## Login and Dataport Nodes
!!! danger "**On December 11 2023 Taurus will be decommissioned for good**."
Do not use Taurus for production anymore.
- Login-Nodes
    - Individual for each cluster. See sections below.
- 2 Data-Transfer-Nodes
@@ -188,29 +162,6 @@ architecture.
- Hostname: `julia.hpc.tu-dresden.de`
- Further information on the usage is documented on the site [SMP System Julia](julia.md)
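
As a sketch, a login to the stand-alone Julia system could look as follows (`marie` is the
placeholder username used throughout this documentation):

```console
marie@local$ ssh marie@julia.hpc.tu-dresden.de
```
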
??? note "Maintenance from November 27 to December 12"
The recabling will take place from November 27 to December 12. These works are planned:
* update the software stack (OS, firmware, software),
* change the ethernet access (new VLANs),
* complete integration of Romeo and Julia into the Barnard Infiniband network to get full
bandwidth access to all Barnard filesystems,
* configure and deploy stand-alone Slurm batch systems.
After the maintenance, the Julia system reappears as a stand-alone cluster that can be reached
via `julia.hpc.tu-dresden.de`.
**Changes w.r.t. filesystems:**
Your new `/home` directory (from Barnard) will become your `/home` on Romeo, *Julia*, Alpha
Centauri and the Power9 system. Thus, please [migrate your `/home` from Taurus to your **new**
`/home` on Barnard](barnard.md#data-management-and-data-transfer).
The old work filesystems `/lustre/scratch` and `/lustre/ssd will` be turned off on January 1
2024 for good (no data access afterwards!). The new work filesystem available on the Julia
system will be `/horse`. Please
[migrate your working data to `/horse`](barnard.md#data-migration-to-new-filesystems).
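Purely as an illustration of such a migration (the workspace paths are placeholders, not actual
directories), a resumable copy with the `dtrsync` datamover wrapper could look like:

```console
# Attribute-preserving, resumable copy of working data to the new /horse filesystem.
marie@login$ dtrsync -a /lustre/scratch/ws/0/marie-simdata/ /horse/ws/marie-simdata/
```
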
## Power9
The cluster `Power9` by IBM is based on Power9 CPUs and provides NVIDIA V100 GPUs.
@@ -224,19 +175,3 @@ The cluster `Power9` by IBM is based on Power9 CPUs and provides NVIDIA V100 GPU
- Login nodes: `login[1-2].power9.hpc.tu-dresden.de`
- Hostnames: `ml[1-29].power9.hpc.tu-dresden.de` (after recabling phase; expected January '24)
- Further information on the usage is documented on the site [GPU Cluster Power9](power9.md)
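
Analogous to the other clusters, a login to one of the Power9 login nodes could look like this
sketch:

```console
marie@local$ ssh marie@login1.power9.hpc.tu-dresden.de
```
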
??? note "Maintenance"
The recabling will take place from November 27 to December 12. After the maintenance, the Power9
system reappears as a stand-alone cluster that can be reached via
`ml[1-29].power9.hpc.tu-dresden.de`.
**Changes w.r.t. filesystems:**
Your new `/home` directory (from Barnard) will become your `/home` on Romeo, Julia, Alpha
Centauri and the *Power9* system. Thus, please [migrate your `/home` from Taurus to your **new**
`/home` on Barnard](barnard.md#data-management-and-data-transfer).
The old work filesystems `/lustre/scratch` and `/lustre/ssd will` be turned off on January 1
2024 for good (no data access afterwards!). The only work filesystem available on the Power9
system will be `/beegfs`. Please
[migrate your working data to `/horse`](barnard.md#data-migration-to-new-filesystems).
@@ -13,16 +13,6 @@ of 2023, it was available as partition `romeo` within `Taurus`. With the decommi
* configure and deploy a stand-alone Slurm batch system,
* newly build software within a separate software and module system.
!!! note "Changes w.r.t. filesystems"
Your new `/home` directory (from `Barnard`) is now your `/home` on *Romeo*, too.
Thus, please
[migrate your `/home` from Taurus to your **new** `/home` on Barnard](barnard.md#data-management-and-data-transfer).
The old work filesystems `/lustre/scratch` and `/lustre/ssd will` be turned off on January 1
2024 for good (no data access afterwards!). The new work filesystem available on `Romeo` is
`horse`. Please [migrate your working data to `/horse`](barnard.md#data-migration-to-new-filesystems).
## Hardware Resources
The hardware specification is documented on the page
......