Commit 648cdfc6 authored by Martin Schroschk

Doc: Delete page; move content

parent 568ec7b9
Merge requests: !938 Automated merge from preview to main, !936 Update to Five-Cluster-Operation
@@ -20,6 +20,16 @@ performance and permanence.
All others need to migrate their data to Barnard’s new file system `/horse`. Please follow these
detailed instructions on how to [migrate to Barnard](../jobs_and_resources/migration_to_barnard.md).
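For orientation, a minimal sketch of such a transfer is shown below; the paths and workspace names are placeholders, and the linked migration instructions describe the recommended procedure and tools for large transfers.

```bash
# Hypothetical sketch only: copy data from a workspace on the old filesystem
# to the new /data/horse filesystem. All paths are placeholders; follow the
# migration instructions linked above for the supported procedure and tooling.
rsync -av --progress /scratch/ws/0/myuser-mydata/ /data/horse/ws/myuser-mydata/
```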
TODO Where to add this information:
All clusters will have access to these shared parallel filesystems:
| Filesystem | Usable directory | Type | Capacity | Purpose |
| --- | --- | --- | --- | --- |
| Home | `/home` | Lustre | quota per user: 20 GB | permanent user data |
| Project | `/projects` | Lustre | quota per project | permanent project data |
| Scratch for large data / streaming | `/data/horse` | Lustre | 20 PB | |
<!--end-->
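As a sketch of how the quotas listed above could be inspected on a Lustre filesystem (the project group name below is a placeholder, and the exact commands may differ from what the ZIH documentation recommends):

```bash
# Show the personal quota on /home (human-readable units).
lfs quota -h -u "$USER" /home

# Show a project quota on /projects; "p_myproject" is a placeholder group name.
lfs quota -h -g p_myproject /projects
```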
| Filesystem | Usable directory | Capacity | Availability | Backup | Remarks |
|:------------|:------------------|:---------|:-------------|:-------|:--------|
| `Lustre` | `/scratch/` | 4 PB | global | No | Only accessible via [Workspaces](workspaces.md). Not made for billions of files! |
@@ -3,7 +3,7 @@
Over the last decade we have been running our HPC system of high heterogeneity with a single
Slurm batch system. This made things very complicated, especially for inexperienced users.
With the replacement of the Taurus system by the cluster
-[Barnard](hardware_overview_2023.md#barnard-intel-sapphire-rapids-cpus)
+[Barnard](hardware_overview.md#barnard)
we **now create homogeneous clusters with their own Slurm instances and with cluster-specific login
nodes** running on the same CPU. Job submission will be possible only from within the cluster
(compute or login node).
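A minimal sketch of this workflow, assuming a cluster-specific login node such as the hypothetical `login1.barnard.hpc.tu-dresden.de` (the hostname and script name are illustrative, not taken from this page):

```bash
# Connect to a login node of the target cluster (hostname is an assumed example).
ssh login1.barnard.hpc.tu-dresden.de

# Submit the job from within that cluster; it is handled by its own Slurm instance.
sbatch my_job.sh

# Query the same Slurm instance for the job's status.
squeue -u "$USER"
```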
@@ -18,10 +18,21 @@ users and the ZIH.
will have five homogeneous clusters with their own Slurm instances and with cluster-specific
login nodes running on the same CPU.
-These changes will **outdate the information provided in this page**. Please refer to the
-[page Architectural Re-Design 2023](architecture_2023.md) for a overview of the changes. The
-[page HPC Resources Overview 2023](hardware_overview_2023.md) provides detailed information on
-the new clusters and filesystems.
+With the installation and start of operation of the [new HPC system Barnard](#barnard),
+quite significant changes w.r.t. the HPC system landscape at ZIH follow. The former HPC system
+Taurus is partly switched off and partly split up into separate clusters. In the end, from the
+users' perspective, there will be **five separate clusters**:
| Name | Description | Year | DNS |
| --- | --- | --- | --- |
| **Barnard** | CPU cluster | 2023 | `n[1001-1630].barnard.hpc.tu-dresden.de` |
| **Romeo** | CPU cluster | 2020 | `i[8001-8190].romeo.hpc.tu-dresden.de` |
| **Alpha Centauri** | GPU cluster | 2021 | `i[8001-8037].alpha.hpc.tu-dresden.de` |
| **Julia** | single SMP system | 2021 | `smp8.julia.hpc.tu-dresden.de` |
| **Power** | IBM Power/GPU system | 2018 | `ml[1-29].power9.hpc.tu-dresden.de` |
All clusters will run with their own [Slurm batch system](slurm.md) and job submission is possible
only from their respective login nodes.
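For illustration only (script and job name are assumptions, not taken from this page), a minimal job script submitted with `sbatch` on, e.g., a Barnard login node runs on that cluster's compute nodes, whose hostnames follow the DNS scheme in the table above:

```bash
#!/bin/bash
#SBATCH --job-name=hello        # arbitrary job name
#SBATCH --nodes=1               # one node of the cluster the job was submitted on
#SBATCH --ntasks=1
#SBATCH --time=00:05:00         # short walltime for a test run

# Print the compute node's hostname, e.g. something like
# n1001.barnard.hpc.tu-dresden.de when submitted on Barnard.
srun hostname
```

Submitting the same script on another cluster's login node would dispatch it to that cluster's own Slurm instance instead.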
## Login and Export Nodes
# HPC Resources Overview 2023
TODO Move to other page
With the installation and start of operation of the [new HPC system Barnard](#barnard-intel-sapphire-rapids-cpus),
quite significant changes w.r.t. HPC system landscape at ZIH follow. The former HPC system Taurus is
partly switched-off and partly split up into separate clusters. In the end, from the users'
perspective, there will be **five separate clusters**:
| Name | Description | Year| DNS |
| --- | --- | --- | --- |
| **Barnard** | CPU cluster |2023| `n[1001-1630].barnard.hpc.tu-dresden.de` |
| **Romeo** | CPU cluster |2020| `i[8001-8190].romeo.hpc.tu-dresden.de` |
| **Alpha Centauri** | GPU cluster | 2021| `i[8001-8037].alpha.hpc.tu-dresden.de` |
| **Julia** | single SMP system |2021| `smp8.julia.hpc.tu-dresden.de` |
| **Power** | IBM Power/GPU system |2018| `ml[1-29].power9.hpc.tu-dresden.de` |
All clusters will run with their own [Slurm batch system](slurm.md) and job submission is possible
only from their respective login nodes.
All clusters will have access to these shared parallel filesystems:
| Filesystem | Usable directory | Type | Capacity | Purpose |
| --- | --- | --- | --- | --- |
| Home | `/home` | Lustre | quota per user: 20 GB | permanent user data |
| Project | `/projects` | Lustre | quota per project | permanent project data |
| Scratch for large data / streaming | `/data/horse` | Lustre | 20 PB | |