From 648cdfc6b50b1f9460da52eb829c48edb68c0398 Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Thu, 23 Nov 2023 16:50:53 +0100
Subject: [PATCH] Doc: Delete page; move content

---
 .../docs/data_lifecycle/file_systems.md       | 10 +++++++
 .../jobs_and_resources/architecture_2023.md   |  2 +-
 .../jobs_and_resources/hardware_overview.md   | 19 ++++++++++---
 .../hardware_overview_2023.md                 | 28 -------------------
 4 files changed, 26 insertions(+), 33 deletions(-)
 delete mode 100644 doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md

diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index afce75431..c22dca74d 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -20,6 +20,16 @@ performance and permanence.
     All others need to migrate your data to Barnard’s new file system `/horse`. Please follow these
     detailed instruction on how to [migrate to Barnard](../jobs_and_resources/migration_to_barnard.md).
 
+TODO Where to add this information:
+All clusters will have access to these shared parallel filesystems:
+
+| Filesystem | Usable directory | Type | Capacity | Purpose |
+| --- | --- | --- | --- | --- |
+| Home | `/home` | Lustre | quota per user: 20 GB | permanent user data |
+| Project | `/projects` | Lustre | quota per project | permanent project data |
+| Scratch for large data / streaming | `/data/horse` | Lustre | 20 PB |  |
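+
+As a quick way to check your usage against these quotas, you can query Lustre directly. This is
+only a minimal sketch; the placeholder user name `marie` is an assumption and not part of this
+page:
+
+```console
+# show your personal usage and quota on the home filesystem (Lustre)
+marie@login$ lfs quota -h -u marie /home
+```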
+<!--end-->
+
 | Filesystem  | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
 |:------------|:------------------|:---------|:-------------|:-------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via [Workspaces](workspaces.md). Not made for billions of files!                                                                                   |
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
index 23721a0a9..c0e992d5b 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
@@ -3,7 +3,7 @@
 Over the last decade we have been running our HPC system of high heterogeneity with a single
 Slurm batch system. This made things very complicated, especially to inexperienced users.
 With the replacement of the Taurus system by the cluster
-[Barnard](hardware_overview_2023.md#barnard-intel-sapphire-rapids-cpus)
+[Barnard](hardware_overview.md#barnard)
 we **now create homogeneous clusters with their own Slurm instances and with cluster specific login
 nodes** running on the same CPU.  Job submission will be possible only from within the cluster
 (compute or login node).
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
index 1fa2cdbfb..9b3f3e8f9 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -18,10 +18,21 @@ users and the ZIH.
     will have five homogeneous clusters with their own Slurm instances and with cluster specific
     login nodes running on the same CPU.
 
-    These changes will **outdate the information provided in this page**. Please refer to the
-    [page Architectural Re-Design 2023](architecture_2023.md) for a overview of the changes. The
-    [page HPC Resources Overview 2023](hardware_overview_2023.md) provides detailed information on
-    the new clusters and filesystems.
+With the installation and start of operation of the [new HPC system Barnard](#barnard),
+the HPC system landscape at ZIH changes significantly. The former HPC system Taurus is
+partly switched off and partly split up into separate clusters. In the end, from the users'
+perspective, there will be **five separate clusters**:
+
+| Name | Description | Year | DNS |
+| --- | --- | --- | --- |
+| **Barnard** | CPU cluster | 2023 | `n[1001-1630].barnard.hpc.tu-dresden.de` |
+| **Romeo** | CPU cluster | 2020 | `i[8001-8190].romeo.hpc.tu-dresden.de` |
+| **Alpha Centauri** | GPU cluster | 2021 | `i[8001-8037].alpha.hpc.tu-dresden.de` |
+| **Julia** | single SMP system | 2021 | `smp8.julia.hpc.tu-dresden.de` |
+| **Power** | IBM Power/GPU system | 2018 | `ml[1-29].power9.hpc.tu-dresden.de` |
+
+All clusters will run their own [Slurm batch system](slurm.md), and job submission is possible
+only from their respective login nodes.
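+
+A minimal sketch of the resulting workflow, assuming a login node name such as
+`login1.barnard.hpc.tu-dresden.de` and a job script `job.sh` (both placeholders, not taken from
+this page):
+
+```console
+# connect to a login node of the target cluster first ...
+marie@local$ ssh login1.barnard.hpc.tu-dresden.de
+# ... then submit the job from within that cluster; submission from other clusters is not possible
+marie@login$ sbatch job.sh
+```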
 
 ## Login and Export Nodes
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
deleted file mode 100644
index 075d753eb..000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# HPC Resources Overview 2023
-
-TODO Move to other page
-
-With the installation and start of operation of the [new HPC system Barnard](#barnard-intel-sapphire-rapids-cpus),
-quite significant changes w.r.t. HPC system landscape at ZIH follow. The former HPC system Taurus is
-partly switched-off and partly split up into separate clusters. In the end, from the users'
-perspective, there will be **five separate clusters**:
-
-| Name | Description | Year| DNS |
-| --- | --- | --- | --- |
-| **Barnard** | CPU cluster |2023| `n[1001-1630].barnard.hpc.tu-dresden.de` |
-| **Romeo** | CPU cluster |2020| `i[8001-8190].romeo.hpc.tu-dresden.de` |
-| **Alpha Centauri** | GPU cluster | 2021| `i[8001-8037].alpha.hpc.tu-dresden.de` |
-| **Julia** | single SMP system |2021| `smp8.julia.hpc.tu-dresden.de` |
-| **Power** | IBM Power/GPU system |2018| `ml[1-29].power9.hpc.tu-dresden.de` |
-
-All clusters will run with their own [Slurm batch system](slurm.md) and job submission is possible
-only from their respective login nodes.
-
-All clusters will have access to these shared parallel filesystems:
-
-| Filesystem | Usable directory | Type | Capacity | Purpose |
-| --- | --- | --- | --- | --- |
-| Home | `/home` | Lustre | quota per user: 20 GB | permanent user data |
-| Project | `/projects` | Lustre | quota per project | permanent project data |
-| Scratch for large data / streaming | `/data/horse` | Lustre | 20 PB |  |
-
-- 
GitLab