diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index afce7543169efcefa432d87f8c0a3ef51977c1fd..c22dca74d7cc84b7e100bd3ba2a40be0044ec119 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -20,6 +20,16 @@ performance and permanence.
     All others need to migrate your data to Barnard’s new file system `/horse`. Please follow these
     detailed instructions on how to [migrate to Barnard](../jobs_and_resources/migration_to_barnard.md).
 
+TODO: Where to add this information:
+All clusters will have access to these shared parallel filesystems:
+
+| Filesystem | Usable directory | Type | Capacity | Purpose |
+| --- | --- | --- | --- | --- |
+| Home | `/home` | Lustre | quota per user: 20 GB | permanent user data |
+| Project | `/projects` | Lustre | quota per project | permanent project data |
+| Scratch for large data / streaming | `/data/horse` | Lustre | 20 PB |  |
+<!--end-->
+
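+As a minimal sketch, you could check free space and your quota usage on these filesystems as shown
+below. The commands assume the standard Lustre client tools (`df`, `lfs quota`) are available on the
+login nodes; the paths are the ones listed in the table above.
+
+```console
+marie@login$ df -h /data/horse               # free space of the scratch filesystem
+marie@login$ lfs quota -h -u $USER /home     # your personal quota usage in the home filesystem
+```
+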
 | Filesystem  | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
 |:------------|:------------------|:---------|:-------------|:-------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via [Workspaces](workspaces.md). Not made for billions of files!                                                                                   |
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
index 23721a0a9808c56a3681c0114cc14e90f9c66ef2..c0e992d5b308384b3849e324dd1d6577cfd61e22 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
@@ -3,7 +3,7 @@
 Over the last decade we have been running our HPC system of high heterogeneity with a single
 Slurm batch system. This made things very complicated, especially to inexperienced users.
 With the replacement of the Taurus system by the cluster
-[Barnard](hardware_overview_2023.md#barnard-intel-sapphire-rapids-cpus)
+[Barnard](hardware_overview.md#barnard)
 we **now create homogeneous clusters with their own Slurm instances and with cluster specific login
 nodes** running on the same CPU.  Job submission will be possible only from within the cluster
 (compute or login node).
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
index 1fa2cdbfb0581a65c99b1afecbebe0d28cfbdca4..9b3f3e8f96e5fadc35a6eacf2ce832d46ce71cb0 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -18,10 +18,21 @@ users and the ZIH.
     will have five homogeneous clusters with their own Slurm instances and with cluster specific
     login nodes running on the same CPU.
 
-    These changes will **outdate the information provided in this page**. Please refer to the
-    [page Architectural Re-Design 2023](architecture_2023.md) for a overview of the changes. The
-    [page HPC Resources Overview 2023](hardware_overview_2023.md) provides detailed information on
-    the new clusters and filesystems.
+With the installation and start of operation of the [new HPC system Barnard](#barnard),
+significant changes to the HPC system landscape at ZIH follow. The former HPC system Taurus is
+partly switched off and partly split up into separate clusters. In the end, from the users'
+perspective, there will be **five separate clusters**:
+
+| Name | Description | Year | DNS |
+| --- | --- | --- | --- |
+| **Barnard** | CPU cluster | 2023 | `n[1001-1630].barnard.hpc.tu-dresden.de` |
+| **Romeo** | CPU cluster | 2020 | `i[8001-8190].romeo.hpc.tu-dresden.de` |
+| **Alpha Centauri** | GPU cluster | 2021 | `i[8001-8037].alpha.hpc.tu-dresden.de` |
+| **Julia** | single SMP system | 2021 | `smp8.julia.hpc.tu-dresden.de` |
+| **Power** | IBM Power/GPU system | 2018 | `ml[1-29].power9.hpc.tu-dresden.de` |
+
+All clusters will run their own [Slurm batch system](slurm.md), and job submission will be
+possible only from their respective login nodes.
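+
+As a minimal sketch, logging in to one of the clusters and submitting a job might look like the
+following. The login node name `login1.barnard.hpc.tu-dresden.de` and the job script `my_job.sh`
+are placeholders; see [Login and Export Nodes](#login-and-export-nodes) below for the actual nodes.
+
+```console
+marie@local$ ssh login1.barnard.hpc.tu-dresden.de   # log in to a cluster-specific login node
+marie@login$ sbatch my_job.sh                       # submit to this cluster's own Slurm instance
+```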
 
 ## Login and Export Nodes
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
deleted file mode 100644
index 075d753eb441e9ab1d293046f6bc1f83c511ee07..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# HPC Resources Overview 2023
-
-TODO Move to other page
-
-With the installation and start of operation of the [new HPC system Barnard](#barnard-intel-sapphire-rapids-cpus),
-quite significant changes w.r.t. HPC system landscape at ZIH follow. The former HPC system Taurus is
-partly switched-off and partly split up into separate clusters. In the end, from the users'
-perspective, there will be **five separate clusters**:
-
-| Name | Description | Year| DNS |
-| --- | --- | --- | --- |
-| **Barnard** | CPU cluster |2023| `n[1001-1630].barnard.hpc.tu-dresden.de` |
-| **Romeo** | CPU cluster |2020| `i[8001-8190].romeo.hpc.tu-dresden.de` |
-| **Alpha Centauri** | GPU cluster | 2021| `i[8001-8037].alpha.hpc.tu-dresden.de` |
-| **Julia** | single SMP system |2021| `smp8.julia.hpc.tu-dresden.de` |
-| **Power** | IBM Power/GPU system |2018| `ml[1-29].power9.hpc.tu-dresden.de` |
-
-All clusters will run with their own [Slurm batch system](slurm.md) and job submission is possible
-only from their respective login nodes.
-
-All clusters will have access to these shared parallel filesystems:
-
-| Filesystem | Usable directory | Type | Capacity | Purpose |
-| --- | --- | --- | --- | --- |
-| Home | `/home` | Lustre | quota per user: 20 GB | permanent user data |
-| Project | `/projects` | Lustre | quota per project | permanent project data |
-| Scratch for large data / streaming | `/data/horse` | Lustre | 20 PB |  |
-