From 1a33d7f0284749a266fdfc5056f95a18a82be8b8 Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Thu, 2 Nov 2023 09:33:39 +0100
Subject: [PATCH] Fix typos

---
 .../docs/jobs_and_resources/migration_2023.md         | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
index 3a6749cff..27ded5e3c 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
@@ -20,13 +20,12 @@ More details can be found in the [overview](/jobs_and_resources/hardware_overvie
 
 Over the last decade we have been running our HPC system of high heterogeneity with a single
 Slurm batch system. This made things very complicated, especially to inexperienced users.
-To lower this hurdle we now create homogenous clusters with their own Slurm instances and with
+To lower this hurdle, we now create homogeneous clusters with their own Slurm instances and with
 cluster specific login nodes running on the same CPU. Job submission is possible only
 from within the cluster (compute or login node).
 
 All clusters will be integrated to the new Infiniband fabric and have then the same access to
-the shared filesystems. This recabling requires a brief downtime of a few days.
-
+the shared filesystems. This re-cabling requires a brief downtime of a few days.
 [Details on architecture](/jobs_and_resources/architecture_2023).
 
 ### New Software
@@ -36,7 +35,7 @@ all operating system will be updated to the same versions of OS, Mellanox and Lu
 With this all application software was re-built consequently using GIT and CI for handling
 the multitude of versions.
 
-We start with `release/23.10` which is based on software reqeusts from user feedbacks of our
+We start with `release/23.10`, which is based on software requests from the feedback of our
 HPC users. Most major software versions exist on all hardware platforms.
 
 ## Migration Path
@@ -54,10 +53,10 @@ of the action items.
 | done (July 2023) | |install new software stack|tedious work |
 | ASAP | |adapt scripts|new Slurm version, new resources, no partitions|
 | August 2023 | |test new software stack on Barnard|new versions sometimes require different prerequisites|
-| August 2023| |test new software stack on other clusters|a few nodes will be made available with the new sw stack, but with the old filesystems|
+| August 2023| |test new software stack on other clusters|a few nodes will be made available with the new software stack, but with the old filesystems|
 | ASAP | |prepare data migration|The small filesystems `/beegfs` and `/lustre/ssd`, and `/home` are mounted on the old systems "until the end". They will *not* be migrated to the new system.|
 | July 2023 | sync `/warm_archive` to new hardware| |using datamover nodes with Slurm jobs |
-| September 2023 |prepare recabling of older hardware (Bull)| |integrate other clusters in the IB infrastructure |
+| September 2023 |prepare re-cabling of older hardware (Bull)| |integrate other clusters into the IB infrastructure |
 | Autumn 2023 |finalize integration of other clusters (Bull)| |**~2 days downtime**, final rsync and migration of `/projects`, `/warm_archive`|
 | Autumn 2023 ||transfer last data from old filesystems | `/beegfs`, `/lustre/scratch`, `/lustre/ssd` are no longer available on the new systems|
 
-- 
GitLab