From d180f03a4ddebe3a3a706f73fceb60342d6c8cac Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Thu, 2 Nov 2023 09:46:32 +0100
Subject: [PATCH] Spell: Make it InfiniBand (capital B)

---
 doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md              | 2 +-
 doc.zih.tu-dresden.de/docs/archive/scs5_software.md           | 2 +-
 doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md         | 2 +-
 doc.zih.tu-dresden.de/docs/archive/system_atlas.md            | 2 +-
 doc.zih.tu-dresden.de/docs/archive/system_deimos.md           | 2 +-
 doc.zih.tu-dresden.de/docs/archive/system_phobos.md           | 2 +-
 .../docs/jobs_and_resources/architecture_2023.md              | 2 +-
 .../docs/jobs_and_resources/arm_hpc_devkit.md                 | 2 +-
 .../docs/jobs_and_resources/migration_2023.md                 | 4 ++--
 doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md | 2 +-
 doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md   | 2 +-
 doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md      | 4 ++--
 .../docs/software/singularity_recipe_hints.md                 | 2 +-
 doc.zih.tu-dresden.de/wordlist.aspell                         | 2 +-
 14 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
index 93394d5b8..249f86aea 100644
--- a/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
+++ b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
@@ -20,7 +20,7 @@ search:
 
 When parts of the IB are down, we will start batch system plugins that parse for this batch
 system option: `--comment=NO_IB`. Jobs with this option set can run on nodes without
-Infiniband access if (and only if) they have set the `--tmp`-option as well:
+InfiniBand access if (and only if) they have set the `--tmp`-option as well:
 
 *From the Slurm documentation:*
 
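For illustration, a minimal job script sketch combining both options (the executable name and sizes are placeholders; `--tmp` is the regular Slurm option for requesting local disk space):

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
# Allow scheduling on nodes without InfiniBand access ...
#SBATCH --comment=NO_IB
# ... but only together with a request for local /tmp disk space
#SBATCH --tmp=5G

srun ./my_application
```
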
diff --git a/doc.zih.tu-dresden.de/docs/archive/scs5_software.md b/doc.zih.tu-dresden.de/docs/archive/scs5_software.md
index f00c697d4..79c7ba16d 100644
--- a/doc.zih.tu-dresden.de/docs/archive/scs5_software.md
+++ b/doc.zih.tu-dresden.de/docs/archive/scs5_software.md
@@ -17,7 +17,7 @@ Here are the major changes from the user's perspective:
 | Red Hat Enterprise Linux (RHEL) | 6.x    | 7.x      |
 | Linux kernel                    | 2.6.32 | 3.10     |
 | glibc                           | 2.12   | 2.17     |
-| Infiniband stack                | OpenIB | Mellanox |
+| InfiniBand stack                | OpenIB | Mellanox |
 | Lustre client                   | 2.5    | 2.10     |
 
 ## Host Keys
diff --git a/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md b/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md
index 3ca0a8e2b..5f461ae29 100644
--- a/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md
+++ b/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md
@@ -14,7 +14,7 @@ The following data can be gathered:
 
 * Task data, such as CPU frequency, CPU utilization, memory consumption (RSS and VMSize), I/O
 * Energy consumption of the nodes
-* Infiniband data (currently deactivated)
+* InfiniBand data (currently deactivated)
 * Lustre filesystem data (currently deactivated)
 
 The data is sampled at a fixed rate (i.e. every 5 seconds) and is stored in an HDF5 file.
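
A minimal usage sketch, assuming the profiling plugin is active on the system (`sh5util` ships with Slurm and merges the per-node profiles):

```bash
# Request task-level profiling for this job step
srun --profile=task ./my_application

# Afterwards, merge the per-node HDF5 files of the job into a single file
sh5util -j <jobid>
```
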
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_atlas.md b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
index e8a63ab23..94e34d7cd 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
@@ -32,7 +32,7 @@ node has 180 GB local disk space for scratch mounted on `/tmp`. The jobs for the
 scheduled by the [Platform LSF](platform_lsf.md) batch system from the login nodes
 `atlas.hrsk.tu-dresden.de`.
 
-A QDR Infiniband interconnect provides the communication and I/O infrastructure for low latency /
+A QDR InfiniBand interconnect provides the communication and I/O infrastructure for low latency /
 high throughput data traffic.
 
 Users with a login on the [SGI Altix](system_altix.md) can access their home directory via NFS
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_deimos.md b/doc.zih.tu-dresden.de/docs/archive/system_deimos.md
index b36a93481..50682072d 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_deimos.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_deimos.md
@@ -29,7 +29,7 @@ mounted on `/tmp`. The jobs for the compute nodes are scheduled by the
 [Platform LSF](platform_lsf.md)
 batch system from the login nodes `deimos.hrsk.tu-dresden.de`.
 
-Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the communication and
+Two separate InfiniBand networks (10 Gb/s) with low cascading switches provide the communication and
 I/O infrastructure for low latency / high throughput data traffic. An additional gigabit Ethernet
 network is used for control and service purposes.
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_phobos.md b/doc.zih.tu-dresden.de/docs/archive/system_phobos.md
index 3519c36b8..833c23d66 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_phobos.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_phobos.md
@@ -25,7 +25,7 @@ All nodes share a 4.4 TB SAN. Each node has additional local disk space mounted
 jobs for the compute nodes are scheduled by a [Platform LSF](platform_lsf.md) batch system running on
 the login node `phobos.hrsk.tu-dresden.de`.
 
-Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the infrastructure
+Two separate InfiniBand networks (10 Gb/s) with low cascading switches provide the infrastructure
 for low latency / high throughput data traffic. An additional gigabit Ethernet network is used for control
 and service purposes.
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
index b0d23e2e7..f7c25a6d1 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
@@ -2,7 +2,7 @@
 
 With the replacement of the Taurus system by the cluster `Barnard` in 2023,
 the rest of the installed hardware had to be re-connected, both with
-Infiniband and with Ethernet.
+InfiniBand and with Ethernet.
 
 ![Architecture overview 2023](../jobs_and_resources/misc/architecture_2023.png)
 {: align=center}
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md
index d2930351a..f0707536c 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md
@@ -12,7 +12,7 @@ This Arm HPC Developer kit offers:
 * 512G DDR4 memory (8x 64G)
 * 6 TB SAS/SATA 3.5″
 * 2x NVIDIA A100 GPU
-* 2x NVIDIA BlueField-2 E-Series DPU: 200GbE/HDR single-port, both connected to the Infiniband network
+* 2x NVIDIA BlueField-2 E-Series DPU: 200GbE/HDR single-port, both connected to the InfiniBand network
 
 ## Further Information
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
index 3a6749cff..c10983616 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
@@ -10,7 +10,7 @@ The new HPC system "Barnard" from Bull comes with these main properties:
 
 * 630 compute nodes based on Intel Sapphire Rapids
 * new Lustre-based storage systems
-* HDR Infiniband network large enough to integrate existing and near-future non-Bull hardware
+* HDR InfiniBand network large enough to integrate existing and near-future non-Bull hardware
 * To help our users find the best location for their data, we now use the names of
 animals (size, speed) as mnemonics.
 
@@ -24,7 +24,7 @@ To lower this hurdle we now create homogeneous clusters with their own Slurm inst
 cluster-specific login nodes running on the same CPU. Job submission is possible only
 from within the cluster (compute or login node).
 
-All clusters will be integrated to the new Infiniband fabric and have then the same access to
+All clusters will be integrated into the new InfiniBand fabric and will then have the same access to
 the shared filesystems. This recabling requires a brief downtime of a few days.
 
 [Details on architecture](/jobs_and_resources/architecture_2023).
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
index 78b8175cc..4a72b115e 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
@@ -4,7 +4,7 @@
 
 -   8x Intel NVMe Datacenter SSD P4610, 3.2 TB
 -   3.2 GB/s per SSD (8 x 3.2 GB/s = 25.6 GB/s)
--   2 Infiniband EDR links, Mellanox MT27800, ConnectX-5, PCIe x16, 100
+-   2 InfiniBand EDR links, Mellanox MT27800, ConnectX-5, PCIe x16, 100
     Gbit/s
 -   2 sockets Intel Xeon E5-2620 v4 (16 cores, 2.10 GHz)
 -   64 GB RAM
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
index f270f8f1d..59529f706 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
@@ -102,6 +102,6 @@ case on Rome. You might want to try `-mavx2 -fma` instead.
 
 ### Intel MPI
 
-We have seen only half the theoretical peak bandwidth via Infiniband between two nodes, whereas
+We have seen only half the theoretical peak bandwidth via InfiniBand between two nodes, whereas
 Open MPI got close to the peak bandwidth, so you might want to avoid using Intel MPI on partition
 `rome` if your application heavily relies on MPI communication until this issue is resolved.
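
To reproduce such a measurement, a two-node point-to-point bandwidth test is sufficient; a sketch using the OSU micro-benchmarks (`osu_bw` must be built against the MPI implementation under test, and the module name is illustrative):

```bash
# Two nodes, one MPI rank per node: measure point-to-point bandwidth
module load OpenMPI
srun --nodes=2 --ntasks-per-node=1 ./osu_bw
```
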
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
index 946cca8bc..544abeca7 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
@@ -22,7 +22,7 @@ project's quota can be increased or dedicated volumes of up to the full capacity
 - Granularity should be a socket (28 cores)
 - Can be used for OpenMP applications with large memory demands
 - To use Open MPI it is necessary to export the following environment
-  variables, so that Open MPI uses shared-memory instead of Infiniband
+  variables, so that Open MPI uses shared memory instead of InfiniBand
   for message transport:
 
   ```
@@ -31,4 +31,4 @@ project's quota can be increased or dedicated volumes of up to the full capacity
   ```
 
 - Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider
-  using Infiniband devices itself, but only shared-memory instead
+  using InfiniBand devices itself, but only shared memory instead
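
A sketch of the resulting environment setup. `I_MPI_FABRICS=shm` is quoted from the docs above; the Open MPI variables are not shown in this excerpt, so the MCA parameter below is an assumption (one common way to restrict Open MPI to its loopback and shared-memory transports):

```bash
# Intel MPI: consider shared memory only
export I_MPI_FABRICS=shm

# Open MPI: restrict byte-transfer layers to loopback + shared memory
# (assumed MCA parameter, not quoted from the original docs)
export OMPI_MCA_btl=self,vader
```
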
diff --git a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
index ab20fff41..1dc36a50a 100644
--- a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
+++ b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
@@ -128,7 +128,7 @@ marie@login$ srun -n 4 --ntasks-per-node 2 --time=00:10:00 singularity exec ubun
 * Chosen CUDA version depends on installed driver of host
 * Open MPI needs PMI for Slurm integration
 * Open MPI needs CUDA for GPU copy-support
-* Open MPI needs `ibverbs` library for Infiniband
+* Open MPI needs `ibverbs` library for InfiniBand
 * `openmpi-mca-params.conf` required to avoid warnings on fork (OK on ZIH systems)
 * Environment variables `SLURM_VERSION` and `OPENMPI_VERSION` can be set to choose different
   versions when building the container
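
For example, a hypothetical build invocation pinning both versions (version numbers and file names are placeholders):

```bash
# Placeholder versions; pick the ones matching the host system
export SLURM_VERSION=20.11.9
export OPENMPI_VERSION=4.1.5
# Hypothetical recipe and image names
singularity build ubuntu_mpi.sif ubuntu_mpi.def
```
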
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 78bf541c6..76efc26e5 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -175,7 +175,7 @@ iDataPlex
 ifort
 ImageNet
 img
-Infiniband
+InfiniBand
 InfluxDB
 init
 inode
-- 
GitLab