diff --git a/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
index 93394d5b8f14c12d8acfda5604066d14e39790ad..249f86aea7d879120941d1a48a2b8d418c5a0617 100644
--- a/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
+++ b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
@@ -20,7 +20,7 @@ search:
 
 At the moment when parts of the IB stop we will start batch system plugins to parse for this batch
 system option: `--comment=NO_IB`. Jobs with this option set can run on nodes without
-Infiniband access if (and only if) they have set the `--tmp`-option as well:
+InfiniBand access if (and only if) they have set the `--tmp` option as well:
 
 *From the Slurm documentation:*
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/scs5_software.md b/doc.zih.tu-dresden.de/docs/archive/scs5_software.md
index f00c697d46e19c918116e332efd59a5508469962..79c7ba16d393f0f31963e9d6fe4e69dfcefcffd3 100644
--- a/doc.zih.tu-dresden.de/docs/archive/scs5_software.md
+++ b/doc.zih.tu-dresden.de/docs/archive/scs5_software.md
@@ -17,7 +17,7 @@ Here are the major changes from the user's perspective:
 | Red Hat Enterprise Linux (RHEL) | 6.x    | 7.x      |
 | Linux kernel                    | 2.26   | 3.10     |
 | glibc                           | 2.12   | 2.17     |
-| Infiniband stack                | OpenIB | Mellanox |
+| InfiniBand stack                | OpenIB | Mellanox |
 | Lustre client                   | 2.5    | 2.10     |
 
 ## Host Keys
diff --git a/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md b/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md
index 3ca0a8e2b6e4618923a379ed8bcec854256b7fbf..5f461ae29dde65c111506cfa5bcce8d1a35ad8b3 100644
--- a/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md
+++ b/doc.zih.tu-dresden.de/docs/archive/slurm_profiling.md
@@ -14,7 +14,7 @@ The following data can be gathered:
 
 * Task data, such as CPU frequency, CPU utilization, memory consumption (RSS and VMSize), I/O
 * Energy consumption of the nodes
-* Infiniband data (currently deactivated)
+* InfiniBand data (currently deactivated)
 * Lustre filesystem data (currently deactivated)
 
 The data is sampled at a fixed rate (i.e. every 5 seconds) and is stored in a HDF5 file.
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_atlas.md b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
index e8a63ab236196b0853b9a1e6f90c809cfc567be5..94e34d7cdd918483b9392bea94dbf2809c8369d3 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
@@ -32,7 +32,7 @@ node has 180 GB local disk space for scratch mounted on `/tmp`. The jobs for the
 scheduled by the [Platform LSF](platform_lsf.md) batch system from the login nodes
 `atlas.hrsk.tu-dresden.de` .
 
-A QDR Infiniband interconnect provides the communication and I/O infrastructure for low latency /
+A QDR InfiniBand interconnect provides the communication and I/O infrastructure for low latency /
 high throughput data traffic.
 
 Users with a login on the [SGI Altix](system_altix.md) can access their home directory via NFS
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_deimos.md b/doc.zih.tu-dresden.de/docs/archive/system_deimos.md
index b36a9348138dc808273c83501afe92c99872a155..50682072db8e3c3d3bfba88ec9dfce6897b55c7f 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_deimos.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_deimos.md
@@ -29,7 +29,7 @@ mounted on `/tmp`. The jobs for the compute nodes are scheduled by the
 [Platform LSF](platform_lsf.md)
 batch system from the login nodes `deimos.hrsk.tu-dresden.de` .
 
-Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the communication and
+Two separate InfiniBand networks (10 Gb/s) with low cascading switches provide the communication and
 I/O infrastructure for low latency / high throughput data traffic. An additional gigabit Ethernet
 network is used for control and service purposes.
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/system_phobos.md b/doc.zih.tu-dresden.de/docs/archive/system_phobos.md
index 3519c36b876b15ea8b57146f112207ad0b5dd9f7..833c23d66d7c365a9c90d27fc067ff20175b9b34 100644
--- a/doc.zih.tu-dresden.de/docs/archive/system_phobos.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_phobos.md
@@ -25,7 +25,7 @@ All nodes share a 4.4 TB SAN. Each node has additional local disk space mounted
 jobs for the compute nodes are scheduled by a [Platform LSF](platform_lsf.md) batch system running on
 the login node `phobos.hrsk.tu-dresden.de`.
 
-Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the infrastructure
+Two separate InfiniBand networks (10 Gb/s) with low cascading switches provide the infrastructure
 for low latency / high throughput data traffic. An additional GB/Ethernetwork is used for control
 and service purposes.
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
index b0d23e2e789719ed0ff95a84f8f1056753cbb60c..f7c25a6d1b8a5fce09d6cc0b657f42d20b92b4cd 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/architecture_2023.md
@@ -2,7 +2,7 @@
 
 With the replacement of the Taurus system by the cluster `Barnard` in 2023,
 the rest of the installed hardware had to be re-connected, both with
-Infiniband and with Ethernet.
+InfiniBand and with Ethernet.
 
 ![Architecture overview 2023](../jobs_and_resources/misc/architecture_2023.png)
 {: align=center}
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md
index d2930351aa58fd4a111d386f817aef923613c08a..f0707536cd629aabd68f8c95618d5cbf344383a2 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/arm_hpc_devkit.md
@@ -12,7 +12,7 @@ This Arm HPC Developer kit offers:
 * 512G DDR4 memory (8x 64G)
 * 6TB SAS/ SATA 3.5″
 * 2x NVIDIA A100 GPU
-* 2x NVIDIA BlueField-2 E-Series DPU: 200GbE/HDR single-port, both connected to the Infiniband network
+* 2x NVIDIA BlueField-2 E-Series DPU: 200GbE/HDR single-port, both connected to the InfiniBand network
 
 ## Further Information
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
index 3a6749cff0814d2dbf54d53288fbcaa7fcb85818..c1098361618300a996abccb614ba8ccabae41658 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/migration_2023.md
@@ -10,7 +10,7 @@ The new HPC system "Barnard" from Bull comes with these main properties:
 
 * 630 compute nodes based on Intel Sapphire Rapids
 * new Lustre-based storage systems
-* HDR Infiniband network large enough to integrate existing and near-future non-Bull hardware
+* HDR InfiniBand network large enough to integrate existing and near-future non-Bull hardware
 * To help our users to find the best location for their data we now use the name of
 animals (size, speed) as mnemonics.
 
@@ -24,7 +24,7 @@ To lower this hurdle we now create homogenous clusters with their own Slurm inst
 cluster specific login nodes running on the same CPU. Job submission is possible only
 from within the cluster (compute or login node).
 
-All clusters will be integrated to the new Infiniband fabric and have then the same access to
+All clusters will be integrated into the new InfiniBand fabric and will then have the same access to
 the shared filesystems. This recabling requires a brief downtime of a few days.
 
 [Details on architecture](/jobs_and_resources/architecture_2023).
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
index 78b8175ccbba3fb0eee8be7b946ebe2bee31219b..4a72b115e9b6433c889f28802ccf685209396d98 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
@@ -4,7 +4,7 @@
 
 -   8x Intel NVMe Datacenter SSD P4610, 3.2 TB
 -   3.2 GB/s (8x 3.2 =25.6 GB/s)
--   2 Infiniband EDR links, Mellanox MT27800, ConnectX-5, PCIe x16, 100
+-   2 InfiniBand EDR links, Mellanox MT27800, ConnectX-5, PCIe x16, 100
     Gbit/s
 -   2 sockets Intel Xeon E5-2620 v4 (16 cores, 2.10GHz)
 -   64 GB RAM
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
index f270f8f1da6100ab3989c0358a473c09a9cf3194..59529f7069227c5eae117d5dc6868f70ceb570c8 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
@@ -102,6 +102,6 @@ case on Rome. You might want to try `-mavx2 -fma` instead.
 
 ### Intel MPI
 
-We have seen only half the theoretical peak bandwidth via Infiniband between two nodes, whereas
+We have seen only half the theoretical peak bandwidth via InfiniBand between two nodes, whereas
 Open MPI got close to the peak bandwidth, so you might want to avoid using Intel MPI on partition
 `rome` if your application heavily relies on MPI communication until this issue is resolved.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
index 946cca8bc4b988cd311b635e7fe78d569b6f15d0..544abeca7bd4df3b469582f69ce0c0f8874552fa 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
@@ -22,7 +22,7 @@ project's quota can be increased or dedicated volumes of up to the full capacity
 - Granularity should be a socket (28 cores)
 - Can be used for OpenMP applications with large memory demands
 - To use Open MPI it is necessary to export the following environment
-  variables, so that Open MPI uses shared-memory instead of Infiniband
+  variables, so that Open MPI uses shared memory instead of InfiniBand
   for message transport:
 
   ```
@@ -31,4 +31,4 @@ project's quota can be increased or dedicated volumes of up to the full capacity
   ```
 
 - Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider
-  using Infiniband devices itself, but only shared-memory instead
+  using InfiniBand devices itself, but only shared memory instead
diff --git a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
index ab20fff41902d2bd4ba1a94c0333689daa8ed303..1dc36a50a8bd17556e08aea9458e6db31cf47d59 100644
--- a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
+++ b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
@@ -128,7 +128,7 @@ marie@login$ srun -n 4 --ntasks-per-node 2 --time=00:10:00 singularity exec ubun
 * Chosen CUDA version depends on installed driver of host
 * Open MPI needs PMI for Slurm integration
 * Open MPI needs CUDA for GPU copy-support
-* Open MPI needs `ibverbs` library for Infiniband
+* Open MPI needs the `ibverbs` library for InfiniBand
 * `openmpi-mca-params.conf` required to avoid warnings on fork (OK on ZIH systems)
 * Environment variables `SLURM_VERSION` and `OPENMPI_VERSION` can be set to  choose different
   version when building the container
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 78bf541c6dfea01baedbb3735f94dfe05b7d4341..76efc26e567d2b95af52379c7d267cf845bf7957 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -175,7 +175,7 @@ iDataPlex
 ifort
 ImageNet
 img
-Infiniband
+InfiniBand
 InfluxDB
 init
 inode