diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
index 0c980b79f9135a775d0135ce943d3e2e8c494a8e..615282e59c23aa93844116a5a58939274bf5f12f 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
@@ -33,7 +33,7 @@ In this sense, you should minimize the usage of system calls querying or modifyi
 and directory attributes, e.g. `stat()`, `statx()`, `open()`, `openat()` etc.
 
 Please, also avoid commands basing on the above mentioned system calls such as `ls -l` and
-`ls --color`. Instead, you should invoke `ls` or `ls -l <filename` to reduce metadata operations.
+`ls --color`. Instead, you should invoke `ls` or `ls -l <filename>` to reduce metadata operations.
 This also holds for commands walking the filesystems recursively performing massive metadata
 operations such as `ls -R`, `find`, `locate`, `du` and `df`.
 
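To make the recommendation above concrete, here is a minimal shell sketch (the file name `results.dat` is a made-up placeholder):

```bash
# Discouraged on Lustre: both variants issue stat() calls for every entry in the directory
ls -l
ls --color

# Preferred: plain listing, or stat only the single file you care about
ls
ls -l results.dat
```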
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
index 71702293bbb83f3f51959418d616a80a743342f0..f887a766218fb2003d8e1e4226cf7b74706c8484 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
@@ -74,8 +74,8 @@ The available compute nodes are grouped into logical (possibly overlapping) sets
 
 Some partitions/nodes have Simultaneous Multithreading (SMT) enabled. You request for this
 additional threads using the Slurm option `--hint=multithread` or by setting the environment
-varibale `SLURM_HINT=multithread`. Besides the usage of the threads to speed up the computations,
-the memory of the other threads is allocated implicitly, too, and you will allways get
+variable `SLURM_HINT=multithread`. Besides using the threads to speed up the computations,
+the memory of the other threads is allocated implicitly as well, and you will always get
 `Memory per Core`*`number of threads` as memory pledge.
 
 Some partitions have a *interactive* counterpart for interactive jobs. The corresponding partitions
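A minimal job file sketch showing how the SMT hint described above could be requested (the resource values and the application name `./my_application` are made-up examples):

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8      # logical CPUs (hardware threads) for the task
#SBATCH --hint=multithread     # request the additional SMT threads

# Alternatively, set the environment variable instead of the option:
# export SLURM_HINT=multithread

srun ./my_application
```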
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
index 0322c2ce97f2801b4250d409718b36ab9b64f9fb..e35bf836d0dbd56a0a03a0c87eb12fa1064f38e0 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
@@ -9,10 +9,10 @@ depend on the type of parallelization and architecture.
 
 An SMP-parallel job can only run within a node, so it is necessary to include the options `--node=1`
 and `--ntasks=1`. The maximum number of processors for an SMP-parallel program is 896 and 56 on
-partition `taurussmp8` and  `smp2`, respectively.  Please refer to the
-[partitions section](partitions_and_limits.md#memory-limits) for up-to-date information. Using the
-option `--cpus-per-task=<N>` Slurm will start one task and you will have `N` CPUs available for your
-job. An example job file would look like:
+partitions `taurussmp8` and `smp2`, respectively, as described in the
+[section on memory limits](partitions_and_limits.md#memory-limits). Using the option
+`--cpus-per-task=<N>`, Slurm will start one task, and you will have `N` CPUs available for
+your job. An example job file would look like:
 
 !!! example "Job file for OpenMP application"
 
@@ -341,7 +341,7 @@ Please read the Slurm documentation at https://slurm.schedmd.com/sbatch.html for
 
 You can use chain jobs to **create dependencies between jobs**. This is often useful if a job
 relies on the result of one or more preceding jobs. Chain jobs can also be used to split a long
-runnning job exceeding the batch queues limits into parts and chain these parts. Slurm has an option
+running job exceeding the batch queue limits into parts and chain these parts. Slurm has an option
 `-d, --dependency=<dependency_list>` that allows to specify that a job is only allowed to start if
 another job finished.
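As a sketch of this dependency mechanism, a chain of two jobs (the script names `part1.sh` and `part2.sh` are placeholders) might be submitted like this:

```bash
# Submit the first part and capture its job ID
JOBID=$(sbatch --parsable part1.sh)

# The second part may only start after the first one finished successfully
sbatch --dependency=afterok:${JOBID} part2.sh
```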