diff --git a/doc.zih.tu-dresden.de/README.md b/doc.zih.tu-dresden.de/README.md
index e4e8683732a51ac78aa9b9e1b1be63854db4dd6b..ec57a866eda7f60cf51a394a87e88afda39aa779 100644
--- a/doc.zih.tu-dresden.de/README.md
+++ b/doc.zih.tu-dresden.de/README.md
@@ -40,7 +40,7 @@ Now, create a local clone of your fork
 
 #### Install Dependencies
 
-**TODO:** Describtion
+**TODO:** Description
 
 ```Shell Session
 ~ cd hpc-compendium/doc.zih.tu-dresden.de
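+# Assumption: the documentation is built with MkDocs and its Python dependencies
+# are pinned in a requirements file; if so, they install via pip:
+~ pip install -r requirements.txt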
@@ -61,7 +61,7 @@ editor are invoked: Do your changes, add a meaningful commit message and commit
 The more sophisticated integrated Web IDE is reached from the top level menu of the repository or
 by selecting any source file.
 
-Other git services might have an aquivivalent web interface to interact with the repository. Please
+Other git services might have an equivalent web interface to interact with the repository. Please
 refer to the corresponding documentation for further information.
 
 <!--This option of contributing is only available for users of-->
@@ -236,7 +236,7 @@ new branch (a so-called feature branch) basing on the `main` branch and commit y
 ```
 
 The last command pushes the changes to your remote at branch `FEATUREBRANCH`. Now, it is time to
-incoporate the changes and improvements into the HPC Compendium. For this, create a
+incorporate the changes and improvements into the HPC Compendium. For this, create a
 [merge request](https://gitlab.hrz.tu-chemnitz.de/zih/hpc-compendium/hpc-compendium/-/merge_requests/new)
 to the `main` branch.
 
diff --git a/doc.zih.tu-dresden.de/docs/application/access.md b/doc.zih.tu-dresden.de/docs/application/access.md
index b396ad42a0946c22647ff7240ee156f20f2376ec..54aa7c531aaf8eb774a573a4323e4c526af0e331 100644
--- a/doc.zih.tu-dresden.de/docs/application/access.md
+++ b/doc.zih.tu-dresden.de/docs/application/access.md
@@ -13,7 +13,7 @@ project manager is called to inform the ZIH about any changes according the staf
 also trial accounts have to fill in the application form.)\<br />**
 
 It is always possible to apply for more/different resources. Whether additional resources are
-granted or not depends on the current allocations and on the availablility of the installed systems.
+granted or not depends on the current allocations and on the availability of the installed systems.
 
 The terms of use of the HPC systems are only [available in German](terms_of_use.md) - at the
 moment.
@@ -39,13 +39,13 @@ For obtaining access to the machines, the following forms have to be filled in:
 
 ### Subsequent applications / view for project leader
 
-Subsequent applications will be neccessary,
+Subsequent applications will be necessary,
 
 - if the end of project is reached
 - if the requested resources won't be sufficient
 
 The project leader and one person instructed by him, the project administrator, should use
-[this website](https://hpcprojekte.zih.tu-dresden.de/managers/) (ZIH-login neccessary). At this
+[this website](https://hpcprojekte.zih.tu-dresden.de/managers/) (ZIH-login necessary). At this
 website you get an overview of your projects and their resource usage; you can submit subsequent
 applications and add staff members to your project.
 
@@ -77,8 +77,8 @@ LaTeX-template(
 
 If you plan to publish a paper with results based on the used CPU hours of our machines, please
 insert in the acknowledgement a small note of thanks for the support by the machines of the
-ZIH/TUD. (see example below) Please send us a link/reference to the paper if it was puplished.  It
-will be very helpfull for the next acquirement of compute power.  Thank you very much.
+ZIH/TUD (see example below). Please send us a link/reference to the paper once it is published. It
+will be very helpful for the next acquisition of compute power. Thank you very much.
 
 Two examples:
 
diff --git a/doc.zih.tu-dresden.de/docs/software/deep_learning.md b/doc.zih.tu-dresden.de/docs/software/deep_learning.md
index 6439c1dc234d4cc0476c4966edf53a33d17480be..2f27e95e0f8abda52dc880673a6f902ae817f372 100644
--- a/doc.zih.tu-dresden.de/docs/software/deep_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/deep_learning.md
@@ -1,7 +1,7 @@
 # Deep learning
 
 **Prerequisites**: To work with Deep Learning tools you obviously need [Login](../access/login.md)
-for the Taurus system and basic knowledge about Python, SLURM manager.
+for the Taurus system and basic knowledge of Python and the Slurm workload manager.
 
 The **aim** of this page is to introduce users to working with Deep learning software on
 both the ml environment and the scs5 environment of the Taurus system.
@@ -26,12 +26,12 @@ There are numerous different possibilities on how to work with [TensorFlow](tens
 Taurus. On this page, the default scs5 partition is used for all examples. Generally, the easiest way
 is using the [modules system](modules.md)
 and a Python virtual environment (test case). However, in some cases, you may need a directly installed
-Tensorflow stable or night releases. For this purpose use the
+TensorFlow stable or nightly release. For this purpose, use the
 [EasyBuild](custom_easy_build_environment.md), [Containers](tensorflow_container_on_hpcda.md) and see
 [the example](https://www.tensorflow.org/install/pip). For examples of using TensorFlow on the ml partition
 with the module system, see the [TensorFlow page for HPC-DA](tensorflow.md).
 
-Note: If you are going used manually installed Tensorflow release we recommend use only stable
+Note: If you are going to use a manually installed TensorFlow release, we recommend using only stable
 versions.
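+
+A minimal sketch of this approach (the module name and the paths are assumptions; check
+`module avail` for exact names and versions):
+
+```Bash
+module load TensorFlow                                       # load a TensorFlow module (name may differ)
+python3 -m venv --system-site-packages ~/venvs/tf            # venv reusing the module's site-packages
+source ~/venvs/tf/bin/activate
+python -c "import tensorflow as tf; print(tf.__version__)"   # quick sanity check
+```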
 
 ## Keras
@@ -44,7 +44,7 @@ name "Keras".
 On this page, the default scs5 partition is used for all examples. There are numerous different
 possibilities on how to work with [TensorFlow](tensorflow.md) and Keras
 on Taurus. Generally, the easiest way is using the [module system](modules.md) and Python
-virtual environment (test case) to see Tensorflow part above.
+virtual environment (test case); see the TensorFlow part above.
 For examples of using Keras on the ml partition with the module system, see the
 [Keras page for HPC-DA](keras.md).
 
@@ -71,7 +71,7 @@ Job-file (schedule job with sbatch, check the status with 'squeue -u \<Username>
 #!/bin/bash
 #SBATCH --gres=gpu:1                         # 1 - using one gpu, 2 - for using 2 gpus
 #SBATCH --mem=8000
-#SBATCH -p gpu2                              # select the type of nodes (opitions: haswell, smp, sandy, west,gpu, ml) K80 GPUs on Haswell node
+#SBATCH -p gpu2                              # select the type of nodes (options: haswell, smp, sandy, west, gpu, ml) K80 GPUs on Haswell node
 #SBATCH --time=00:30:00
 #SBATCH -o HLR_<name_of_your_script>.out     # save output under HLR_${SLURMJOBID}.out
 #SBATCH -e HLR_<name_of_your_script>.err     # save error messages under HLR_${SLURMJOBID}.err
@@ -128,7 +128,7 @@ The [ImageNet](http://www.image-net.org/) project is a large visual database des
 visual object recognition software research. In order to save space in the file system and avoid
 multiple duplicates of this lying around, we have put a copy of the ImageNet database
 (ILSVRC2012 and ILSVR2017) under `/scratch/imagenet` which you can use without having to download it
-again. For the future, the Imagenet dataset will be available in `/warm_archive`. ILSVR2017 also
+again. In the future, the ImageNet dataset will be available in `/warm_archive`. ILSVR2017 also
 includes a dataset for recognizing objects in videos. Please respect the corresponding
 [Terms of Use](https://image-net.org/download.php).
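+
+For example, you can point a training script directly at the shared copy instead of downloading
+the data (`train.py` and the exact directory layout are hypothetical; inspect the path first):
+
+```Bash
+ls /scratch/imagenet                           # inspect the available datasets, e.g. ILSVRC2012
+python train.py --data-dir /scratch/imagenet   # pass the shared path instead of a local download
+```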
 
@@ -138,21 +138,19 @@ Jupyter notebooks are a great way for interactive computing in your web browser.
 working with data cleaning and transformation, numerical simulation, statistical modelling, data
 visualization and of course with machine learning.
 
-There are two general options on how to work Jupyter notebooks using HPC: remote jupyter server and
-jupyterhub.
+There are two general options for working with Jupyter notebooks on HPC: a remote Jupyter server and
+JupyterHub.
 
-These sections show how to run and set up a remote jupyter server within a sbatch GPU job and which
+These sections show how to run and set up a remote Jupyter server within an sbatch GPU job and which
 modules and packages you need for that.
 
 **Note:** On Taurus, there is a [JupyterHub](../access/jupyterhub.md), where you do not need the
 manual server setup described below and can simply run your Jupyter notebook on HPC nodes. Keep in
-mind that with Jupyterhub you can't work with some special instruments. However general data
+mind that with JupyterHub you can't work with some special instruments. However, general data
 analytics tools are available.
 
 The remote Jupyter server offers more freedom with settings and approaches.
 
-Note: Jupyterhub is could be under construction
-
 ### Preparation phase (optional)
 
 On Taurus, start an interactive session for setting up the
@@ -184,7 +182,7 @@ executable script and run the installation script:
 wget https://repo.continuum.io/archive/Anaconda3-2019.03-Linux-x86_64.sh
 chmod 744 Anaconda3-2019.03-Linux-x86_64.sh
 ./Anaconda3-2019.03-Linux-x86_64.sh
 
-(during installation you have to confirm the licence agreement)
+# during installation you have to confirm the license agreement
 ```
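+
+Afterwards, you can make `conda` available in the current shell and verify the installation
+(the `~/anaconda3` prefix is the installer's default and an assumption here):
+
+```Bash
+source ~/anaconda3/etc/profile.d/conda.sh   # make the conda command available
+conda --version
+```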
 
 The next step will install the Anaconda environment into the home
@@ -197,14 +195,14 @@ conda create --name jnb
 ### Set environment variables on Taurus
 
 In the shell, activate the previously created Python environment (you can
-deactivate it also manually) and Install jupyter packages for this python environment:
+also deactivate it manually) and install the Jupyter packages for this Python environment:
 
 ```Bash
 source activate jnb
 conda install jupyter
 ```
 
-If you need to adjust the config, you should create the template.  Generate config files for jupyter
-notebook server:
+If you need to adjust the configuration, you should create the template. Generate config files for
+the Jupyter notebook server:
 
 ```Bash
 jupyter notebook --generate-config
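+# by default the template is written to ~/.jupyter/jupyter_notebook_config.py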
@@ -220,7 +218,7 @@ in browser session:
 jupyter notebook password
 Enter password:
 Verify password:
 ```
 
-you will get a message like that:
+You will get a message like this:
 
 ```Bash
 [NotebookPasswordApp] Wrote *hashed password* to
@@ -234,9 +232,9 @@ certificate:
 openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mykey.key -out mycert.pem
 ```
 
-fill in the form with decent values.
+Fill in the form with decent values.
 
-Possible entries for your jupyter config (`.jupyter/jupyter_notebook*config.py*`). Uncomment below
+Possible entries for your Jupyter config (`.jupyter/jupyter_notebook_config.py`). Uncomment the
 lines below:
 
 ```Bash
@@ -253,11 +251,11 @@ hashed password here>' c.NotebookApp.port = 9999 c.NotebookApp.allow_remote_acce
 Note: `<path-to-cert>` - path to key and certificate files, for example:
 (`/home/<username>/mycert.pem`)
 
-### SLURM job file to run the jupyter server on Taurus with GPU (1x K80) (also works on K20)
+### Slurm job file to run the Jupyter server on Taurus with GPU (1x K80) (also works on K20)
 
 ```Bash
 #!/bin/bash -l
 #SBATCH --gres=gpu:1                         # request GPU
 #SBATCH --partition=gpu2                     # use GPU partition
-#SBATCH --output=notebok_output.txt
+#SBATCH --output=notebook_output.txt
 #SBATCH --nodes=1
 #SBATCH --ntasks=1
 #SBATCH --time=02:30:00
 #SBATCH --mem=4000M
 #SBATCH -J "jupyter-notebook"                # job name
 #SBATCH -A <name_of_your_project>
 
 unset XDG_RUNTIME_DIR   # might be required when interactive instead of sbatch to avoid
@@ -287,7 +285,7 @@ There are two options on how to connect to the server:
 
 1. You can create an SSH tunnel if you have problems with the
 solution above. Open another terminal and configure the SSH
-tunnel: (look up connection values in the output file of slurm job, e.g.) (recommended):
+tunnel (recommended; look up the connection values in the output file of the Slurm job):
 
 ```Bash
 node=taurusi2092                      #see the name of the node with squeue -u <your_login>
@@ -310,11 +308,11 @@ IP to your browser or call on local terminal e.g.  local$> firefox https://<IP>:
 important to use SSL cert
 ```
 
-To login into the jupyter notebook site, you have to enter the **token**.
+To log in to the Jupyter notebook site, you have to enter the **token**.
 (`https://localhost:8887`). Now you can create and execute notebooks on Taurus with GPU support.
 
-If you would like to use [JupyterHub](../access/jupyterhub.md) after using a remote manually configurated
-jupyter server (example above) you need to change the name of the configuration file
+If you would like to use [JupyterHub](../access/jupyterhub.md) after using a remote manually configured
+Jupyter server (example above), you need to change the name of the configuration file
 (`/home//.jupyter/jupyter_notebook_config.py`) to any other name.
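+
+For example (a minimal sketch; the new name is arbitrary):
+
+```Bash
+mv ~/.jupyter/jupyter_notebook_config.py ~/.jupyter/jupyter_notebook_config.py.bak
+```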
 
 ### FAQ
@@ -322,7 +320,7 @@ jupyter server (example above) you need to change the name of the configuration
 **Q:** - I get an error when connecting to the Jupyter server (e.g. "open failed: administratively
 prohibited: open failed")
 
-**A:** - Check the settings of your jupyter config file. Is it all necessary lines uncommented, the
+**A:** - Check the settings of your Jupyter config file: Are all necessary lines uncommented? Are the
 paths to the cert and key files right? Is the hashed password from the .json file correct? Check
 whether the used local port is [available](https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers).
 Check local settings, e.g. (`/etc/ssh/sshd_config`, `/etc/hosts`).
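+
+A quick way to check whether a local port is already in use (port 8887 from the example above):
+
+```Bash
+ss -tlpn | grep 8887    # no output means the port is free
+```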
diff --git a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
index e2831fc2ead270710f9e8d192d8fc51c31a33927..5e4388fcf95ed06370d7d633544ee685113df1a7 100644
--- a/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
+++ b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
@@ -20,7 +20,7 @@ singularity exec xeyes.sif xeyes.
 ```
 
 This works because all the magic is already done by Singularity, like setting $DISPLAY to the outside
-display and mounting $HOME so $HOME/.Xauthority (X11 authentification cookie) is found. When you are
+display and mounting $HOME so $HOME/.Xauthority (X11 authentication cookie) is found. When you are
 using `--contain` or `--no-home` you have to set that cookie yourself or mount/copy it inside
 the container. Similarly, for `--cleanenv` you have to set $DISPLAY e.g. via