diff --git a/doc.zih.tu-dresden.de/docs/archive/deep_learning.md b/doc.zih.tu-dresden.de/docs/archive/deep_learning.md
index da8c9c461fddc3c870ef418bb7db2b1ed493abe8..f00b82d4df2caf3a066b229517c3bdfe3c57455e 100644
--- a/doc.zih.tu-dresden.de/docs/archive/deep_learning.md
+++ b/doc.zih.tu-dresden.de/docs/archive/deep_learning.md
@@ -1,10 +1,10 @@
 # Deep learning
 
 **Prerequisites**: To work with Deep Learning tools, you need [Login](../access/ssh_login.md)
-for the Taurus system and basic knowledge about Python, Slurm manager.
+for ZIH systems and basic knowledge about Python and the Slurm batch system.
 
 The **aim** of this page is to show users how to start working with Deep Learning software on
-both the ml environment and the scs5 environment of the Taurus system.
+both the ml environment and the scs5 environment of ZIH systems.
 
 ## Deep Learning Software
 
@@ -13,23 +13,21 @@ both the ml environment and the scs5 environment of the Taurus system.
 [TensorFlow](https://www.tensorflow.org/guide/) is a free end-to-end open-source software library
 for dataflow and differentiable programming across a range of tasks.
 
-TensorFlow is available in both main partitions
-[ml environment and scs5 environment](modules.md#module-environments)
-under the module name "TensorFlow". However, for purposes of machine learning and deep learning, we
-recommend using Ml partition [HPC-DA](../jobs_and_resources/hpcda.md). For example:
+TensorFlow is available in both [ml environment and scs5 environment](modules.md#module-environments)
+under the module name "TensorFlow". For example:
 
 ```Bash
 module load TensorFlow
 ```
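+
+To quickly verify that the loaded module provides a working installation, a check like the
+following might help (the printed version depends on the loaded module):
+
+```Bash
+python -c "import tensorflow as tf; print(tf.__version__)"
+```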
 
 There are numerous different possibilities on how to work with [TensorFlow](tensorflow.md) on
-Taurus. On this page, for all examples default, scs5 partition is used. Generally, the easiest way
+ZIH systems. On this page, the scs5 partition is used for all examples by default. Generally, the easiest way
 is using the [modules system](modules.md)
 and Python virtual environment (test case). However, in some cases, you may need directly installed
 TensorFlow stable or nightly releases. For this purpose, use
 [EasyBuild](custom_easy_build_environment.md) or [Containers](tensorflow_container_on_hpcda.md), and see
 [the example](https://www.tensorflow.org/install/pip). For examples of using TensorFlow for the ml partition
-with module system see [TensorFlow page for HPC-DA](tensorflow.md).
+with the module system, see the [TensorFlow page](../software/tensorflow.md).
 
 Note: If you are going to use a manually installed TensorFlow release, we recommend using only stable
 versions.
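+
+A minimal sketch of such a manual installation into a Python virtual environment (the pinned
+version is only an example; pick any stable release):
+
+```Bash
+python3 -m venv --system-site-packages tensorflow-env   # create the virtual environment
+source tensorflow-env/bin/activate                      # activate it
+pip install tensorflow==2.3.1                           # install a stable release from PyPI
+```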
@@ -42,11 +40,11 @@ environments [ml environment and scs5 environment](modules.md#module-environment
 name "Keras".
 
 On this page, the scs5 partition is used for all examples by default. There are numerous different
-possibilities on how to work with [TensorFlow](tensorflow.md) and Keras
-on Taurus. Generally, the easiest way is using the [module system](modules.md) and Python
+possibilities on how to work with [TensorFlow](../software/tensorflow.md) and Keras
+on ZIH systems. Generally, the easiest way is using the [module system](modules.md) and Python
 virtual environment (test case); see the TensorFlow section above.
 For examples of using Keras for the ml partition with the module system, see the
-[Keras page for HPC-DA](keras.md).
+[Keras page](../software/keras.md).
 
 Keras can use TensorFlow as its backend. As mentioned in the Keras documentation, Keras is also
 capable of running on the Theano backend. However, since Theano has been abandoned by the
@@ -56,7 +54,7 @@ TensorFlow module. TensorFlow should be loaded automatically as a dependency.
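+
+A short check that Keras and its TensorFlow backend are usable could look like this (module
+name and version may differ between the environments):
+
+```Bash
+module load Keras
+python -c "import keras; print(keras.__version__)"
+```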
 
 Test case: Keras with TensorFlow on MNIST data
 
-Go to a directory on Taurus, get Keras for the examples and go to the examples:
+Go to a directory on ZIH systems, get Keras for the examples, and change into the examples directory:
 
 ```Bash
 git clone https://github.com/fchollet/keras.git
@@ -125,7 +123,7 @@ allocate massive files (more than one terabyte) please contact the support befor
 ### The ImageNet dataset
 
 The [ImageNet](http://www.image-net.org/) project is a large visual database designed for use in
-visual object recognition software research. In order to save space in the file system by avoiding
+visual object recognition software research. In order to save space in the filesystem by avoiding
 to have multiple duplicates of this lying around, we have put a copy of the ImageNet database
 (ILSVRC2012 and ILSVR2017) under `/scratch/imagenet` which you can use without having to download it
 again. For the future, the ImageNet dataset will be available in `/warm_archive`. ILSVR2017 also
@@ -144,7 +142,7 @@ JupyterHub.
 These sections show how to run and set up a remote Jupyter server within an sbatch GPU job and which
 modules and packages you need for that.
 
-**Note:** On Taurus, there is a [JupyterHub](../access/jupyterhub.md), where you do not need the
+**Note:** On ZIH systems, there is a [JupyterHub](../access/jupyterhub.md), where you do not need the
 manual server setup described below and can simply run your Jupyter notebook on HPC nodes. Keep in
 mind that, with JupyterHub, you cannot use some specialized tools. However, general data
 analytics tools are available.
@@ -153,7 +151,7 @@ The remote Jupyter server is able to offer more freedom with settings and approa
 
 ### Preparation phase (optional)
 
-On Taurus, start an interactive session for setting up the
+On ZIH systems, start an interactive session for setting up the
 environment:
 
 ```Bash
@@ -192,7 +190,7 @@ directory (/home/userxx/anaconda3). Create a new anaconda environment with the n
 conda create --name jnb
 ```
 
-### Set environmental variables on Taurus
+### Set environmental variables
 
 In the shell, activate the previously created Python environment (you can
 also deactivate it manually) and install the Jupyter packages for this Python environment:
@@ -251,7 +249,7 @@ hashed password here>' c.NotebookApp.port = 9999 c.NotebookApp.allow_remote_acce
 Note: `<path-to-cert>` - path to key and certificate files, for example:
 (`/home/<username>/mycert.pem`)
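+
+If you still need to create a self-signed certificate, a sketch following the general Jupyter
+documentation could look like this (file names are placeholders):
+
+```Bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mykey.key -out mycert.pem
+```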
 
-### Slurm job file to run the Jupyter server on Taurus with GPU (1x K80) (also works on K20)
+### Slurm job file to run the Jupyter server on ZIH systems with GPU (1x K80, also works on K20)
 
 ```Bash
 #!/bin/bash -l
 #SBATCH --gres=gpu:1             # request GPU
 #SBATCH --partition=gpu2         # use GPU partition
@@ -300,7 +298,7 @@ of the ssh tunnel for connection to your remote server pgrep -f "ssh -fNL ${loca
    hostname**, the **port** of the server and the **token** to login (see paragraph above).
 
 You can connect directly if you know the IP address (just ping the node's hostname while logged on
-Taurus).
+ZIH systems).
 
 ```Bash
 # command on remote terminal
 taurusi2092$> host taurusi2092
 # copy IP address from output
 # paste the IP into your browser; it is important to use the SSL cert
 ```
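+
+The SSH tunnel from your local machine could look like the following sketch (local port, node
+name, server port and login node are placeholders and must match your job and configuration):
+
+```Bash
+marie@local$ ssh -fNL 8887:taurusi2092:9999 marie@<login-node>
+```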
 
 To login into the Jupyter notebook site, you have to enter the **token**.
-(`https://localhost:8887`). Now you can create and execute notebooks on Taurus with GPU support.
+(`https://localhost:8887`). Now you can create and execute notebooks on ZIH systems with GPU support.
 
 If you would like to use [JupyterHub](../access/jupyterhub.md) after using a remote manually configured
 Jupyter server (example above) you need to change the name of the configuration file
diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 3fda99a5acfc67b6117dd4caac2943cd35ede33c..6ede1221eb298c306ec663af3f4dc335a7ae8dc4 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -14,7 +14,7 @@ marie@login$ module av Spark
 ```
 
 The **aim** of this page is to show users how to start working with
-these frameworks on ZIH systems, e. g. on the [HPC-DA](../jobs_and_resources/hpcda.md) system.
+these frameworks on ZIH systems.
 
 **Prerequisites:** To work with the frameworks, you need [access](../access/ssh_login.md) to ZIH
 systems and basic knowledge about data analysis and the batch system
@@ -127,7 +127,7 @@ in an interactive job with:
 marie@compute$ source framework-configure.sh spark my-config-template
 ```
 
-### Using Hadoop Distributed File System (HDFS)
+### Using Hadoop Distributed Filesystem (HDFS)
 
 If you want to use Spark and HDFS together (or in general more than one
 framework), a scheme similar to the following can be used:
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
index 5749669a3ee70c091c1ca79f745104dea9e3b8ea..907df266cdadfc6e5d2cac86c053167cb2e56efe 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
@@ -24,7 +24,7 @@ browser. They allow working with data cleaning and transformation,
 numerical simulation, statistical modeling, data visualization and machine learning.
 
 On ZIH system a [JupyterHub](../access/jupyterhub.md) is available, which can be used to run
-a Jupyter notebook on an HPC node, as well using a GPU when needed.  
+a Jupyter notebook on a node, as well as using a GPU when needed.
 
 ## Parallel Computing with Python
 
@@ -81,7 +81,7 @@ marie@compute$ python -c "import dask; print(dask.__version__)"
 2021.08.1
 ```
 
-The preferred and simplest way to run Dask on HPC system is using
+The preferred and simplest way to run Dask on ZIH systems is using
 [dask-jobqueue](https://jobqueue.dask.org/).
 
 **TODO** create better example with jobqueue
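+
+Until then, a minimal sketch with `dask_jobqueue.SLURMCluster` might look like this (partition,
+resources and scaling are placeholders and have to match your project):
+
+```python
+from dask.distributed import Client
+from dask_jobqueue import SLURMCluster
+
+# describe one Slurm job that hosts Dask workers
+cluster = SLURMCluster(queue='haswell', cores=4, memory='8GB', walltime='00:30:00')
+cluster.scale(jobs=2)       # submit two such worker jobs
+
+client = Client(cluster)    # connect the Dask client to the cluster
+```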
@@ -111,7 +111,7 @@ community. Operations are primarily methods of communicator objects. It
 supports communication of pickle-able Python objects. mpi4py provides
 optimized communication of NumPy arrays.
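+
+As a minimal sketch, a point-to-point exchange between two ranks (started, e.g., with
+`srun --ntasks=2 python <script>`) could look like this:
+
+```python
+from mpi4py import MPI
+
+comm = MPI.COMM_WORLD                       # default communicator
+rank = comm.Get_rank()                      # id of this process
+
+if rank == 0:
+    comm.send("hello", dest=1, tag=11)      # pickle-based send
+elif rank == 1:
+    print(comm.recv(source=0, tag=11))      # receive from rank 0
+```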
 
-mpi4py is included as an extension of the SciPy-bundle modules on an HPC system
+mpi4py is included as an extension of the SciPy-bundle modules on ZIH systems:
 
 ```console
 marie@compute$ module load SciPy-bundle/2020.11-foss-2020b
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
index 0bb17d179eeea559a8f4e1b85a0f68a32cfdd03e..73a0da60473e0f25d2614fb129829fcb8e293719 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
@@ -9,7 +9,7 @@ graphing.
 R possesses an extensive catalog of statistical and graphical methods.  It includes machine
 learning algorithms, linear regression, time series, statistical inference.
 
-We recommend using **Haswell** and/or **Romeo** partitions to work with R. For more details
+We recommend using `haswell` and/or `rome` partitions to work with R. For more details
 see [here](../jobs_and_resources/hardware_taurus.md).
 
 ## R Console
@@ -18,20 +18,13 @@ In the following example the `srun` command is used to submit a real-time execut
 designed for interactive use with monitoring the output. Please check
 [the Slurm page](../jobs_and_resources/slurm.md) for details.
 
-```Bash
-# job submission on haswell nodes with allocating: 1 task, 1 node, 4 CPUs per task with 2541 mb per CPU(core) for 1 hour
-tauruslogin$ srun --partition=haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2541 --time=01:00:00 --pty bash
-
-# Ensure that you are using the scs5 environment
-module load modenv/scs5
-# Check all available modules for R with version 3.6
-module available R/3.6
-# Load default R module
-module load R
-# Checking the current R version
-which R
-# Start R console
-R
+```console
+marie@login$ srun --partition=haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2541 --time=01:00:00 --pty bash
+marie@compute$ module load modenv/scs5
+marie@compute$ module available R/3.6
+marie@compute$ module load R
+marie@compute$ which R
+marie@compute$ R
 ```
 
 Using `srun` is recommended only for short test runs, while for larger runs batch jobs should be
@@ -41,13 +34,12 @@ used. The examples can be found [here](get_started_with_hpcda.md) or
 It is also possible to run the `Rscript` command directly (after loading the module):
 
 ```Bash
-# Run Rscript directly. For instance: Rscript /scratch/ws/0/marie-study_project/my_r_script.R
-Rscript /path/to/script/your_script.R param1 param2
+Rscript /path/to/script/your_script.R <param1> <param2>
 ```
 
 ## R in JupyterHub
 
-In addition to using interactive and batch jobs, it is possible to work with **R** using
+In addition to using interactive and batch jobs, it is possible to work with R using
 [JupyterHub](../access/jupyterhub.md).
 
 The production and test [environments](../access/jupyterhub.md#standard-environments) of
@@ -60,16 +52,14 @@ For using R with RStudio please refer to [Data Analytics with RStudio](data_anal
 ## Install Packages in R
 
 By default, user-installed packages are saved in the user's home directory in a folder depending on
-the architecture (x86 or PowerPC). Therefore the packages should be installed using interactive
+the architecture (`x86` or `PowerPC`). Therefore, the packages should be installed using interactive
 jobs on the compute node:
 
-```Bash
-srun -p haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2541 --time=01:00:00 --pty bash
-
-module purge
-module load modenv/scs5
-module load R
-R -e 'install.packages("package_name")'  #For instance: 'install.packages("ggplot2")'
+```console
+marie@compute$ module load R
+Module R/3.6.0-foss-2019a and 56 dependencies loaded.
+marie@compute$ R -e 'install.packages("ggplot2")'
+[...]
 ```
 
 ## Deep Learning with R
@@ -84,26 +74,18 @@ The ["TensorFlow" R package](https://tensorflow.rstudio.com/) provides R users a
 TensorFlow framework. [TensorFlow](https://www.tensorflow.org/) is an open-source software library
 for numerical computation using data flow graphs.
 
-```Bash
-srun --partition=ml --ntasks=1 --nodes=1 --cpus-per-task=7 --mem-per-cpu=5772 --gres=gpu:1 --time=04:00:00 --pty bash
+The respective modules can be loaded as follows:
 
-module purge
-ml modenv/ml
-ml TensorFlow
-ml R
-
-which python
-mkdir python-virtual-environments  # Create a folder for virtual environments
-cd python-virtual-environments
-python3 -m venv --system-site-packages R-TensorFlow        #create python virtual environment
-source R-TensorFlow/bin/activate                           #activate environment
-module list
-which R
+```console
+marie@compute$ module load R/3.6.2-fosscuda-2019b
+Module R/3.6.2-fosscuda-2019b and 63 dependencies loaded.
+marie@compute$ module load TensorFlow/2.3.1-fosscuda-2019b-Python-3.7.4
+Module TensorFlow/2.3.1-fosscuda-2019b-Python-3.7.4 and 15 dependencies loaded.
 ```
 
-Please allocate the job with respect to
-[hardware specification](../jobs_and_resources/hardware_taurus.md)! Note that the nodes on `ml`
-partition have 4way-SMT, so for every physical core allocated, you will always get 4\*1443Mb=5772mb.
+!!! warning
+    Be aware that for compatibility reasons it is important to choose modules with
+    the same toolchain version (in this case `fosscuda/2019b`). For reference, see the [modules page](modules.md).
 
 In order to interact with Python-based frameworks (like TensorFlow), the `reticulate` R library is used.
 To configure it to point to the correct Python executable, create
@@ -111,18 +93,34 @@ a file named `.Rprofile` in your project directory (e.g. R-TensorFlow) with the
 contents:
 
 ```R
-Sys.setenv(RETICULATE_PYTHON = "/sw/installed/Anaconda3/2019.03/bin/python")    #assign the output of the 'which python' from above to RETICULATE_PYTHON
+Sys.setenv(RETICULATE_PYTHON = "/sw/installed/Python/3.7.4-GCCcore-8.3.0/bin/python")    # assign the path of the Python executable from the loaded modules to RETICULATE_PYTHON
 ```
 
 Let's start R, install some libraries and evaluate the result:
 
-```R
-install.packages("reticulate")
-library(reticulate)
-reticulate::py_config()
-install.packages("tensorflow")
-library(tensorflow)
-tf$constant("Hello TensorFlow")         #In the output 'Tesla V100-SXM2-32GB' should be mentioned
+```rconsole
+> install.packages(c("reticulate", "tensorflow"))
+Installing packages into ‘~/R/x86_64-pc-linux-gnu-library/3.6’
+(as ‘lib’ is unspecified)
+> reticulate::py_config()
+python:         /software/rome/Python/3.7.4-GCCcore-8.3.0/bin/python
+libpython:      /sw/installed/Python/3.7.4-GCCcore-8.3.0/lib/libpython3.7m.so
+pythonhome:     /software/rome/Python/3.7.4-GCCcore-8.3.0:/software/rome/Python/3.7.4-GCCcore-8.3.0
+version:        3.7.4 (default, Mar 25 2020, 13:46:43)  [GCC 8.3.0]
+numpy:          /software/rome/SciPy-bundle/2019.10-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/numpy
+numpy_version:  1.17.3
+
+NOTE: Python version was forced by RETICULATE_PYTHON
+
+> library(tensorflow)
+2021-08-26 16:11:47.110548: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
+> tf$constant("Hello TensorFlow")
+2021-08-26 16:14:00.269248: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
+2021-08-26 16:14:00.674878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
+pciBusID: 0000:0b:00.0 name: A100-SXM4-40GB computeCapability: 8.0
+coreClock: 1.41GHz coreCount: 108 deviceMemorySize: 39.59GiB deviceMemoryBandwidth: 1.41TiB/s
+[...]
+tf.Tensor(b'Hello TensorFlow', shape=(), dtype=string)
 ```
 
 ??? example
@@ -203,13 +201,7 @@ tf$constant("Hello TensorFlow")         #In the output 'Tesla V100-SXM2-32GB' sh
 ## Parallel Computing with R
 
 Generally, the R code is serial. However, many computations in R can be made faster by the use of
-parallel computations. Taurus allows a vast number of options for parallel computations. Large
-amounts of data and/or use of complex models are indications to use parallelization.
-
-### General Information about the R Parallelism
-
-There are various techniques and packages in R that allow parallelization. This section
-concentrates on most general methods and examples. The Information here is Taurus-specific.
+parallel computations. This section concentrates on the most general methods and examples.
 The [parallel](https://www.rdocumentation.org/packages/parallel/versions/3.6.2) library
 will be used below.
 
@@ -297,7 +289,8 @@ This way of the R parallelism uses the
 [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) (Message Passing Interface) as a
 "back-end" for its parallel operations. The MPI-based job in R is very similar to submitting an
 [MPI Job](../jobs_and_resources/slurm.md#binding-and-distribution-of-tasks) since both are running
-multicore jobs on multiple nodes. Below is an example of running R script with the Rmpi on Taurus:
+multicore jobs on multiple nodes. Below is an example of running an R script with Rmpi on
+ZIH systems:
 
 ```Bash
 #!/bin/bash
@@ -305,8 +298,8 @@ multicore jobs on multiple nodes. Below is an example of running R script with t
 #SBATCH --ntasks=32              # this parameter determines how many processes will be spawned, please use >=8
 #SBATCH --cpus-per-task=1
 #SBATCH --time=01:00:00
-#SBATCH -o test_Rmpi.out
-#SBATCH -e test_Rmpi.err
+#SBATCH --output=test_Rmpi.out
+#SBATCH --error=test_Rmpi.err
 
 module purge
 module load modenv/scs5
@@ -323,10 +316,10 @@ However, in some specific cases, you can specify the number of nodes and the num
 tasks per node explicitly:
 
 ```Bash
-#!/bin/bash
 #SBATCH --nodes=2
 #SBATCH --tasks-per-node=16
 #SBATCH --cpus-per-task=1
+
 module purge
 module load modenv/scs5
 module load R
@@ -395,7 +388,7 @@ Another example:
     #snow::stopCluster(cl)  # usually it hangs over here with OpenMPI > 2.0. In this case this command may be avoided, Slurm will clean up after the job finishes
     ```
 
-To use Rmpi and MPI please use one of these partitions: **haswell**, **broadwell** or **rome**.
+To use Rmpi and MPI please use one of these partitions: `haswell`, `broadwell` or `rome`.
 
 Use the `mpirun` command to start the R script. It is a wrapper that enables the communication
 between processes running on different nodes. It is important to use `-np 1` (the number of spawned
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md
index d199e8eb792c74adcef5fa0a3f550f8fb02c9c57..7fd89780ade18eb15b7cc116ff89b1a778d876f2 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_rstudio.md
@@ -14,5 +14,5 @@ similarly to a new kernel from [JupyterLab](../access/jupyterhub.md#jupyterlab)
     If an error "could not start RStudio in time" occurs, try reloading the web page with F5.
 
 ??? note
-    Please note that it is currently not recommended to use an interactive x11 job with the
-    desktop version of RStudio, as described, for example, in introduction to HPC-DA slides.
+    Please note that it is currently not recommended to use an interactive `x11` job with the
+    desktop version of RStudio as described in the introductory slides.
diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index 8d2e523942513a65752b654d0053938c8203c067..ae707cc2b7be066e01bde4ebc13adf425979cd4a 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -60,7 +60,7 @@ package to synchronize gradients and buffers.
 
 The tutorial can be found [here](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
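+
+A minimal sketch of the per-process setup, assuming one process per GPU and that the process
+group environment variables (`MASTER_ADDR`, `MASTER_PORT`, `RANK`, `WORLD_SIZE`) are already set
+by your launcher:
+
+```python
+import os
+import torch
+import torch.distributed as dist
+from torch.nn.parallel import DistributedDataParallel as DDP
+
+dist.init_process_group(backend="nccl")          # join the process group
+local_rank = int(os.environ.get("LOCAL_RANK", 0))
+torch.cuda.set_device(local_rank)                # one GPU per process
+
+model = torch.nn.Linear(10, 10).cuda(local_rank)
+ddp_model = DDP(model, device_ids=[local_rank])  # gradients are synchronized automatically
+```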
 
-To use distributed data parallelization on Taurus please use following
+To use distributed data parallelization on ZIH systems, please use the following
 parameters: set `--ntasks-per-node` to the number of GPUs you use
 per node. Also, it could be useful to increase the memory/CPU parameters
 if you run larger models. Memory can be set up to:
@@ -93,7 +93,7 @@ in some cases better results than pure TensorFlow and PyTorch.
 
 Horovod is available as a module with **TensorFlow** or **PyTorch** for **all** module environments.
 Please check the [software module list](modules.md) for the current version of the software.
-Horovod can be loaded like other software on the Taurus:
+Horovod can be loaded like other software on ZIH systems:
 
 ```Bash
 ml av Horovod            #Check available modules with Python
diff --git a/doc.zih.tu-dresden.de/docs/software/machine_learning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
index 60a8288a84874c9993813af02c6990f590a5fb73..8225aa0a5528157a70a684ef99eb18041d6eaa04 100644
--- a/doc.zih.tu-dresden.de/docs/software/machine_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -90,7 +90,7 @@ virtual environment.
 
 The [Jupyter Notebook](https://jupyter.org/) is an open-source web application that allows you to
 create documents containing live code, equations, visualizations, and narrative text. [JupyterHub](../access/jupyterhub.md)
-allows to work with machine learning frameworks (e.g. TensorFlow or PyTorch) on Taurus and to run
+allows you to work with machine learning frameworks (e.g. TensorFlow or PyTorch) on ZIH systems and to run
 your Jupyter notebooks on HPC nodes.
 
 After accessing JupyterHub, you can start a new session and configure it. For machine learning
@@ -109,7 +109,6 @@ TensorFlow and PyTorch on the board:
   Community-supported `ppc64le` docker container for TensorFlow.
 * [PowerAI container](https://hub.docker.com/r/ibmcom/powerai/):
   Official Docker container with TensorFlow, PyTorch and many other packages.
-  Heavy container. It requires a lot of space. Could be found on Taurus.
 
 Note: You can find other software versions of the container on the "Tags" tab of its Docker Hub
 page.
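+
+On ZIH systems, Docker containers are typically run via Singularity. Pulling one of the
+containers above could look like this (the tag is a placeholder):
+
+```Bash
+singularity pull docker://ibmcom/powerai:<tag>
+```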
diff --git a/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md b/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
index aed3f90eb07b75ce4147fe95f50b2cb96f06ba86..7772f01834147d0ef51c4241add5fdbef041f22e 100644
--- a/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
+++ b/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
@@ -44,14 +44,14 @@ Successfully installed torchvision-0.10.0
     clear up the following. Maybe leave only conda stuff...
 
 There are two methods of how to work with virtual environments on
-Taurus:
+ZIH systems:
 
 1. **virtualenv** is a standard Python tool to create isolated Python environments.
    It is the preferred interface for
-   managing installations and virtual environments on Taurus and part of the Python modules.
+   managing installations and virtual environments on ZIH systems and is part of the Python
+   modules (see the sketch below).
 
 2. **conda** is an alternative method for managing installations and
-virtual environments on Taurus. conda is an open-source package
+virtual environments on ZIH systems. conda is an open-source package
 management system and environment management system from Anaconda. The
 conda manager is included in all versions of Anaconda and Miniconda.
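+
+A minimal sketch for the first method, virtualenv (paths and package are placeholders):
+
+```console
+marie@compute$ module load Python
+marie@compute$ python3 -m venv --system-site-packages /path/to/workspace/my-env
+marie@compute$ source /path/to/workspace/my-env/bin/activate
+(my-env) marie@compute$ pip install torchvision
+```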