diff --git a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
new file mode 100644
index 0000000000000000000000000000000000000000..ce009ace4bdcfc58fc20009eafbc6faf6c4fd553
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
@@ -0,0 +1,158 @@
+# BeeGFS Filesystem
+
+!!! warning
+
+    This documentation page is outdated.
+    The up-to-date documentation on BeeGFS can be found [here](../data_lifecycle/beegfs.md).
+
+**Prerequisites:** To work with BeeGFS, you need a [login](../application/overview.md) to
+the ZIH systems and basic knowledge about Linux, mounting, and the batch system Slurm.
+
+**Aim** of this page is to introduce users to working with the BeeGFS filesystem - a
+high-performance parallel filesystem.
+
+## Mount Point
+
+Understanding of mounting and the concept of the mount point is important for using filesystems and
+object storage. A mount point is a directory (typically an empty one) in the currently accessible
+filesystem on which an additional filesystem is mounted (i.e., logically attached).  The default
+mount points for a system are the directories in which filesystems will be automatically mounted
+unless told by the user to do otherwise.  All partitions are attached to the system via a mount
+point. The mount point defines the place of a particular data set in the filesystem. Usually, all
+partitions are connected through the root partition. On this partition, which is indicated with the
+slash (/), directories are created.
+
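+For example, you can display the mount point and filesystem type that serve a given path with
+`findmnt` (a quick sketch; the path is just an example):
+
+```console
+findmnt --target /scratch
+```
+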
+## BeeGFS Introduction
+
+[BeeGFS](https://www.beegfs.io/content/) is a parallel cluster filesystem. BeeGFS spreads data
+across multiple servers to aggregate capacity and performance of all servers to provide a highly
+scalable shared network filesystem with striped file contents. This is made possible by the
+separation of metadata and file contents.
+
+BeeGFS is a fast, flexible, and easy-to-manage storage solution. If the filesystem plays an
+important role for your use case, consider BeeGFS. It addresses everyone who needs large and/or
+fast file storage.
+
+## Create BeeGFS Filesystem
+
+To reserve nodes for creating a BeeGFS filesystem, you need to submit a
+[batch](../jobs_and_resources/slurm.md) job:
+
+```Bash
+#!/bin/bash
+#SBATCH -p nvme
+#SBATCH -N 4
+#SBATCH --exclusive
+#SBATCH --time=1-00:00:00
+#SBATCH --beegfs-create=yes
+
+srun sleep 1d  # sleep for one day
+
+## when finished writing, submit with:  sbatch <script_name>
+```
+
+Example output with job id:
+
+```console
+Submitted batch job 11047414   #Job id n.1
+```
+
+Check the status of the job with `squeue -u <username>`.
+
+## Mount BeeGFS Filesystem
+
+You can mount the BeeGFS filesystem on the partition `ml` (PowerPC architecture) or on the Haswell
+[partition](../jobs_and_resources/system_taurus.md) (x86_64 architecture).
+
+### Mount BeeGFS Filesystem on the Partition `ml`
+
+Job submission can be done with the following command (use the job id (n.1) from the batch job
+used for creating the BeeGFS filesystem):
+
+```console
+srun -p ml --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash                #Job submission on ml nodes
+```
+
+Example output:
+
+```console
+srun: job 11054579 queued and waiting for resources         #Job id n.2
+srun: job 11054579 has been allocated resources
+```
+
+### Mount BeeGFS Filesystem on the Haswell Nodes (x86_64)
+
+Job submission can be done with the following command (use the job id (n.1) from the batch job
+used for creating the BeeGFS filesystem):
+
+```console
+srun --constraint=DA --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash      #Job submission on the Haswell nodes
+```
+
+Example output:
+
+```console
+srun: job 11054580 queued and waiting for resources          #Job id n.2
+srun: job 11054580 has been allocated resources
+```
+
+## Working with BeeGFS Files for Both Types of Nodes
+
+Show the content of the previously created file, for example,
+`.beegfs_11054579` (where `11054579` is the job id **n.2** of the `srun` job):
+
+```console
+cat .beegfs_11054579
+```
+
+Note: Do not forget to change to your home directory where the file is located.
+
+Example output:
+
+```Bash
+#!/bin/bash
+
+export BEEGFS_USER_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<your_id>"
+export BEEGFS_PROJECT_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<name of your project>"
+```
+
+Execute the content of the file:
+
+```console
+source .beegfs_11054579
+```
+
+Show the content of the user's BeeGFS directory with the command:
+
+```console
+ls -la ${BEEGFS_USER_DIR}
+```
+
+Example output:
+
+```console
+total 0
+drwx--S--- 2 <username> swtest  6 21. Jun 10:54 .
+drwxr-xr-x 4 root        root  36 21. Jun 10:54 ..
+```
+
+Show the content of the user's project BeeGFS directory with the command:
+
+```console
+ls -la ${BEEGFS_PROJECT_DIR}
+```
+
+Example output:
+
+```console
+total 0
+drwxrws--T 2 root swtest  6 21. Jun 10:54 .
+drwxr-xr-x 4 root root   36 21. Jun 10:54 ..
+```
+
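+A typical next step is to stage data into the on-demand filesystem and work from there (a sketch;
+the source path and workspace name are just examples):
+
+```console
+cp -r /scratch/ws/<username>-input-data ${BEEGFS_USER_DIR}/
+ls -la ${BEEGFS_USER_DIR}/<username>-input-data
+```
+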
+!!! note
+
+    If you want to mount the BeeGFS filesystem on an x86 instead of an ML (power) node, you can
+    either choose the partition `interactive` or the partition `haswell64`, but for the partition
+    `haswell64` you have to add the parameter `--exclude=taurusi[4001-4104,5001-5612]` to your job.
+    This is necessary because the BeeGFS client is only installed on the 6000 island.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
deleted file mode 100644
index 14354286e9793d85f92f8456e733187cb826e854..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
+++ /dev/null
@@ -1,144 +0,0 @@
-# BeeGFS file system
-
-%RED%Note: This page is under construction. %ENDCOLOR%%RED%The pipeline
-will be changed soon%ENDCOLOR%
-
-**Prerequisites:** To work with Tensorflow you obviously need \<a
-href="Login" target="\_blank">access\</a> for the Taurus system and
-basic knowledge about Linux, mounting, SLURM system.
-
-**Aim** \<span style="font-size: 1em;"> of this page is to introduce
-users how to start working with the BeeGFS file\</span>\<span
-style="font-size: 1em;"> system - a high-performance parallel file
-system.\</span>
-
-## Mount point
-
-Understanding of mounting and the concept of the mount point is
-important for using file systems and object storage. A mount point is a
-directory (typically an empty one) in the currently accessible file
-system on which an additional file system is mounted (i.e., logically
-attached). \<span style="font-size: 1em;">The default mount points for a
-system are the directories in which file systems will be automatically
-mounted unless told by the user to do otherwise. \</span>\<span
-style="font-size: 1em;">All partitions are attached to the system via a
-mount point. The mount point defines the place of a particular data set
-in the file system. Usually, all partitions are connected through the
-root partition. On this partition, which is indicated with the slash
-(/), directories are created. \</span>
-
-## BeeGFS introduction
-
-\<span style="font-size: 1em;"> [BeeGFS](https://www.beegfs.io/content/)
-is the parallel cluster file system. \</span>\<span style="font-size:
-1em;">BeeGFS spreads data \</span>\<span style="font-size: 1em;">across
-multiple \</span>\<span style="font-size: 1em;">servers to aggregate
-\</span>\<span style="font-size: 1em;">capacity and \</span>\<span
-style="font-size: 1em;">performance of all \</span>\<span
-style="font-size: 1em;">servers to provide a highly scalable shared
-network file system with striped file contents. This is made possible by
-the separation of metadata and file contents. \</span>
-
-BeeGFS is fast, flexible, and easy to manage storage if for your issue
-filesystem plays an important role use BeeGFS. It addresses everyone,
-who needs large and/or fast file storage
-
-## Create BeeGFS file system
-
-To reserve nodes for creating BeeGFS file system you need to create a
-[batch](../jobs_and_resources/slurm.md) job
-
-    #!/bin/bash
-    #SBATCH -p nvme
-    #SBATCH -N 4
-    #SBATCH --exclusive
-    #SBATCH --time=1-00:00:00
-    #SBATCH --beegfs-create=yes
-
-    srun sleep 1d  # sleep for one day
-
-    ## when finished writing, submit with:  sbatch <script_name>
-
-Example output with job id:
-
-    Submitted batch job 11047414   #Job id n.1
-
-Check the status of the job with 'squeue -u \<username>'
-
-## Mount BeeGFS file system
-
-You can mount BeeGFS file system on the ML partition (ppc64
-architecture) or on the Haswell [partition](../jobs_and_resources/system_taurus.md) (x86_64
-architecture)
-
-### Mount BeeGFS file system on the ML
-
-Job submission can be done with the command (use job id (n.1) from batch
-job used for creating BeeGFS system):
-
-    srun -p ml --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash                #Job submission on ml nodes
-
-Example output:
-
-    srun: job 11054579 queued and waiting for resources         #Job id n.2
-    srun: job 11054579 has been allocated resources
-
-### Mount BeeGFS file system on the Haswell nodes (x86_64)
-
-Job submission can be done with the command (use job id (n.1) from batch
-job used for creating BeeGFS system):
-
-    srun --constrain=DA --beegfs-mount=yes --beegfs-jobid=11047414 --pty bash       #Job submission on the Haswell nodes
-
-Example output:
-
-    srun: job 11054580 queued and waiting for resources          #Job id n.2
-    srun: job 11054580 has been allocated resources
-
-## Working with BeeGFS files for both types of nodes
-
-Show contents of the previously created file, for example,
-beegfs_11054579 (where 11054579 - job id **n.2** of srun job):
-
-    cat .beegfs_11054579
-
-Note: don't forget to go over to your home directory where the file
-located
-
-Example output:
-
-    #!/bin/bash
-
-    export BEEGFS_USER_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<your_id>"
-    export BEEGFS_PROJECT_DIR="/mnt/beegfs/<your_id>_<name_of_your_job>/<name of your project>" 
-
-Execute the content of the file:
-
-    source .beegfs_11054579
-
-Show content of user's BeeGFS directory with the command:
-
-    ls -la ${BEEGFS_USER_DIR}
-
-Example output:
-
-    total 0
-    drwx--S--- 2 <username> swtest  6 21. Jun 10:54 .
-    drwxr-xr-x 4 root        root  36 21. Jun 10:54 ..
-
-Show content of the user's project BeeGFS directory with the command:
-
-    ls -la ${BEEGFS_PROJECT_DIR}
-
-Example output:
-
-    total 0
-    drwxrws--T 2 root swtest  6 21. Jun 10:54 .
-    drwxr-xr-x 4 root root   36 21. Jun 10:54 ..
-
-Note: If you want to mount the BeeGFS file system on an x86 instead of
-an ML (power) node, you can either choose the partition "interactive" or
-the partition "haswell64", but for the partition "haswell64" you have to
-add the parameter "--exclude=taurusi\[4001-4104,5001- 5612\]" to your
-job. This is necessary because the BeeGFS client is only installed on
-the 6000 island.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/beegfs.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/beegfs.md
new file mode 100644
index 0000000000000000000000000000000000000000..1e2460c3852ffc2a59c8f3a1b8f7c6fcc66b5efb
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/beegfs.md
@@ -0,0 +1,73 @@
+# BeeGFS
+
+Commands to work with the BeeGFS filesystem.
+
+## Capacity and Filesystem Health
+
+View storage and inode capacity and utilization for metadata and storage targets.
+
+```console
+marie@login$ beegfs-df -p /beegfs/global0
+```
+
+The `-p` parameter needs to be the mountpoint of the filesystem and is mandatory.
+
+List storage and inode capacity, reachability and consistency information of each storage target.
+
+```console
+marie@login$ beegfs-ctl --listtargets --nodetype=storage --spaceinfo --longnodes --state --mount=/beegfs/global0
+```
+
+To check the capacity of the metadata server, just toggle the `--nodetype` argument.
+
+```console
+marie@login$ beegfs-ctl --listtargets --nodetype=meta --spaceinfo --longnodes --state --mount=/beegfs/global0
+```
+
+## Striping
+
+Show the stripe information of a given file on the filesystem and on which storage target the
+file is stored.
+
+```console
+marie@login$ beegfs-ctl --getentryinfo /beegfs/global0/my-workspace/myfile --mount=/beegfs/global0
+```
+
+Set the stripe pattern for a directory. In BeeGFS, the stripe pattern will be inherited from a
+directory to its children.
+
+```console
+marie@login$ beegfs-ctl --setpattern --chunksize=1m --numtargets=16 /beegfs/global0/my-workspace/ --mount=/beegfs/global0
+```
+
+This will set the stripe pattern for `/beegfs/global0/my-workspace/` to a chunk size of 1 MiB
+distributed over 16 storage targets.
+
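+To verify the new pattern, you can query the entry info of the directory (a sketch, reusing the
+example path from above):
+
+```console
+marie@login$ beegfs-ctl --getentryinfo /beegfs/global0/my-workspace/ --mount=/beegfs/global0
+```
+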
+Find files located on certain servers or targets. The following command searches for all files in
+the `my-workspace` directory that are stored on the storage targets with id 4 or 30.
+
+```console
+marie@login$ beegfs-ctl --find /beegfs/global0/my-workspace/ --targetid=4 --targetid=30 --mount=/beegfs/global0
+```
+
+## Network
+
+View the network addresses of the filesystem servers.
+
+```console
+marie@login$ beegfs-ctl --listnodes --nodetype=meta --nicdetails --mount=/beegfs/global0
+marie@login$ beegfs-ctl --listnodes --nodetype=storage --nicdetails --mount=/beegfs/global0
+marie@login$ beegfs-ctl --listnodes --nodetype=client --nicdetails --mount=/beegfs/global0
+```
+
+Display the connections the client is actually using:
+
+```console
+marie@login$ beegfs-net
+```
+
+Display the possible connectivity of the services:
+
+```console
+marie@login$ beegfs-check-servers -p /beegfs/global0
+```
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index 5365ac4f3cfed5c4bc6a7051802bfbfe1eb7b17d..4174e2b46c0ff69b3fd6d9a12b0cf626e296bd88 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -1,46 +1,23 @@
 # Overview
 
-As soon as you have access to ZIH systems you have to manage your data. Several file systems are
-available. Each file system serves for special purpose according to their respective capacity,
+As soon as you have access to ZIH systems, you have to manage your data. Several filesystems are
+available. Each filesystem serves a special purpose according to its respective capacity,
 performance and permanence.
 
 ## Work Directories
 
-| File system | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
+| Filesystem  | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
 |:------------|:------------------|:---------|:-------------|:-------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via [Workspaces](workspaces.md). Not made for billions of files!                                                                                   |
 | `Lustre`    | `/lustre/ssd`     | 40 TB    | global       | No     | Only accessible via [Workspaces](workspaces.md). For small I/O operations                                                                                          |
-| `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via [Workspaces](workspaces.md). Fastest available file system, only for large parallel applications running with millions of small I/O operations |
+| `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via [Workspaces](workspaces.md). Fastest available filesystem, only for large parallel applications running with millions of small I/O operations |
 | `ext4`      | `/tmp`            | 95 GB    | local        | No     | is cleaned up after the job automatically  |
 
-## Warm Archive
-
-!!! warning
-    This is under construction. The functionality is not there, yet.
-
-The warm archive is intended a storage space for the duration of a running HPC-DA project. It can
-NOT substitute a long-term archive. It consists of 20 storage nodes with a net capacity of 10 PB.
-Within Taurus (including the HPC-DA nodes), the management software "Quobyte" enables access via
-
-- native quobyte client - read-only from compute nodes, read-write
-  from login and nvme nodes
-- S3 - read-write from all nodes,
-- Cinder (from OpenStack cluster).
-
-For external access, you can use:
-
-- S3 to `<bucket>.s3.taurusexport.hrsk.tu-dresden.de`
-- or normal file transfer via our taurusexport nodes (see [DataManagement](overview.md)).
-
-An HPC-DA project can apply for storage space in the warm archive. This is limited in capacity and
-duration.
-TODO
-
-## Recommendations for File System Usage
+## Recommendations for Filesystem Usage
 
 To work as efficient as possible, consider the following points
 
-- Save source code etc. in `/home` or /projects/...
+- Save source code etc. in `/home` or `/projects/...`
 - Store checkpoints and other temporary data in `/scratch/ws/...`
 - Compilation in `/dev/shm` or `/tmp`
 
@@ -50,102 +27,30 @@ Getting high I/O-bandwidth
 - Use many processes (writing in the same file at the same time is possible)
 - Use large I/O transfer blocks
 
-## Cheat Sheet for Debugging File System Issues
+## Cheat Sheet for Debugging Filesystem Issues
 
-Every Taurus-User should normally be able to perform the following commands to get some intel about
+Users can use the following commands to get some insight into
 their data.
 
 ### General
 
-For the first view, you can easily use the "df-command".
-
-```Bash
-df
-```
-
-Alternatively, you can use the "findmnt"-command, which is also able to perform an `df` by adding the
-"-D"-parameter.
-
-```Bash
-findmnt -D
-```
-
-Optional you can use the `-t`-parameter to specify the fs-type or the `-o`-parameter to alter the
-output.
-
-We do **not recommend** the usage of the "du"-command for this purpose.  It is able to cause issues
-for other users, while reading data from the filesystem.
-
-### BeeGFS
-
-Commands to work with the BeeGFS file system.
-
-#### Capacity and file system health
-
-View storage and inode capacity and utilization for metadata and storage targets.
-
-```Bash
-beegfs-df -p /beegfs/global0
-```
-
-The `-p` parameter needs to be the mountpoint of the file system and is mandatory.
-
-List storage and inode capacity, reachability and consistency information of each storage target.
-
-```Bash
-beegfs-ctl --listtargets --nodetype=storage --spaceinfo --longnodes --state --mount=/beegfs/global0
-```
-
-To check the capacity of the metadata server just toggle the `--nodetype` argument.
-
-```Bash
-beegfs-ctl --listtargets --nodetype=meta --spaceinfo --longnodes --state --mount=/beegfs/global0
-```
-
-#### Striping
-
-View the stripe information of a given file on the file system and shows on which storage target the
-file is stored.
-
-```Bash
-beegfs-ctl --getentryinfo /beegfs/global0/my-workspace/myfile --mount=/beegfs/global0
-```
-
-Set the stripe pattern for an directory. In BeeGFS the stripe pattern will be inherited form a
-directory to its children.
-
-```Bash
-beegfs-ctl --setpattern --chunksize=1m --numtargets=16 /beegfs/global0/my-workspace/ --mount=/beegfs/global0
-```
-
-This will set the stripe pattern for `/beegfs/global0/path/to/mydir/` to a chunksize of 1M
-distributed over 16 storage targets.
+For a first overview, you can use the command `df`.
 
-Find files located on certain server or targets. The following command searches all files that are
-stored on the storage targets with id 4 or 30 and my-workspace directory.
-
-```Bash
-beegfs-ctl --find /beegfs/global0/my-workspace/ --targetid=4 --targetid=30 --mount=/beegfs/global0
+```console
+marie@login$ df
 ```
 
-#### Network
-
-View the network addresses of the file system servers.
+Alternatively, you can use the command `findmnt`, which is also able to report space usage
+by adding the parameter `-D`:
 
-```Bash
-beegfs-ctl --listnodes --nodetype=meta --nicdetails --mount=/beegfs/global0
-beegfs-ctl --listnodes --nodetype=storage --nicdetails --mount=/beegfs/global0
-beegfs-ctl --listnodes --nodetype=client --nicdetails --mount=/beegfs/global0
+```console
+marie@login$ findmnt -D
 ```
 
-Display connections the client is actually using
+Optionally, you can use the parameter `-t` to specify the filesystem type or the parameter `-o` to
+alter the output.
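+
+For example, to restrict the report to certain filesystem types or to select the columns of
+interest (a sketch; the type list is just an example):
+
+```console
+marie@login$ findmnt -D -t lustre,beegfs
+marie@login$ findmnt -D -o SOURCE,FSTYPE,SIZE,AVAIL,USE%,TARGET
+```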
 
-```Bash
-beegfs-net
-```
+!!! important
 
-Display possible connectivity of the services
-
-```Bash
-beegfs-check-servers -p /beegfs/global0
-```
+    Do **not** use the `du` command for this purpose. It can cause issues
+    for other users while reading data from the filesystem.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
deleted file mode 100644
index 998699215481e1318a3b5aa036eac8b56fa7d94e..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# HPC Storage Changes 2019
-
-## Hardware changes require new approach**
-
-\<font face="Open Sans, sans-serif">At the moment we are preparing to
-remove our old hardware from 2013. This comes with a shrinking of our
-/scratch from 5 to 4 PB. At the same time we have now our "warm archive"
-operational for HPC with a capacity of 5 PB for now. \</font>
-
-\<font face="Open Sans, sans-serif">The tool concept of "workspaces" is
-common in a large number of HPC centers. The idea is to allocate a
-workspace directory in a certain storage system - connected with an
-expiry date. After a grace period the data is deleted automatically. The
-validity of a workspace can be extended twice. \</font>
-
-## \<font face="Open Sans, sans-serif"> **How to use workspaces?** \</font>
-
-\<font face="Open Sans, sans-serif">We have prepared a few examples at
-<https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/WorkSpaces>\</font>
-
--   \<p>\<font face="Open Sans, sans-serif">For transient data, allocate
-    a workspace, run your job, remove data, and release the workspace
-    from with\</font>\<font face="Open Sans, sans-serif">i\</font>\<font
-    face="Open Sans, sans-serif">n your job file.\</font>\</p>
--   \<p>\<font face="Open Sans, sans-serif">If you are working on a set
-    of data for weeks you might use workspaces in scratch and share them
-    with your groups by setting the file access attributes.\</font>\</p>
--   \<p>\<font face="Open Sans, sans-serif">For \</font>\<font
-    face="Open Sans, sans-serif">mid-term storage (max 3 years), use our
-    "warm archive" which is large but slow. It is available read-only on
-    the compute hosts and read-write an login and export nodes. To move
-    in your data, you might want to use the
-    [datamover nodes](../data_transfer/data_mover.md).\</font>\</p>
-
-## \<font face="Open Sans, sans-serif">Moving Data from /scratch and /lustre/ssd to your workspaces\</font>
-
-We are now mounting /lustre/ssd and /scratch read-only on the compute
-nodes. As soon as the non-workspace /scratch directories are mounted
-read-only on the login nodes as well, you won't be able to remove your
-old data from there in the usual way. So you will have to use the
-DataMover commands and ideally just move your data to your prepared
-workspace:
-
-```Shell Session
-dtmv /scratch/p_myproject/some_data /scratch/ws/myuser-mynewworkspace
-#or:
-dtmv /scratch/p_myproject/some_data /warm_archive/ws/myuser-mynewworkspace
-```
-
-Obsolete data can also be deleted like this:
-
-```Shell Session
-dtrm -rf /scratch/p_myproject/some_old_data
-```
-
-**%RED%At the end of the year we will delete all data on /scratch and
-/lsuter/ssd outside the workspaces.%ENDCOLOR%**
-
-## Data life cycle management
-
-\<font face="Open Sans, sans-serif">Please be aware: \</font>\<font
-face="Open Sans, sans-serif">Data in workspaces will be deleted
-automatically after the grace period.\</font>\<font face="Open Sans,
-sans-serif"> This is especially true for the warm archive. If you want
-to keep your data for a longer time please use our options for
-[long-term storage](preservation_research_data.md).\</font>
-
-\<font face="Open Sans, sans-serif">To \</font>\<font face="Open Sans,
-sans-serif">help you with that, you can attach your email address for
-notification or simply create an ICAL entry for your calendar
-(tu-dresden.de mailboxes only). \</font>
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
index 2d20726755cf07c9d4a4f9f87d3ae4d2b5825dbc..e63f3f2876f98aeaa8c6a08e41fd21cc8eab7869 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
@@ -1,8 +1,8 @@
 # Intermediate Archive
 
-With the "Intermediate Archive", ZIH is closing the gap between a normal disk-based file system and
-[Longterm Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical file
-system with disks for buffering and tapes for storing research data.
+With the "Intermediate Archive", ZIH is closing the gap between a normal disk-based filesystem and
+[Longterm Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical
+filesystem with disks for buffering and tapes for storing research data.
 
 Its intended use is the storage of research data for a maximal duration of 3 years. For storing the
 data after exceeding this time, the user has to supply essential metadata and migrate the files to
@@ -14,32 +14,31 @@ Some more information:
 - Maximum file size in the archive is 500 GB (split up your files, see
   [Datamover](../data_transfer/data_mover.md))
 - Data will be stored in two copies on tape.
-- The bandwidth to this data is very limited. Hence, this file system
+- The bandwidth to this data is very limited. Hence, this filesystem
   must not be used directly as input or output for HPC jobs.
 
-## How to access the "Intermediate Archive"
+## Access the Intermediate Archive
 
 For storing and restoring your data in/from the "Intermediate Archive" you can use the tool
-[Datamover](../data_transfer/data_mover.md). To use the DataMover you have to login to Taurus
-(taurus.hrsk.tu-dresden.de).
+[Datamover](../data_transfer/data_mover.md). To use the Datamover, you have to log in to the ZIH
+systems.
 
-### Store data
+### Store Data
 
-```Shell Session
-dtcp -r /<directory> /archiv/<project or user>/<directory> # or
-dtrsync -av /<directory> /archiv/<project or user>/<directory>
+```console
+marie@login$ dtcp -r /<directory> /archiv/<project or user>/<directory> # or
+marie@login$ dtrsync -av /<directory> /archiv/<project or user>/<directory>
 ```
 
-### Restore data
+### Restore Data
 
-```Shell Session
-dtcp -r /archiv/<project or user>/<directory> /<directory> # or
-dtrsync -av /archiv/<project or user>/<directory> /<directory>
+```console
+marie@login$ dtcp -r /archiv/<project or user>/<directory> /<directory> # or
+marie@login$ dtrsync -av /archiv/<project or user>/<directory> /<directory>
 ```
 
 ### Examples
 
-```Shell Session
-dtcp -r /scratch/rotscher/results /archiv/rotscher/ # or
-dtrsync -av /scratch/rotscher/results /archiv/rotscher/results
+```console
+marie@login$ dtcp -r /scratch/rotscher/results /archiv/rotscher/ # or
+marie@login$ dtrsync -av /scratch/rotscher/results /archiv/rotscher/results
 ```
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
index 891808543974bb9ad92ed9897762f0d6d66bdbe2..d08a5d5f59490a8236fb6710b28d24d9a01fcfe6 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
@@ -1,11 +1,11 @@
-# Lustre File System(s)
+# Lustre Filesystems
 
 ## Large Files in /scratch
 
 The data containers in [Lustre](https://www.lustre.org) are called object storage targets (OST). The
 capacity of one OST is about 21 TB. All files are striped over a certain number of these OSTs. For
 small and medium files, the default number is 2. As soon as a file grows above ~1 TB it makes sense
-to spread it over a higher number of OSTs, e.g. 16. Once the file system is used >75%, the average
+to spread it over a higher number of OSTs, e.g. 16. Once the filesystem is used >75%, the average
 space per OST is only 5 GB. So, it is essential to split your larger files so that the chunks can be
 saved!
 
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md
index e1b5fca65e562a243590c8fb55f92242b2265b4a..e20e2ace134dad1c4fbbb94b2fc3d0a0f1401df1 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/overview.md
@@ -4,9 +4,9 @@ Correct organization of the structure of an HPC project is a straightforward way
 work of the whole team. There have to be rules and regulations that every member should follow. The
 uniformity of the project can be achieved by taking into account and setting up correctly
 
-  * the same **set of software** (modules, compiler, packages, libraries, etc),
-  * a defined **data life cycle management** including the same **data storage** or set of them,
-  * and **access rights** to project data.
+* the same **set of software** (modules, compiler, packages, libraries, etc),
+* a defined **data life cycle management** including the same **data storage** or set of them,
+* and **access rights** to project data.
 
 The used set of software within an HPC project can be management with environments on different
 levels either defined by [modules](../software/modules.md), [containers](../software/containers.md)
@@ -19,28 +19,26 @@ The main concept of working with data on ZIH systems bases on [Workspaces](works
 properly:
 
   * use a `/home` directory for the limited amount of personal data, simple examples and the results
-    of calculations. The home directory is not a working directory! However, `/home` file system is
+    of calculations. The home directory is not a working directory! However, `/home` filesystem is
     [backed up](#backup) using snapshots;
   * use `workspaces` as a place for working data (i.e. datasets); Recommendations of choosing the
     correct storage system for workspace presented below.
 
-### Taxonomy of File Systems
+### Taxonomy of Filesystems
 
 It is important to design your data workflow according to characteristics, like I/O footprint
 (bandwidth/IOPS) of the application, size of the data, (number of files,) and duration of the
-storage to efficiently use the provided storage and file systems.
-The page [file systems](file_systems.md) holds a comprehensive documentation on the different file
-systems.
-<!--In general, the mechanisms of
-so-called--> <!--[Workspaces](workspaces.md) are compulsory for all HPC users to store data for a
-defined duration ---> <!--depending on the requirements and the storage system this time span might
-range from days to a few--> <!--years.-->
-<!--- [HPC file systems](file_systems.md)-->
-<!--- [Intermediate Archive](intermediate_archive.md)-->
-<!--- [Special data containers] **todo** Special data containers (was no valid link in old compendium)-->
-<!--- [Move data between file systems](../data_transfer/data_mover.md)-->
-<!--- [Move data to/from ZIH's file systems](../data_transfer/export_nodes.md)-->
-<!--- [Longterm Preservation for ResearchData](preservation_research_data.md)-->
+storage to efficiently use the provided storage and filesystems.
+The page [filesystems](file_systems.md) holds comprehensive documentation on the different
+filesystems.
+<!--In general, the mechanisms of so-called [Workspaces](workspaces.md) are compulsory for all HPC
+users to store data for a defined duration - depending on the requirements and the storage system
+this time span might range from days to a few years.-->
+<!--- [HPC filesystems](file_systems.md)-->
+<!--- [Intermediate Archive](intermediate_archive.md)-->
+<!--- [Special data containers] **todo** Special data containers (was no valid link in old compendium)-->
+<!--- [Move data between filesystems](../data_transfer/data_mover.md)-->
+<!--- [Move data to/from ZIH's filesystems](../data_transfer/export_nodes.md)-->
+<!--- [Longterm Preservation for ResearchData](preservation_research_data.md)-->
 
 !!! hint "Recommendations to choose of storage system"
 
@@ -48,7 +46,7 @@ range from days to a few--> <!--years.-->
       [warm_archive](file_systems.md#warm_archive) can be used.
       (Note that this is mounted **read-only** on the compute nodes).
     * For a series of calculations that works on the same data please use a `scratch` based [workspace](workspaces.md).
-    * **SSD**, in its turn, is the fastest available file system made only for large parallel
+    * **SSD**, in its turn, is the fastest available filesystem made only for large parallel
       applications running with millions of small I/O (input, output operations).
     * If the batch job needs a directory for temporary data then **SSD** is a good choice as well.
       The data can be deleted afterwards.
@@ -60,17 +58,17 @@ otherwise it could vanish. The core data of your project should be [backed up](#
 ### Backup
 
 The backup is a crucial part of any project. Organize it at the beginning of the project. The
-backup mechanism on ZIH systems covers **only** the `/home` and `/projects` file systems. Backed up
+backup mechanism on ZIH systems covers **only** the `/home` and `/projects` filesystems. Backed up
 files can be restored directly by the users. Details can be found
 [here](file_systems.md#backup-and-snapshots-of-the-file-system).
 
 !!! warning
 
-    If you accidentally delete your data in the "no backup" file systems it **can not be restored**!
+    If you accidentally delete your data in the "no backup" filesystems it **can not be restored**!
 
 ### Folder Structure and Organizing Data
 
-Organizing of living data using the file system helps for consistency and structuredness of the
+Organizing of living data using the filesystem helps for consistency and structuredness of the
 project. We recommend following the rules for your work regarding:
 
   * Organizing the data: Never change the original data; Automatize the organizing the data; Clearly
@@ -130,7 +128,7 @@ you don’t need throughout its life cycle.
 
 <!--## Software Packages-->
 
-<!--As was written before the module concept is the basic concept for using software on Taurus.-->
+<!--As was written before the module concept is the basic concept for using software on ZIH systems.-->
 <!--Uniformity of the project has to be achieved by using the same set of software on different levels.-->
 <!--It could be done by using environments. There are two types of environments should be distinguished:-->
 <!--runtime environment (the project level, use scripts to load [modules]**todo link**), Python virtual-->
@@ -144,16 +142,16 @@ you don’t need throughout its life cycle.
 
 <!--### Python Virtual Environment-->
 
-<!--If you are working with the Python then it is crucial to use the virtual environment on Taurus. The-->
+<!--If you are working with the Python then it is crucial to use the virtual environment on ZIH Systems. The-->
 <!--main purpose of Python virtual environments (don't mess with the software environment for modules)-->
 <!--is to create an isolated environment for Python projects (self-contained directory tree that-->
 <!--contains a Python installation for a particular version of Python, plus a number of additional-->
 <!--packages).-->
 
 <!--**Vitualenv (venv)** is a standard Python tool to create isolated Python environments. We-->
-<!--recommend using venv to work with Tensorflow and Pytorch on Taurus. It has been integrated into the-->
+<!--recommend using venv to work with Tensorflow and Pytorch on ZIH systems. It has been integrated into the-->
 <!--standard library under the [venv module]**todo link**. **Conda** is the second way to use a virtual-->
-<!--environment on the Taurus. Conda is an open-source package management system and environment-->
+<!--environment on the ZIH systems. Conda is an open-source package management system and environment-->
 <!--management system from the Anaconda.-->
 
 <!--[Detailed information]**todo link** about using the virtual environment.-->
@@ -168,9 +166,10 @@ you don’t need throughout its life cycle.
 
 The concept of **permissions** and **ownership** is crucial in Linux. See the
 [HPC-introduction]**todo link** slides for the understanding of the main concept. Standard Linux
-changing permission command (i.e `chmod`) valid for Taurus as well. The **group** access level
+commands for changing permissions (i.e. `chmod`) are valid on ZIH systems as well. The **group** access level
 contains members of your project group. Be careful with 'write' permission and never allow to change
 the original data.
 
-Useful links: [Data Management]**todo link**, [File Systems]**todo link**, [Get Started with
-HPC-DA]**todo link**, [Project Management]**todo link**, [Preservation research data[**todo link**
+Useful links: [Data Management]**todo link**, [Filesystems]**todo link**, [Get Started with
+HPC Data Analytics]**todo link**, [Project Management]**todo link**, [Preservation Research
+Data]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
index 98e64e7f56c81b811e5455d785239a40d340ced5..14d7fc3e5e74819d568410340825934cb55d9960 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
@@ -1,69 +1,96 @@
-# Permanent File Systems
+# Permanent Filesystems
 
-## Global /home File System
+!!! hint
+
+    Do not use permanent filesystems as work directories:
+
+    - Even temporary files are kept in the snapshots and in the backup tapes over a long time,
+      senselessly filling the disks.
+    - By the sheer number and volume of work files, they may keep the backup from working
+      efficiently.
+
+## Global /home Filesystem
+
+Each user has 50 GiB in a `/home` directory independent of the granted capacity for the project.
+The home directory is mounted with read-write permissions on all nodes of the ZIH system.
 
-Each user has 50 GB in a `/home` directory independent of the granted capacity for the project.
 Hints for the usage of the global home directory:
 
-- Do not use your `/home` as work directory: Frequent changes (like temporary output from a
-  running job) would fill snapshots and backups (see below).
 - If you need distinct `.bashrc` files for each machine, you should
   create separate files for them, named `.bashrc_<machine_name>`
-- Further, you may use private module files to simplify the process of
-  loading the right installation directories, see
-  **todo link: private modules - AnchorPrivateModule**.
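+
+For the machine-specific `.bashrc` files mentioned above, a minimal sketch is to source them from
+your `~/.bashrc`, assuming `<machine_name>` corresponds to the short hostname of the system:
+
+```bash
+# Sketch: source a machine-specific configuration file if it exists
+machine_rc="${HOME}/.bashrc_$(hostname -s)"
+if [ -r "${machine_rc}" ]; then
+    source "${machine_rc}"
+fi
+```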
 
-## Global /projects File System
+If a user exceeds their quota (total size OR total number of files), they cannot
+submit jobs into the batch system. Running jobs are not affected.
+
+!!! note
+
+    We have no feasible way to get the contribution of a single user to a project's disk usage.
+
+## Global /projects Filesystem
 
 For project data, we have a global project directory, that allows better collaboration between the
-members of an HPC project. However, for compute nodes /projects is mounted as read-only, because it
-is not a filesystem for parallel I/O.
-
-## Backup and Snapshots of the File System
-
-- Backup is **only** available in the `/home` and the `/projects` file systems!
-- Files are backed up using snapshots of the NFS server and can be restored by the user
-- A changed file can always be recovered as it was at the time of the snapshot
-- Snapshots are taken:
-  - From Monday through Saturday between 06:00 and 18:00 every two hours and kept for one day
-    (7 snapshots)
-  - From Monday through Saturday at 23:30 and kept for two weeks (12 snapshots)
-  - Every Sunday st 23:45 and kept for 26 weeks
-- To restore a previous version of a file:
-  - Go into the directory of the file you want to restore
-  - Run `cd .snapshot` (this subdirectory exists in every directory on the `/home` file system
-    although it is not visible with `ls -a`)
-  - In the .snapshot-directory are all available snapshots listed
-  - Just `cd` into the directory of the point in time you wish to restore and copy the file you
-    wish to restore to where you want it
-  - **Attention** The `.snapshot` directory is not only hidden from normal view (`ls -a`), it is
-    also embedded in a different directory structure. An `ls ../..` will not list the directory
-    where you came from. Thus, we recommend to copy the file from the location where it
-    originally resided:
-    `pwd /home/username/directory_a % cp .snapshot/timestamp/lostfile lostfile.backup`
-- `/home` and `/projects/` are definitely NOT made as a work directory:
-  since all files are kept in the snapshots and in the backup tapes over a long time, they
-  - Senseless fill the disks and
-  - Prevent the backup process by their sheer number and volume from working efficiently.
-
-## Group Quotas for the File System
-
-The quotas of the home file system are meant to help the users to keep in touch with their data.
+members of an HPC project.
+Typically, all members of the project have read/write access to that directory.
+It can only be written to on the login and export nodes.
+
+!!! note
+
+    On compute nodes, `/projects` is mounted as read-only, because it must not be used as
+    a work directory or for heavy I/O.
+
+## Snapshots
+
+A changed file can always be recovered as it was at the time of the snapshot.
+These snapshots are taken (subject to change):
+
+- from Monday through Saturday between 06:00 and 18:00 every two hours and kept for one day
+  (7 snapshots)
+- from Monday through Saturday at 23:30 and kept for two weeks (12 snapshots)
+- every Sunday at 23:45 and kept for 26 weeks.
+
+To restore a previous version of a file:
+
+1. Go to the parent directory of the file you want to restore.
+1. Run `cd .snapshot` (this subdirectory exists in every directory on the `/home` filesystem
+  although it is not visible with `ls -a`).
+1. List the snapshots with `ls -l`.
+1. Just `cd` into the directory of the point in time you wish to restore and copy the file you
+  wish to restore to where you want it (see the example below).
+
+!!! note
+
+    The `.snapshot` directory is embedded in a different directory structure. An `ls ../..` will not
+    show the directory where you came from. Thus, for your `cp`, you should *use an absolute path*
+    as destination.
+
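+A restore session could look like this (a minimal sketch; the directory name and snapshot
+timestamp are placeholders):
+
+```console
+marie@login$ cd /home/marie/directory_a/.snapshot
+marie@login$ ls -l
+marie@login$ cd <timestamp>
+marie@login$ cp lostfile /home/marie/directory_a/lostfile.backup
+```
+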
+## Backup
+
+Just for the eventuality of a major filesystem crash, we keep tape-based backups of our
+permanent filesystems for 180 days.
+
+## Quotas
+
+The quotas of the permanent filesystems are meant to help users keep only data that is necessary.
 Especially in HPC, it happens that millions of temporary files are created within hours. This is the
-main reason for performance degradation of the file system. If a project exceeds its quota (total
-size OR total number of files) it cannot submit jobs into the batch system. The following commands
-can be used for monitoring:
+main reason for performance degradation of the filesystem.
+
+!!! note
+
+    If a quota (total size OR total number of files) is exceeded for the project or the home
+    directory, job submission is forbidden. Running jobs are not affected.
+
+The following commands can be used for monitoring:
 
-- `showquota` shows your projects' usage of the file system.
-- `quota -s -f /home` shows the user's usage of the file system.
+- `showquota` shows your projects' usage of the filesystem.
+- `quota -s -f /home` shows the user's usage of the filesystem.
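+
+For example (a sketch; the output depends on your project and is omitted here):
+
+```console
+marie@login$ showquota
+marie@login$ quota -s -f /home
+```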
 
-In case a project is above it's limits please ...
+In case a quota is above its limits:
 
-- Remove core dumps, temporary data
-- Talk with your colleagues to identify the hotspots,
-- Check your workflow and use /tmp or the scratch file systems for temporary files
+- Remove core dumps and temporary data
+- Talk with your colleagues to identify unused or unnecessarily stored data
+- Check your workflow and use `/tmp` or the scratch filesystems for temporary files
 - *Systematically* handle your important data:
-  - For later use (weeks...months) at the HPC systems, build tar
-    archives with meaningful names or IDs and store e.g. them in an
-    [archive](intermediate_archive.md).
-  - Refer to the hints for [long term preservation for research data](preservation_research_data.md)
+    - For later use (weeks...months) at the ZIH systems, build and zip tar
+      archives with meaningful names or IDs and store them, e.g., in a workspace in the
+      [warm archive](warm_archive.md) or an [archive](intermediate_archive.md)
+      (see the sketch below)
+    - Refer to the hints for [long term preservation for research data](preservation_research_data.md)
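+
+For example, important results could be packed and transferred with the
+[Datamover](../data_transfer/data_mover.md) tools (a sketch; all paths and workspace names are
+placeholders):
+
+```console
+marie@login$ tar czf results.tar.gz -C /scratch/ws/marie-number-crunch/ results/
+marie@login$ dtcp results.tar.gz /warm_archive/ws/marie-archive/
+```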
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
index 24665aa573549b6290fae90523450c98fc9d9240..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
@@ -1,56 +0,0 @@
-# Quotas for the home file system
-
-The quotas of the home file system are meant to help the users to keep in touch with their data.
-Especially in HPC, millions of temporary files can be created within hours. We have identified this
-as a main reason for performance degradation of the HOME file system. To stay in operation with out
-HPC systems we regrettably have to fall back to this unpopular technique.
-
-Based on a balance between the allotted disk space and the usage over the time, reasonable quotas
-(mostly above current used space) for the projects have been defined. The will be activated by the
-end of April 2012.
-
-If a project exceeds its quota (total size OR total number of files) it cannot submit jobs into the
-batch system. Running jobs are not affected.  The following commands can be used for monitoring:
-
--   `quota -s -g` shows the file system usage of all groups the user is
-    a member of.
--   `showquota` displays a more convenient output. Use `showquota -h` to
-    read about its usage. It is not yet available on all machines but we
-    are working on it.
-
-**Please mark:** We have no quotas for the single accounts, but for the
-project as a whole. There is no feasible way to get the contribution of
-a single user to a project's disk usage.
-
-## Alternatives
-
-In case a project is above its limits, please
-
--   remove core dumps, temporary data,
--   talk with your colleagues to identify the hotspots,
--   check your workflow and use /fastfs for temporary files,
--   *systematically* handle your important data:
-    -   for later use (weeks...months) at the HPC systems, build tar
-        archives with meaningful names or IDs and store them in the
-        [DMF system](#AnchorDataMigration). Avoid using this system
-        (`/hpc_fastfs`) for files < 1 MB!
-    -   refer to the hints for
-        [long term preservation for research data](../data_lifecycle/preservation_research_data.md).
-
-## No Alternatives
-
-The current situation is this:
-
--   `/home` provides about 50 TB of disk space for all systems. Rapidly
-    changing files (temporary data) decrease the size of usable disk
-    space since we keep all files in multiple snapshots for 26 weeks. If
-    the *number* of files comes into the range of a million the backup
-    has problems handling them.
--   The work file system for the clusters is `/fastfs`. Here, we have 60
-    TB disk space (without backup). This is the file system of choice
-    for temporary data.
--   About 180 projects have to share our resources, so it makes no sense
-    at all to simply move the data from `/home` to `/fastfs` or to
-    `/hpc_fastfs`.
-
-In case of problems don't hesitate to ask for support.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
new file mode 100644
index 0000000000000000000000000000000000000000..01c6e319ea575ca971cd52bc7c9dca3f5fd85ff3
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
@@ -0,0 +1,30 @@
+# Warm Archive
+
+The warm archive is intended as a storage space for the duration of a running HPC project.
+It does **not** substitute a long-term archive, though.
+
+This storage is best suited for large files (like `tgz`s of input data or intermediate results).
+
+The hardware consists of 20 storage nodes with a net capacity of 10 PiB on spinning disks.
+We have seen a total data rate of 50 GiB/s under benchmark conditions.
+
+A project can apply for storage space in the warm archive.
+This is limited in capacity and duration.
+
+## Access
+
+### As Filesystem
+
+On ZIH systems, users can access the warm archive via [workspaces](workspaces.md).
+Although the lifetime is considerably long, please be aware that the data will be
+deleted as soon as the user's login expires.
+
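+A workspace in the warm archive can be allocated like any other workspace (a sketch, using the
+filesystem name `warm_archive`; see [workspaces](workspaces.md) for all options):
+
+```console
+marie@login$ ws_allocate -F warm_archive my_archive_data 365
+```
+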
+!!! attention
+
+    These workspaces can **only** be written to from the login or export nodes.
+    On all compute nodes, the warm archive is mounted read-only.
+
+### S3
+
+A limited S3 functionality is available.
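+
+Access with S3-compatible tools might then look like the following sketch; the endpoint URL,
+bucket name, and credentials are placeholders and have to be provided by ZIH:
+
+```console
+marie@login$ aws s3 ls s3://<bucket> --endpoint-url https://<warm-archive-s3-endpoint>
+```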
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
index 8443727ab896a13da8d76684e3524c1e21cca936..f5e217de6b34e861004b54de3fb4d6cb5004a2ce 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
@@ -1,7 +1,7 @@
 # Workspaces
 
 Storage systems differ in terms of capacity, streaming bandwidth, IOPS rate, etc. Price and
-efficiency don't allow to have it all in one. That is why fast parallel file systems at ZIH have
+efficiency don't allow to have it all in one. That is why fast parallel filesystems at ZIH have
 restrictions with regards to **age of files** and [quota](quotas.md). The mechanism of workspaces
 enables users to better manage their HPC data.
 <!--Workspaces are primarily login-related.-->
@@ -19,16 +19,16 @@ times.
 
 !!! tip
 
-    Use the faster file systems if you need to write temporary data in your computations, and use
-    the capacity oriented file systems if you only need to read data for your computations. Please
+    Use the faster filesystems if you need to write temporary data in your computations, and use
+    the capacity oriented filesystems if you only need to read data for your computations. Please
     keep track of your data and move it to a capacity oriented filesystem after the end of your
     computations.
 
 ## Workspace Management
 
-### List Available File Systems
+### List Available Filesystems
 
-To list all available file systems for using workspaces use:
+To list all available filesystems for using workspaces, use:
 
 ```bash
 zih$ ws_find -l
@@ -87,7 +87,7 @@ Options:
     remaining time in days: 90
     ```
 
-This will create a workspace with the name `test-workspace` on the `/scratch` file system for 90
+This will create a workspace with the name `test-workspace` on the `/scratch` filesystem for 90
 days with an email reminder for 7 days before the expiration.
 
 !!! Note
@@ -97,15 +97,15 @@ days with an email reminder for 7 days before the expiration.
 
 ### Extention of a Workspace
 
-The lifetime of a workspace is finite. Different file systems (storage systems) have different
-maximum durations. A workspace can be extended multiple times, depending on the file system.
+The lifetime of a workspace is finite. Different filesystems (storage systems) have different
+maximum durations. A workspace can be extended multiple times, depending on the filesystem.
 
 | Storage system (use with parameter -F ) | Duration, days | Extensions | Remarks |
 |:------------------------------------------:|:----------:|:-------:|:---------------------------------------------------------------------------------------:|
-| `ssd`                                       | 30 | 10 | High-IOPS file system (`/lustre/ssd`) on SSDs.                                          |
-| `beegfs`                                     | 30 | 2 | High-IOPS file system (`/lustre/ssd`) onNVMes.                                          |
-| `scratch`                                    | 100 | 2 | Scratch file system (/scratch) with high streaming bandwidth, based on spinning disks |
-| `warm_archive`                               | 365 | 2 | Capacity file system based on spinning disks                                          |
+| `ssd`                                       | 30 | 10 | High-IOPS filesystem (`/lustre/ssd`) on SSDs.                                        |
+| `beegfs`                                     | 30 | 2 | High-IOPS filesystem (`/beegfs/global0`) on NVMes.                                    |
+| `scratch`                                    | 100 | 2 | Scratch filesystem (`/scratch`) with high streaming bandwidth, based on spinning disks |
+| `warm_archive`                               | 365 | 2 | Capacity filesystem based on spinning disks                                           |
 
 To extend your workspace use the following command:
 
@@ -128,9 +128,9 @@ my-workspace 40`, it will now expire in 40 days **not** 130 days.
 ### Deletion of a Workspace
 
 To delete a workspace use the `ws_release` command. It is mandatory to specify the name of the
-workspace and the file system in which it is located:
+workspace and the filesystem in which it is located:
 
-`ws_release -F <file system> <workspace name>`
+`ws_release -F <filesystem> <workspace name>`
 
 ### Restoring Expired Workspaces
 
@@ -141,7 +141,7 @@ warm_archive: 2 months), you can still restore your data into an existing worksp
 
     When you release a workspace **by hand**, it will not receive a grace period and be
     **permanently deleted** the **next day**. The advantage of this design is that you can create
-    and release workspaces inside jobs and not swamp the file system with data no one needs anymore
+    and release workspaces inside jobs and not swamp the filesystem with data no one needs anymore
     in the hidden directories (when workspaces are in the grace period).
 
 Use:
@@ -162,7 +162,7 @@ username prefix and timestamp suffix (otherwise, it cannot be uniquely identifie
 workspace, on the other hand, must be given with just its short name, as listed by `ws_list`,
 without the username prefix.
 
-Both workspaces must be on the same file system. The data from the old workspace will be moved into
+Both workspaces must be on the same filesystem. The data from the old workspace will be moved into
 a directory in the new workspace with the name of the old one. This means a fresh workspace works as
 well as a workspace that already contains data.
 
@@ -282,5 +282,5 @@ Avoid "iso" codepages!
 **Q**: I am getting the error `Error: target workspace does not exist!`  when trying to restore my
 workspace.
 
-**A**: The workspace you want to restore into is either not on the same file system or you used the
+**A**: The workspace you want to restore into is either not on the same filesystem or you used the
 wrong name. Use only the short name that is listed after `id:` when using `ws_list`
diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
index 9db357468d91c5c84850add970b9fc6f0d2007ad..59aa75e842e3875f99d458caec785c6bf9645a81 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
@@ -127,7 +127,7 @@ in an interactive job with:
 marie@compute$ source framework-configure.sh spark my-config-template
 ```
 
-### Using Hadoop Distributed File System (HDFS)
+### Using Hadoop Distributed File System (HDFS)
 
 If you want to use Spark and HDFS together (or in general more than one
 framework), a scheme similar to the following can be used:
@@ -214,7 +214,7 @@ for convenience: [SparkExample.ipynb](misc/SparkExample.ipynb)
 !!! note
 
     You could work with simple examples in your home directory but according to the
-    [storage concept](../data_lifecycle/hpc_storage_concept2019.md)
+    [storage concept](../data_lifecycle/overview.md)
     **please use [workspaces](../data_lifecycle/workspaces.md) for
     your study and work projects**. For this reason, you have to use
     advanced options of Jupyterhub and put "/" in "Workspace scope" field.
diff --git a/doc.zih.tu-dresden.de/docs/software/dask.md b/doc.zih.tu-dresden.de/docs/software/dask.md
index d6f7d087e8f39fb884a85834f807a4a91d236216..316aefe2395e077bec611fdbd0c080cce2af1940 100644
--- a/doc.zih.tu-dresden.de/docs/software/dask.md
+++ b/doc.zih.tu-dresden.de/docs/software/dask.md
@@ -49,7 +49,7 @@ Create a conda virtual environment. We would recommend using a workspace. See th
 
 **Note:** You could work with simple examples in your home directory (where you are loading by
 default). However, in accordance with the
-[HPC storage concept](../data_lifecycle/hpc_storage_concept2019.md) please use a
+[HPC storage concept](../data_lifecycle/overview.md) please use
 [workspaces](../data_lifecycle/workspaces.md) for your study and work projects.
 
 ```Bash
diff --git a/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
index ac90455f91a13a74023d9e767aa9f7bce538cf69..850493f6d4a86b6d3220b03bf17a445dc2061979 100644
--- a/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
+++ b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
@@ -68,7 +68,7 @@ details check the [login page](../access/ssh_login.md).
 As soon as you have access to HPC-DA you have to manage your data. The main method of working with
 data on Taurus is using Workspaces.  You could work with simple examples in your home directory
 (where you are loading by default). However, in accordance with the
-[storage concept](../data_lifecycle/hpc_storage_concept2019.md)
+[storage concept](../data_lifecycle/overview.md)
 **please use** a [workspace](../data_lifecycle/workspaces.md)
 for your study and work projects.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/python.md b/doc.zih.tu-dresden.de/docs/software/python.md
index 281d1fd99f175805d36fd5ba9d78776f92ea8b50..b9bde2e2324d2d413c65f1cb4a6b34d45f5225bf 100644
--- a/doc.zih.tu-dresden.de/docs/software/python.md
+++ b/doc.zih.tu-dresden.de/docs/software/python.md
@@ -17,7 +17,7 @@ There are three main options on how to work with Keras and Tensorflow on the HPC
 the [Modules system](modules.md) and Python virtual environment.
 
 Note: You could work with simple examples in your home directory but according to
-[HPCStorageConcept2019](../data_lifecycle/hpc_storage_concept2019.md) please use **workspaces**
+[storage concept](../data_lifecycle/overview.md) please use **workspaces**
 for your study and work projects.
 
 ## Virtual environment
diff --git a/doc.zih.tu-dresden.de/docs/software/pytorch.md b/doc.zih.tu-dresden.de/docs/software/pytorch.md
index 043320376fe184f7477b19b37f0f39625d8424a9..cd476d7296e271e6f7eecf3a84b4af1f80c4ee84 100644
--- a/doc.zih.tu-dresden.de/docs/software/pytorch.md
+++ b/doc.zih.tu-dresden.de/docs/software/pytorch.md
@@ -118,7 +118,7 @@ Please put the file into your previously created virtual environment in your wor
 use the kernel for your notebook [see Jupyterhub page](../access/jupyterhub.md).
 
 Note: You could work with simple examples in your home directory but according to
-[HPCStorageConcept2019](../data_lifecycle/hpc_storage_concept2019.md) please use **workspaces**
+[storage concept](../data_lifecycle/overview.md) please use **workspaces**
 for your study and work projects.
 For this reason, you have to use advanced options of Jupyterhub and put "/" in "Workspace scope" field.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md b/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md
index a8dee14a25a9e7c82ed1977ad3e573defd4e791a..e011dfd2dc35d7dc5ef1576d7a5dbefa5d52f6d4 100644
--- a/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensorflow_on_jupyter_notebook.md
@@ -16,7 +16,7 @@ with HPC or Linux. \</span>
 
 **Prerequisites:** To work with Tensorflow and jupyter notebook you need
 \<a href="Login" target="\_blank">access\</a> for the Taurus system and
-basic knowledge about Python, SLURM system and the Jupyter notebook.
+basic knowledge about Python, the Slurm system, and the Jupyter notebook.
 
 \<span style="font-size: 1em;"> **This page aims** to introduce users on
 how to start working with TensorFlow on the [HPCDA](../jobs_and_resources/hpcda.md) system - part
@@ -168,7 +168,7 @@ into your previously created virtual environment in your working
 directory or use the kernel for your notebook.
 
 Note: You could work with simple examples in your home directory but according to
-[new storage concept](../data_lifecycle/hpc_storage_concept2019.md) please use
+[storage concept](../data_lifecycle/overview.md) please use
 [workspaces](../data_lifecycle/workspaces.md) for your study and work projects**.
 For this reason, you have to use advanced options and put "/" in "Workspace scope" field.
 
diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index 2fe845d678cd08df9a9ad25cce3d88d646efd513..4874b8f291845716cd3114af5e06159d84310bac 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -74,11 +74,11 @@ nav:
       - Overview: data_lifecycle/file_systems.md
       - Permanent File Systems: data_lifecycle/permanent.md
       - Lustre: data_lifecycle/lustre.md
-      - BeeGFS: data_lifecycle/bee_gfs.md
+      - BeeGFS: data_lifecycle/beegfs.md
+      - Warm Archive: data_lifecycle/warm_archive.md
       - Intermediate Archive: data_lifecycle/intermediate_archive.md
       - Quotas: data_lifecycle/quotas.md
     - Workspaces: data_lifecycle/workspaces.md
-    - HPC Storage Concept 2019: data_lifecycle/hpc_storage_concept2019.md
     - Preservation of Research Data: data_lifecycle/preservation_research_data.md
     - Structuring Experiments: data_lifecycle/experiments.md
   - Jobs and Resources:
@@ -116,6 +116,7 @@ nav:
     - No IB Jobs: archive/no_ib_jobs.md
     - Phase2 Migration: archive/phase2_migration.md
     - Platform LSF: archive/platform_lsf.md
+    - BeeGFS on Demand: archive/beegfs_on_demand.md
     - Switched-Off Systems:
       - Overview: archive/systems_switched_off.md
       - From Deimos to Atlas: archive/migrate_to_atlas.md
diff --git a/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh b/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
index b6d586220052a2bf362aec3c4736c876e4901da6..aa20c5a06de665a4420d8c6d41061ee0d6459015 100755
--- a/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
+++ b/doc.zih.tu-dresden.de/util/grep-forbidden-words.sh
@@ -37,6 +37,7 @@ function usage () {
   echo ""
   echo "Options:"
   echo "  -a     Search in all markdown files (default: git-changed files)" 
+  echo "  -f     Search in a specific markdown file" 
   echo "  -s     Silent mode"
   echo "  -h     Show help message"
 }
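+
+# Example invocations (the markdown path below is only an illustration):
+#   doc.zih.tu-dresden.de/util/grep-forbidden-words.sh -f doc.zih.tu-dresden.de/docs/software/python.md
+#   doc.zih.tu-dresden.de/util/grep-forbidden-words.sh -a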
@@ -44,11 +45,16 @@ function usage () {
 # Options
 all_files=false
 silent=false
-while getopts ":ahs" option; do
+file=""
+while getopts ":ahsf:" option; do
  case $option in
    a)
      all_files=true
      ;;
+   f)
+     # getopts stores the option's argument in $OPTARG
+     file=$OPTARG
+     ;;
    s)
      silent=true
      ;;
@@ -67,11 +73,14 @@ branch="origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME:-preview}"
 if [ $all_files = true ]; then
   echo "Search in all markdown files."
   files=$(git ls-tree --full-tree -r --name-only HEAD $basedir/docs/ | grep .md)
+elif [[ -n "$file" ]]; then
+  files="$file"
 else
   echo "Search in git-changed files."
   files=`git diff --name-only "$(git merge-base HEAD "$branch")"`
 fi
 
+echo "Checking files: $files"
 cnt=0
 for f in $files; do
   if [ "$f" != doc.zih.tu-dresden.de/README.md -a "${f: -3}" == ".md" -a -f "$f" ]; then
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 85d636de2a499289feb5e545d0e158785537e2ff..30eaee21e2befa638eefe67e87a591f7dbc6c708 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -23,6 +23,7 @@ fastfs
 FFT
 FFTW
 filesystem
+filesystems
 Filesystem
 Flink
 Fortran
@@ -43,6 +44,7 @@ icpc
 ifort
 ImageNet
 Infiniband
+inode
 Itanium
 jpg
 Jupyter
@@ -57,6 +59,7 @@ lsf
 LSF
 lustre
 MEGWARE
+MiB
 MIMD
 MKL
 Montecito
@@ -88,10 +91,12 @@ PAPI
 parallelization
 pdf
 Perf
+PiB
 Pika
 pipelining
 png
 Pthreads
+reachability
 rome
 romeo
 RSA
@@ -100,6 +105,7 @@ Saxonid
 sbatch
 ScaDS
 ScaLAPACK
+scalable
 Scalasca
 scancel
 scontrol
@@ -112,7 +118,7 @@ SHMEM
 SLES
 Slurm
 SMP
+queue
 squeue
 srun
 ssd
 SSD
@@ -131,5 +137,6 @@ Vampir
 VampirTrace
 VampirTrace's
 WebVNC
+workspaces
 Xeon
 ZIH