diff --git a/doc.zih.tu-dresden.de/docs/data_management/FileSystems.md b/doc.zih.tu-dresden.de/docs/data_management/FileSystems.md
index 3b6dbafd3f8d13108bbd1e38374c0361c376e4bb..5c54797a48eae3f74c8d7435318ba6f16a5d6eef 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/FileSystems.md
+++ b/doc.zih.tu-dresden.de/docs/data_management/FileSystems.md
@@ -15,20 +15,18 @@ directory:
     `.bashrc_mars` to `$HOME/.bash_history_<machine_name>`. Setting
    `HISTSIZE` and `HISTFILESIZE` to 10000 helps as well.
 -   Further, you may use private module files to simplify the process of
-    loading the right installation directories, see [private
-    modules](#AnchorPrivateModule).
+    loading the right installation directories, see
+    **todo link: private modules - AnchorPrivateModule**.
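+
+A minimal sketch of the per-machine history setup from the list above
+(deriving `<machine_name>` via `hostname -s` is an assumption):
+
+```bash
+# in $HOME/.bashrc: keep a separate Bash history file per machine
+machine_name=$(hostname -s)
+export HISTFILE="$HOME/.bash_history_${machine_name}"
+export HISTSIZE=10000
+export HISTFILESIZE=10000
+```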
 
 ### Global /projects file system
 
 For project data, we have a global project directory that allows better
 collaboration between the members of an HPC project. However, on the
 compute nodes, /projects is mounted read-only because it is not a
-filesystem for parallel I/O. See below and also check the [HPC
-introduction](%PUBURL%/Compendium/WebHome/HPC-Introduction.pdf) for more
+filesystem for parallel I/O. See below and also check the
+**todo link: HPC introduction - %PUBURL%/Compendium/WebHome/HPC-Introduction.pdf** for more
 details.
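+
+Because `/projects` is read-only on the compute nodes, input data is
+typically staged into a workspace on `/scratch` at the beginning of a
+job. A minimal sketch, assuming a Slurm batch job and placeholder
+workspace and project paths:
+
+```bash
+#!/bin/bash
+#SBATCH --time=01:00:00
+#SBATCH --ntasks=1
+
+# /projects is readable, but not writable, on the compute nodes:
+# stage the input data into a previously allocated workspace on /scratch
+WORKSPACE=/scratch/ws/myuser-myrun          # placeholder path
+cp -r /projects/p_myproject/input "$WORKSPACE"/
+
+# run the application and write all output to the workspace
+cd "$WORKSPACE"
+./my_application input/ > output.log
+```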
 
-#AnchorBackup
-
 ### Backup and snapshots of the file system
 
 -   Backup is **only** available in the `/home` and the `/projects` file
@@ -87,17 +85,17 @@ In case a project is above its limits please...
 -   *systematically* handle your important data:
     -   For later use (weeks...months) at the HPC systems, build tar
         archives with meaningful names or IDs and store them, e.g., in an
-        [archive](IntermediateArchive).
+        [archive](IntermediateArchive.md), as sketched below.
     -   Refer to the hints for [long-term preservation of research
-        data](PreservationResearchData).
+        data](PreservationResearchData.md).
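+
+A minimal sketch of building such a tar archive (file and archive names
+are placeholders):
+
+```bash
+# pack the results into a compressed tar archive with a meaningful name
+tar czf simulation_run42_2021-06.tar.gz results/run42/
+
+# list the archive contents to verify it before deleting the originals
+tar tzf simulation_run42_2021-06.tar.gz
+```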
 
 ## Work directories
 
 | File system | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
 |:------------|:------------------|:---------|:-------------|:-------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via [workspaces](WorkSpaces). Not made for billions of files!                                                                                   |
-| `Lustre`    | `/lustre/ssd`     | 40 TB    | global       | No     | Only accessible via [workspaces](WorkSpaces). For small I/O operations                                                                                          |
-| `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via [workspaces](WorkSpaces). Fastest available file system, only for large parallel applications running with millions of small I/O operations |
+| `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via **todo link: workspaces - WorkSpaces**. Not made for billions of files!                                                                                   |
+| `Lustre`    | `/lustre/ssd`     | 40 TB    | global       | No     | Only accessible via **todo link: workspaces - WorkSpaces**. For small I/O operations                                                                                          |
+| `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via **todo link: workspaces - WorkSpaces**. Fastest available file system, only for large parallel applications running with millions of small I/O operations |
 | `ext4`      | `/tmp`            | 95.0 GB  | local        | No     | Is cleaned up automatically after the job                                                                                                                       |
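+
+The three global work file systems are only accessible via workspaces.
+A minimal sketch, assuming the standard workspace tools (`ws_allocate`,
+`ws_list`, `ws_release`) described on the workspaces page:
+
+```bash
+# allocate a workspace named "my_run" for 30 days on the default file system
+ws_allocate my_run 30
+
+# list your current workspaces and their expiration dates
+ws_list
+
+# release the workspace once the data has been saved elsewhere
+ws_release my_run
+```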
 
 ### Large files in /scratch
@@ -120,6 +118,10 @@ number in this directory with:
 affect existing files. But all files that **will be created** in this
 directory will be distributed over 20 OSTs.
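+
+A minimal sketch of applying the same striping to a directory of your
+own (the path is a placeholder):
+
+```bash
+# newly created files in this directory will be striped over 20 OSTs;
+# existing files keep their current layout
+lfs setstripe -c 20 /scratch/ws/myuser-large_files
+```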
 
+## Warm archive
+
+TODO
+
 ## Recommendations for file system usage
 
 To work as efficiently as possible, consider the following points