diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index 8ff1e9564df0f6ef2762d1cdd0065f8863b4264e..509fde615fecee33e172d521fd989163e6837a13 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -4,7 +4,6 @@ As soon as you have access to ZIH systems you have to manage your data. Several
 available. Each file system serves a special purpose according to its respective capacity,
 performance, and permanence.
 
-
 ## Work Directories
 
 | File system | Usable directory  | Capacity | Availability | Backup | Remarks                                                                                                                                                         |
@@ -12,8 +11,7 @@ performance and permanence.
 | `Lustre`    | `/scratch/`       | 4 PB     | global       | No     | Only accessible via **todo link: workspaces - WorkSpaces**. Not made for billions of files!                                                                                   |
 | `Lustre`    | `/lustre/ssd`     | 40 TB    | global       | No     | Only accessible via **todo link: workspaces - WorkSpaces**. For small I/O operations                                                                                          |
 | `BeeGFS`    | `/beegfs/global0` | 232 TB   | global       | No     | Only accessible via **todo link: workspaces - WorkSpaces**. Fastest available file system, only for large parallel applications running with millions of small I/O operations |
-| `ext4`      | `/tmp`            | 95.0 GB  | local        | No     | is cleaned up after the job automatically                                                                                                                       |
-
+| `ext4`      | `/tmp`            | 95.0 GB  | local        | No     | Cleaned up automatically after the job  |
 
 ## Warm Archive
 
@@ -78,8 +76,6 @@ output.
 We do **not recommend** using the `du` command for this purpose, as it can cause issues for
 other users while reading data from the file system.
 
-
-
 ### BeeGFS
 
 Commands to work with the BeeGFS file system.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
index 88891bf92213c011a0dda088632581a164660330..a666ba6a32221fc5476963e2e50a4c5238156d74 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/lustre.md
@@ -1,8 +1,6 @@
 # Lustre File System(s)
 
-
-
-### Large Files in /scratch
+## Large Files in /scratch
 
 The data containers in Lustre are called object storage targets (OSTs). The capacity of one OST is
 about 21 TB. All files are striped over a certain number of these OSTs. For small and medium files,
@@ -20,8 +18,8 @@ lfs setstripe -c 20  /scratch/ws/mark-stripe20/tar
 **Note:** This does not affect existing files, but all files that **will be created** in this
 directory will be distributed over 20 OSTs.
 
-
 ## Useful Commands for Lustre
+
 These commands work for `/scratch` and `/lustre/ssd`.
 
 ### Listing Disk Usages per OST and MDT
@@ -57,5 +55,4 @@ lfs getstripe myfile
 lfs getstripe -d mydirectory
 ```
 
-The `-d`-parameter will also display striping for all files in the directory
-
+The `-d` parameter will also display the striping for all files in the directory.
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
index 4c1f01ba6d45db436e9279b94b4961d2597fb639..98e64e7f56c81b811e5455d785239a40d340ced5 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/permanent.md
@@ -5,12 +5,10 @@
 Each user has 50 GB in a `/home` directory, independent of the capacity granted for the project.
 Hints for using the global home directory:
 
+- Do not use your `/home` as a work directory: frequent changes (like temporary output from a
+  running job) would fill snapshots and backups (see below).
 - If you need distinct `.bashrc` files for each machine, you should create separate files for
   them, named `.bashrc_<machine_name>` (see the sketch after this list).
-- If you use various machines frequently, it might be useful to set
-  the environment variable HISTFILE in `.bashrc_deimos` and
-  `.bashrc_mars` to `$HOME/.bash_history_<machine_name>`. Setting
-  HISTSIZE and HISTFILESIZE to 10000 helps as well.
 - Further, you may use private module files to simplify the process of
   loading the right installation directories, see
   **todo link: private modules - AnchorPrivateModule**.
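+
+A minimal sketch for dispatching to such machine-specific files from a shared `.bashrc`,
+assuming the machine name can be derived from the short hostname:
+
+```
+# In ~/.bashrc: source the machine-specific part, if present
+machine_rc="${HOME}/.bashrc_$(hostname -s)"
+[ -r "${machine_rc}" ] && source "${machine_rc}"
+```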
@@ -19,8 +17,7 @@ Hints for the usage of the global home directory:
 
 For project data, we have a global project directory that allows better collaboration between the
 members of an HPC project. However, on compute nodes `/projects` is mounted read-only, because it
-is not a filesystem for parallel I/O. See below and also check the
-**todo link: HPC introduction - %PUBURL%/Compendium/WebHome/HPC-Introduction.pdf** for more details.
+is not a file system for parallel I/O.
 
 ## Backup and Snapshots of the File System
 
 In case a project is above its limits, please ...
     archives with meaningful names or IDs and store them, e.g., in an
     [archive](intermediate_archive.md) (see the packing sketch below).
   - Refer to the hints for [long term preservation for research data](preservation_research_data.md).
-  
\ No newline at end of file
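+
+A minimal sketch for packing data into such an archive before moving it (all paths and names
+are examples):
+
+```
+# Pack a finished run into one archive with a meaningful name
+tar czf myproject_run42.tar.gz /scratch/ws/mark-myproject/run42/
+```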