diff --git a/doc.zih.tu-dresden.de/docs/access.md b/doc.zih.tu-dresden.de/docs/access/access.md
similarity index 88%
rename from doc.zih.tu-dresden.de/docs/access.md
rename to doc.zih.tu-dresden.de/docs/access/access.md
index 8ede54bfe553330035aedb5fc44ba89135ede184..15ff4d4200220426640888c8a39baf1605f0d3a6 100644
--- a/doc.zih.tu-dresden.de/docs/access.md
+++ b/doc.zih.tu-dresden.de/docs/access/access.md
@@ -17,5 +17,5 @@ during the access procedure. Accept the host verifying and enter your password.
 by login nodes in your Taurus home directory.  This method requires two conditions: Linux OS,
 workstation within the campus network. For other options and details check the Login page.
 
-Useful links: [Access]**todo link**, [Project Request Form](application/RequestForResources.md),
+Useful links: [Access]**todo link**, [Project Request Form](../application/request_for_resources.md),
 [Terms Of Use]**todo link**
diff --git a/doc.zih.tu-dresden.de/docs/access/DesktopCloudVisualization.md b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md
similarity index 90%
rename from doc.zih.tu-dresden.de/docs/access/DesktopCloudVisualization.md
rename to doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md
index 2104111df87270bc6d1f81dc6f9bcf60307f55a6..39ad90b6cb5210657440df8d9acf39f9eb325d0b 100644
--- a/doc.zih.tu-dresden.de/docs/access/DesktopCloudVisualization.md
+++ b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md
@@ -1,7 +1,7 @@
 # Desktop Cloud Visualization (DCV)
 
 NICE DCV enables remote accessing OpenGL-3D-applications running on the server (taurus) using the
-server's GPUs. If you don't need GL acceleration, you might also want to try our [WebVNC](WebVNC.md)
+server's GPUs. If you don't need GL acceleration, you might also want to try our [WebVNC](web_vnc.md)
 solution.
 
 Note that with the 2017 version (and later), while there is still a separate client available, it is
@@ -12,13 +12,13 @@ https://www.nice-software.com/download/nice-dcv-2017
 ## Access with JupyterHub
 
 **todo**
-**Check out the [new documentation about virtual desktops](../software/VirtualDesktops.md).**
+**Check out the [new documentation about virtual desktops](../software/virtual_desktops.md).**
 
 Click here, to start a session on our JupyterHub:
 [https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/\~(partition\~'dcv\~cpuspertask\~'6\~gres\~'gpu\*3a1\~mempercpu\~'2583\~environment\~'production)](https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/~(partition~'dcv~cpuspertask~'6~gres~'gpu*3a1~mempercpu~'2583~environment~'test))\<br
 /> This link starts your session on the dcv partition (taurusi210\[7-8\]) with a GPU, 6 CPU cores
 and 2583 MB memory per core.  Optionally you can modify many different SLURM parameters. For this
-follow the general [JupyterHub](../software/JupyterHub.md) documentation.
+follow the general [JupyterHub](../software/jupyterhub.md) documentation.
 
 Your browser now should load into the JupyterLab application which looks like this:
 
diff --git a/doc.zih.tu-dresden.de/docs/access/Login.md b/doc.zih.tu-dresden.de/docs/access/login.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/access/Login.md
rename to doc.zih.tu-dresden.de/docs/access/login.md
index f4ab65f575ba5ef04ad50273a7fb97d3fd2f5378..9635640e9b73057af2b0eef14a7f29417c80d1b3 100644
--- a/doc.zih.tu-dresden.de/docs/access/Login.md
+++ b/doc.zih.tu-dresden.de/docs/access/login.md
@@ -76,4 +76,4 @@ A JupyterHub installation offering IPython Notebook is available under:
 
 <https://taurus.hrsk.tu-dresden.de/jupyter>
 
-See the documentation under [JupyterHub](../software/JupyterHub.md).
+See the documentation under [JupyterHub](../software/jupyterhub.md).
diff --git a/doc.zih.tu-dresden.de/docs/access/SecurityRestrictions.md b/doc.zih.tu-dresden.de/docs/access/security_restrictions.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/access/SecurityRestrictions.md
rename to doc.zih.tu-dresden.de/docs/access/security_restrictions.md
diff --git a/doc.zih.tu-dresden.de/docs/access/SSHMitPutty.md b/doc.zih.tu-dresden.de/docs/access/ssh_mit_putty.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/access/SSHMitPutty.md
rename to doc.zih.tu-dresden.de/docs/access/ssh_mit_putty.md
diff --git a/doc.zih.tu-dresden.de/docs/access/WebVNC.md b/doc.zih.tu-dresden.de/docs/access/web_vnc.md
similarity index 97%
rename from doc.zih.tu-dresden.de/docs/access/WebVNC.md
rename to doc.zih.tu-dresden.de/docs/access/web_vnc.md
index 1d24f993fe37f3342fa804d7479f9b90458e25bc..88c020b902575bb749cb39fc594f8320cf0a4627 100644
--- a/doc.zih.tu-dresden.de/docs/access/WebVNC.md
+++ b/doc.zih.tu-dresden.de/docs/access/web_vnc.md
@@ -10,9 +10,9 @@ Also, we have prepared a script that makes launching the VNC server much easier.
 
 ## Method with JupyterHub
 
-**Check out the [new documentation about virtual desktops](../software/VirtualDesktops.md).**
+**Check out the [new documentation about virtual desktops](../software/virtual_desktops.md).**
 
-The [JupyterHub](../software/JupyterHub.md) service is now able to start a VNC session based on the
+The [JupyterHub](../software/jupyterhub.md) service is now able to start a VNC session based on the
 Singularity container mentioned here.
 
 Quickstart: 1 Click here to start a session immediately: \<a
diff --git a/doc.zih.tu-dresden.de/docs/application/Access.md b/doc.zih.tu-dresden.de/docs/application/access.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/application/Access.md
rename to doc.zih.tu-dresden.de/docs/application/access.md
index b3bc9340471c02fd49153f25b8f3ed03b7d06a11..9dd1c3af0f49f9fdf086210ccb28a38d7bf5d931 100644
--- a/doc.zih.tu-dresden.de/docs/application/Access.md
+++ b/doc.zih.tu-dresden.de/docs/application/access.md
@@ -15,7 +15,7 @@ also trial accounts have to fill in the application form.)\<br />**
 It is invariably possible to apply for more/different resources. Whether additional resources are
 granted or not depends on the current allocations and on the availablility of the installed systems.
 
-The terms of use of the HPC systems are only [available in German](TermsOfUse.md) - at the
+The terms of use of the HPC systems are only [available in German](terms_of_use.md) - at the
 moment.
 
 ## Online Project Application
diff --git a/doc.zih.tu-dresden.de/docs/application/Application.md b/doc.zih.tu-dresden.de/docs/application/application.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/application/Application.md
rename to doc.zih.tu-dresden.de/docs/application/application.md
diff --git a/doc.zih.tu-dresden.de/docs/application/ProjectManagement.md b/doc.zih.tu-dresden.de/docs/application/project_management.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/application/ProjectManagement.md
rename to doc.zih.tu-dresden.de/docs/application/project_management.md
diff --git a/doc.zih.tu-dresden.de/docs/application/ProjectRequestForm.md b/doc.zih.tu-dresden.de/docs/application/project_request_form.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/application/ProjectRequestForm.md
rename to doc.zih.tu-dresden.de/docs/application/project_request_form.md
index 97074661265f296807f8d58b5498607a2b6aa34a..07ed2eeb7d86c1041c55ae541a6a175f9df45d24 100644
--- a/doc.zih.tu-dresden.de/docs/application/ProjectRequestForm.md
+++ b/doc.zih.tu-dresden.de/docs/application/project_request_form.md
@@ -45,7 +45,7 @@ general project Details.\<br />Any project have:
 <span class="twiki-macro IMAGE" type="frame" align="right"
 caption="picture 4: hardware" width="170" zoom="on
 ">%ATTACHURL%/request_step3_machines.png</span> This step inquire the
-required hardware. You can find the specifications [here](../archive/Hardware.md).
+required hardware. You can find the specifications [here](../archive/hardware.md).
 \<br />For your guidance:
 
 -   gpu => taurus
diff --git a/doc.zih.tu-dresden.de/docs/application/RequestForResources.md b/doc.zih.tu-dresden.de/docs/application/request_for_resources.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/application/RequestForResources.md
rename to doc.zih.tu-dresden.de/docs/application/request_for_resources.md
diff --git a/doc.zih.tu-dresden.de/docs/application/TermsOfUse.md b/doc.zih.tu-dresden.de/docs/application/terms_of_use.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/application/TermsOfUse.md
rename to doc.zih.tu-dresden.de/docs/application/terms_of_use.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/Hardware.md b/doc.zih.tu-dresden.de/docs/archive/Hardware.md
deleted file mode 100644
index 449a2cf644d7453fc20856a74074ab11d6f51f15..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/archive/Hardware.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Hardware
-
-Here, you can find basic information about the hardware installed at ZIH. We try to keep this list
-up-to-date.
-
-- [BULL HPC-Cluster Taurus](TaurusII.md)
-- [SGI Ultraviolet (UV)](HardwareVenus.md)
-
-Hardware hosted by ZIH:
-
-Former systems
-
-- [PC-Farm Deimos](HardwareDeimos.md)
-- [SGI Altix](HardwareAltix.md)
-- [PC-Farm Atlas](HardwareAtlas.md)
-- [PC-Cluster Triton](HardwareTriton.md)
-- [HPC-Windows-Cluster Titan](HardwareTitan.md)
diff --git a/doc.zih.tu-dresden.de/docs/archive/CXFSEndOfSupport.md b/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
similarity index 89%
rename from doc.zih.tu-dresden.de/docs/archive/CXFSEndOfSupport.md
rename to doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
index 0112e2fbf48ec2017ce463e3a2ab4a23d9ad7bcb..84e018b655f958ecb2d0a8d35982aad47a66adb2 100644
--- a/doc.zih.tu-dresden.de/docs/archive/CXFSEndOfSupport.md
+++ b/doc.zih.tu-dresden.de/docs/archive/cxfs_end_of_support.md
@@ -11,10 +11,10 @@ This file system is currently mounted at
 We kindly ask our users to remove their large data from the file system.
 Files worth keeping can be moved
 
-- to the new [Intermediate Archive](../data_management/IntermediateArchive.md) (max storage
+- to the new [Intermediate Archive](../data_lifecycle/intermediate_archive.md) (max storage
     duration: 3 years) - see
     [MigrationHints](#migration-from-cxfs-to-the-intermediate-archive) below,
+- or to the [Long-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
+- or to the [Log-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
     metadata).
 
 To run the file system without support comes with the risk of losing
diff --git a/doc.zih.tu-dresden.de/docs/archive/DebuggingTools.md b/doc.zih.tu-dresden.de/docs/archive/debugging_tools.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/DebuggingTools.md
rename to doc.zih.tu-dresden.de/docs/archive/debugging_tools.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/hardware.md b/doc.zih.tu-dresden.de/docs/archive/hardware.md
new file mode 100644
index 0000000000000000000000000000000000000000..624b9b745fcd6adb67bb8984f8d0f648c8224faf
--- /dev/null
+++ b/doc.zih.tu-dresden.de/docs/archive/hardware.md
@@ -0,0 +1,17 @@
+# Hardware
+
+Here, you can find basic information about the hardware installed at ZIH. We try to keep this list
+up-to-date.
+
+- [BULL HPC-Cluster Taurus](taurus_ii.md)
+- [SGI Ultraviolet (UV)](hardware_venus.md)
+
+Hardware hosted by ZIH:
+
+Former systems
+
+- [PC-Farm Deimos](hardware_deimos.md)
+- [SGI Altix](hardware_altix.md)
+- [PC-Farm Atlas](hardware_atlas.md)
+- [PC-Cluster Triton](hardware_triton.md)
+- [HPC-Windows-Cluster Titan](hardware_titan.md)
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md b/doc.zih.tu-dresden.de/docs/archive/hardware_altix.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_altix.md
index 7912181e49c8c113b601c419d64cd859c4163b69..202ab10bda1d8829ede7a1fc52da9bf6db292a78 100644
--- a/doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md
+++ b/doc.zih.tu-dresden.de/docs/archive/hardware_altix.md
@@ -13,7 +13,7 @@ installed at ZIH:
 |Uranus |512 |506|4 GB|
 |Neptun |128 |128 |1 GB|
 
-The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF](PlatformLSF.md)
+The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF](platform_lsf.md)
 batch system running on `mars.hrsk.tu-dresden.de`. The actual placement of a submitted job may
 depend on factors like memory size, number of processors, time limit.
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareAtlas.md b/doc.zih.tu-dresden.de/docs/archive/hardware_atlas.md
similarity index 86%
rename from doc.zih.tu-dresden.de/docs/archive/HardwareAtlas.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_atlas.md
index 184f395bcd3d8ed952ca9dcf26c64a66ed13210b..62a81ae538fcc40a1664483e1d5353b57ac3e6d1 100644
--- a/doc.zih.tu-dresden.de/docs/archive/HardwareAtlas.md
+++ b/doc.zih.tu-dresden.de/docs/archive/hardware_atlas.md
@@ -13,17 +13,17 @@ following hardware is installed:
 |nodes with 128 GB RAM | 12 |
 |nodes with 512 GB RAM | 8 |
 
-Mars and Deimos users: Please read the [migration hints](MigrateToAtlas.md).
+Mars and Deimos users: Please read the [migration hints](migrate_to_atlas.md).
 
 All nodes share the `/home` and `/fastfs` file system with our other HPC systems. Each
 node has 180 GB local disk space for scratch mounted on `/tmp` . The jobs for the compute nodes are
-scheduled by the [Platform LSF](PlatformLSF.md) batch system from the login nodes
+scheduled by the [Platform LSF](platform_lsf.md) batch system from the login nodes
 `atlas.hrsk.tu-dresden.de` .
 
 A QDR Infiniband interconnect provides the communication and I/O infrastructure for low latency /
 high throughput data traffic.
 
-Users with a login on the [SGI Altix](HardwareAltix.md) can access their home directory via NFS
+Users with a login on the [SGI Altix](hardware_altix.md) can access their home directory via NFS
 below the mount point `/hpc_work`.
 
 ## CPU AMD Opteron 6274
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareDeimos.md b/doc.zih.tu-dresden.de/docs/archive/hardware_deimos.md
similarity index 91%
rename from doc.zih.tu-dresden.de/docs/archive/HardwareDeimos.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_deimos.md
index 81a69258cc34162695b00499c7166af6daaf7b17..a426381651f2807fb9c339e104ac4b2413aaec8f 100644
--- a/doc.zih.tu-dresden.de/docs/archive/HardwareDeimos.md
+++ b/doc.zih.tu-dresden.de/docs/archive/hardware_deimos.md
@@ -16,14 +16,14 @@ installed:
 
 All nodes share a 68 TB on DDN hardware. Each node has per core 40 GB local disk space for scratch
 mounted on `/tmp` . The jobs for the compute nodes are scheduled by the
-[Platform LSF](PlatformLSF.md)
+[Platform LSF](platform_lsf.md)
 batch system from the login nodes `deimos.hrsk.tu-dresden.de` .
 
 Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the communication and
 I/O infrastructure for low latency / high throughput data traffic. An additional gigabit Ethernet
 network is used for control and service purposes.
 
-Users with a login on the [SGI Altix](HardwareAltix.md) can access their home directory via NFS
+Users with a login on the [SGI Altix](hardware_altix.md) can access their home directory via NFS
 below the mount point `/hpc_work`.
 
 ## CPU
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwarePhobos.md b/doc.zih.tu-dresden.de/docs/archive/hardware_phobos.md
similarity index 92%
rename from doc.zih.tu-dresden.de/docs/archive/HardwarePhobos.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_phobos.md
index c5ecccb5487d43f6f9e723d65b5553653c38ee88..9f70d45161fac7363e9e0828af4b788d817fc1c9 100644
--- a/doc.zih.tu-dresden.de/docs/archive/HardwarePhobos.md
+++ b/doc.zih.tu-dresden.de/docs/archive/hardware_phobos.md
@@ -13,7 +13,7 @@ the following hardware is installed:
 |RAM per node |4 GB |
 
 All nodes share a 4.4 TB SAN. Each node has additional local disk space mounted on `/scratch`. The
-jobs for the compute nodes are scheduled by a [Platform LSF](PlatformLSF.md) batch system running on
+jobs for the compute nodes are scheduled by a [Platform LSF](platform_lsf.md) batch system running on
 the login node `phobos.hrsk.tu-dresden.de`.
 
 Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the infrastructure
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareTitan.md b/doc.zih.tu-dresden.de/docs/archive/hardware_titan.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/HardwareTitan.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_titan.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareTriton.md b/doc.zih.tu-dresden.de/docs/archive/hardware_triton.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/archive/HardwareTriton.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_triton.md
index 17fd54449f8e971624cdb72e02d15da981e3a33d..646972202c2679849ce2d7c5ac866123b55e617e 100644
--- a/doc.zih.tu-dresden.de/docs/archive/HardwareTriton.md
+++ b/doc.zih.tu-dresden.de/docs/archive/hardware_triton.md
@@ -12,7 +12,7 @@ hardware is installed:
 |total peak performance |4.9 TFLOPS |
 |dual nodes |64 |
 
-The jobs for the compute nodes are scheduled by the [LoadLeveler](LoadLeveler.md) batch system from
+The jobs for the compute nodes are scheduled by the [LoadLeveler](load_leveler.md) batch system from
 the login node triton.hrsk.tu-dresden.de .
 
 ## CPU
diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareVenus.md b/doc.zih.tu-dresden.de/docs/archive/hardware_venus.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/HardwareVenus.md
rename to doc.zih.tu-dresden.de/docs/archive/hardware_venus.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/KnlNodes.md b/doc.zih.tu-dresden.de/docs/archive/knl_nodes.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/KnlNodes.md
rename to doc.zih.tu-dresden.de/docs/archive/knl_nodes.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/LoadLeveler.md b/doc.zih.tu-dresden.de/docs/archive/load_leveler.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/LoadLeveler.md
rename to doc.zih.tu-dresden.de/docs/archive/load_leveler.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/MigrateToAtlas.md b/doc.zih.tu-dresden.de/docs/archive/migrate_to_atlas.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/archive/MigrateToAtlas.md
rename to doc.zih.tu-dresden.de/docs/archive/migrate_to_atlas.md
index 688f390e874dd43587d3191559b3ed12738c46cc..e39014b5a81030eb915422cafb4ee8a19ba9bcd1 100644
--- a/doc.zih.tu-dresden.de/docs/archive/MigrateToAtlas.md
+++ b/doc.zih.tu-dresden.de/docs/archive/migrate_to_atlas.md
@@ -70,7 +70,7 @@ nodes you have to be more precise in your resource requests.
     -   Larger jobs will use just as many hosts as needed, e.g. 160
         processes will be scheduled on three hosts.
 
-For more details, please see the pages on [LSF](PlatformLSF.md).
+For more details, please see the pages on [LSF](platform_lsf.md).
 
 ## Software
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/NoIBJobs.md b/doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/NoIBJobs.md
rename to doc.zih.tu-dresden.de/docs/archive/no_ib_jobs.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/Phase2Migration.md b/doc.zih.tu-dresden.de/docs/archive/phase2_migration.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/Phase2Migration.md
rename to doc.zih.tu-dresden.de/docs/archive/phase2_migration.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/PlatformLSF.md b/doc.zih.tu-dresden.de/docs/archive/platform_lsf.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/archive/PlatformLSF.md
rename to doc.zih.tu-dresden.de/docs/archive/platform_lsf.md
index 699db5c9ba6f732514d2fad7d5070b2ffe81fdc4..1be15a0a9beb204c188b339e00c487e6ebbd5af0 100644
--- a/doc.zih.tu-dresden.de/docs/archive/PlatformLSF.md
+++ b/doc.zih.tu-dresden.de/docs/archive/platform_lsf.md
@@ -1,6 +1,6 @@
 # Platform LSF
 
-**This Page is deprecated!** The current bachsystem on Taurus is [Slurm][../jobs/Slurm.md]
+**This page is deprecated!** The current batch system on Taurus is [Slurm](../jobs_and_resources/slurm.md).
 
 The HRSK-I systems are operated with the batch system LSF running on *Mars*, *Atlas* resp..
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/RamDiskDocumentation.md b/doc.zih.tu-dresden.de/docs/archive/ram_disk_documentation.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/RamDiskDocumentation.md
rename to doc.zih.tu-dresden.de/docs/archive/ram_disk_documentation.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/SystemAltix.md b/doc.zih.tu-dresden.de/docs/archive/system_altix.md
similarity index 95%
rename from doc.zih.tu-dresden.de/docs/archive/SystemAltix.md
rename to doc.zih.tu-dresden.de/docs/archive/system_altix.md
index 504d983cf142662f2c615775d91a29ddf15b9bf5..d3ebdbbe554d5aa3f7dcda460d4831974a589744 100644
--- a/doc.zih.tu-dresden.de/docs/archive/SystemAltix.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_altix.md
@@ -3,7 +3,7 @@
 **This page is deprecated! The SGI Atlix is a former system!**
 
 The SGI Altix is shared memory system for large parallel jobs using up to 2000 cores in parallel (
-[information on the hardware](HardwareAltix.md)). It's partitions are Mars (login), Jupiter, Saturn,
+[information on the hardware](hardware_altix.md)). Its partitions are Mars (login), Jupiter, Saturn,
 Uranus, and Neptun (interactive).
 
 ## Compiling Parallel Applications
@@ -39,7 +39,7 @@ user's job. Normally a job can be submitted with these data:
 ### LSF
 
 The batch sytem on Atlas is LSF. For general information on LSF, please follow
-[this link](PlatformLSF.md).
+[this link](platform_lsf.md).
 
 ### Submission of Parallel Jobs
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/SystemAtlas.md b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/archive/SystemAtlas.md
rename to doc.zih.tu-dresden.de/docs/archive/system_atlas.md
index 59fe0111fbe1052a6b45923128369f703462ea15..859dcef7ea9a311ce9de0aacc3b8df4c52ded3a0 100644
--- a/doc.zih.tu-dresden.de/docs/archive/SystemAtlas.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_atlas.md
@@ -3,7 +3,7 @@
 **This page is deprecated! Atlas is a former system!**
 
 Atlas is a general purpose HPC cluster for jobs using 1 to 128 cores in parallel
-([Information on the hardware](HardwareAtlas.md)).
+([Information on the hardware](hardware_atlas.md)).
 
 ## Compiling Parallel Applications
 
@@ -36,7 +36,7 @@ user's job. Normally a job can be submitted with these data:
 ### LSF
 
 The batch sytem on Atlas is LSF. For general information on LSF, please follow
-[this link](PlatformLSF.md).
+[this link](platform_lsf.md).
 
 ### Submission of Parallel Jobs
 
diff --git a/doc.zih.tu-dresden.de/docs/archive/SystemVenus.md b/doc.zih.tu-dresden.de/docs/archive/system_venus.md
similarity index 94%
rename from doc.zih.tu-dresden.de/docs/archive/SystemVenus.md
rename to doc.zih.tu-dresden.de/docs/archive/system_venus.md
index 94aa24f360633717694f131dca20f3ab4b79da9c..5e9334d02d0cd68662c2d0744464798b04b0344d 100644
--- a/doc.zih.tu-dresden.de/docs/archive/SystemVenus.md
+++ b/doc.zih.tu-dresden.de/docs/archive/system_venus.md
@@ -3,7 +3,7 @@
 ## Information about the hardware
 
 Detailed information on the currect HPC hardware can be found
-[here](HardwareVenus.md).
+[here](hardware_venus.md).
 
 ## Login to the System
 
@@ -57,7 +57,7 @@ nodes with dedicated resources for the user's job. Normally a job can be submitt
 - executable and command line parameters.
 
 The batch sytem on Venus is Slurm. For general information on Slurm, please follow
-[this link](../jobs/Slurm.md).
+[this link](../jobs_and_resources/slurm.md).
 
 ### Submission of Parallel Jobs
 
@@ -78,4 +78,4 @@ so you have to compile the binaries specifically for their target.
 
 -   The large main memory on the system allows users to create ramdisks
     within their own jobs. The documentation on how to use these
-    ramdisks can be found [here](RamDiskDocumentation.md).
+    ramdisks can be found [here](ram_disk_documentation.md).
diff --git a/doc.zih.tu-dresden.de/docs/archive/TaurusII.md b/doc.zih.tu-dresden.de/docs/archive/taurus_ii.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/TaurusII.md
rename to doc.zih.tu-dresden.de/docs/archive/taurus_ii.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/UNICORERestAPI.md b/doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/UNICORERestAPI.md
rename to doc.zih.tu-dresden.de/docs/archive/unicore_rest_api.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/VampirTrace.md b/doc.zih.tu-dresden.de/docs/archive/vampir_trace.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/VampirTrace.md
rename to doc.zih.tu-dresden.de/docs/archive/vampir_trace.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/WindowsBatch.md b/doc.zih.tu-dresden.de/docs/archive/windows_batch.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/archive/WindowsBatch.md
rename to doc.zih.tu-dresden.de/docs/archive/windows_batch.md
diff --git a/doc.zih.tu-dresden.de/docs/data_management/.gitkeep b/doc.zih.tu-dresden.de/docs/data_lifecycle/.gitkeep
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_management/.gitkeep
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/.gitkeep
diff --git a/doc.zih.tu-dresden.de/docs/data_management/BeeGFS.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
similarity index 97%
rename from doc.zih.tu-dresden.de/docs/data_management/BeeGFS.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
index a881e587249e162ecf4f08303cd54af398495aef..14354286e9793d85f92f8456e733187cb826e854 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/BeeGFS.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/bee_gfs.md
@@ -46,7 +46,7 @@ who needs large and/or fast file storage
 ## Create BeeGFS file system
 
 To reserve nodes for creating BeeGFS file system you need to create a
-[batch](../jobs/Slurm.md) job
+[batch](../jobs_and_resources/slurm.md) job
 
     #!/bin/bash
     #SBATCH -p nvme
@@ -68,7 +68,7 @@ Check the status of the job with 'squeue -u \<username>'
 ## Mount BeeGFS file system
 
 You can mount BeeGFS file system on the ML partition (ppc64
-architecture) or on the Haswell [partition](../jobs/SystemTaurus.md) (x86_64
+architecture) or on the Haswell [partition](../jobs_and_resources/system_taurus.md) (x86_64
 architecture)
 
 ### Mount BeeGFS file system on the ML
diff --git a/doc.zih.tu-dresden.de/docs/data_management/DataManagement.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/data_management.md
similarity index 94%
rename from doc.zih.tu-dresden.de/docs/data_management/DataManagement.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/data_management.md
index 9e0cb733ce101fcb97391bef71ecc15e77d1706f..598fd816fcf25782b4e4c270aa75c72a79a9511f 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/DataManagement.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/data_management.md
@@ -9,7 +9,7 @@ the same **data storage** or set of them, the same **set of software** (packages
 ## Data Storage and Management
 
 As soon as you have access to Taurus you have to manage your data. The main concept of
-working with data on Taurus bases on [Workspaces](Workspaces.md). Use it properly:
+working with data on Taurus is based on [Workspaces](workspaces.md). Use them properly:
 
   * use a **/home** directory for the limited amount of personal data, simple examples and the results
     of calculations. The home directory is not a working directory! However, `/home` file system is
@@ -22,16 +22,16 @@ working with data on Taurus bases on [Workspaces](Workspaces.md). Use it properl
 To efficiently handle different types of storage systems, please design your data workflow according
 to characteristics, like I/O footprint (bandwidth/IOPS) of the application, size of the data,
 (number of files,) and duration of the storage. In general, the mechanisms of so-called
-[Workspaces](Workspaces.md) are compulsory for all HPC users to store data for a defined duration -
+[Workspaces](workspaces.md) are compulsory for all HPC users to store data for a defined duration -
 depending on the requirements and the storage system this time span might range from days to a few
 years.
 
-- [HPC file systems](FileSystems.md)
-- [Intermediate Archive](IntermediateArchive.md)
+- [HPC file systems](file_systems.md)
+- [Intermediate Archive](intermediate_archive.md)
 - [Special data containers] **todo** Special data containers (was no valid link in old compendium)
-- [Move data between file systems](../data_moving/DataMover.md)
-- [Move data to/from ZIH's file systems](../data_moving/ExportNodes.md)
-- [Longterm Preservation for ResearchData](PreservationResearchData.md)
+- [Move data between file systems](../data_transfer/data_mover.md)
+- [Move data to/from ZIH's file systems](../data_transfer/export_nodes.md)
+- [Longterm Preservation for Research Data](preservation_research_data.md)
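Since workspaces are compulsory for storing data, a brief sketch of the typical workspace life cycle may help; it assumes the standard hpc-workspace tools (`ws_allocate`, `ws_list`, `ws_extend`, `ws_release`) and uses made-up names, file systems and durations:

```Bash
# Hypothetical workspace life cycle (names, file system and durations are examples only):
ws_allocate -F scratch my_project 30     # allocate a workspace on /scratch for 30 days
ws_list                                  # list active workspaces and their expiry dates
ws_extend -F scratch my_project 30       # extend the duration (number of extensions is limited)
ws_release -F scratch my_project         # release the workspace once the data is archived
```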
 
 **Recommendations to choose of storage system:**
 
diff --git a/doc.zih.tu-dresden.de/docs/data_management/experiments.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/experiments.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_management/experiments.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/experiments.md
diff --git a/doc.zih.tu-dresden.de/docs/data_management/FileSystems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/data_management/FileSystems.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index ecd1919b5014b1dcbbf5d54cbd29a98c6485c211..9f949354a1a1a1e7b24d2bcf5aa50e15496a9348 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/FileSystems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -70,8 +70,8 @@ In case a project is above it's limits please ...
 - *Systematically* handle your important data:
   - For later use (weeks...months) at the HPC systems, build tar
     archives with meaningful names or IDs and store e.g. them in an
-    [archive](IntermediateArchive.md).
-  - Refer to the hints for [long term preservation for research data](PreservationResearchData.md).
+    [archive](intermediate_archive.md).
+  - Refer to the hints for [long term preservation for research data](preservation_research_data.md).
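As a minimal illustration of the tar-archive hint above (paths and archive names are made up):

```Bash
# Pack results under a meaningful name before moving them to the archive
# (illustrative paths only):
tar czf proj42_simulation_2021-06.tar.gz /scratch/ws/0/myuser-proj42/results/
```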
 
 ## Work Directories
 
diff --git a/doc.zih.tu-dresden.de/docs/data_management/HPCStorageConcept2019.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/data_management/HPCStorageConcept2019.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
index 0618822c29cedbbd889a0089d31b11be43f1857e..998699215481e1318a3b5aa036eac8b56fa7d94e 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/HPCStorageConcept2019.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/hpc_storage_concept2019.md
@@ -30,7 +30,7 @@ validity of a workspace can be extended twice. \</font>
     "warm archive" which is large but slow. It is available read-only on
     the compute hosts and read-write an login and export nodes. To move
     in your data, you might want to use the
-    [datamover nodes](../data_moving/DataMover.md).\</font>\</p>
+    [datamover nodes](../data_transfer/data_mover.md).\</font>\</p>
 
 ## \<font face="Open Sans, sans-serif">Moving Data from /scratch and /lustre/ssd to your workspaces\</font>
 
@@ -63,7 +63,7 @@ face="Open Sans, sans-serif">Data in workspaces will be deleted
 automatically after the grace period.\</font>\<font face="Open Sans,
 sans-serif"> This is especially true for the warm archive. If you want
 to keep your data for a longer time please use our options for
-[long-term storage](PreservationResearchData.md).\</font>
+[long-term storage](preservation_research_data.md).\</font>
 
 \<font face="Open Sans, sans-serif">To \</font>\<font face="Open Sans,
 sans-serif">help you with that, you can attach your email address for
diff --git a/doc.zih.tu-dresden.de/docs/data_management/IntermediateArchive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
similarity index 79%
rename from doc.zih.tu-dresden.de/docs/data_management/IntermediateArchive.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
index e01cef8bd9786c28e0f54727f11f8d88ecd20f3b..2d20726755cf07c9d4a4f9f87d3ae4d2b5825dbc 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/IntermediateArchive.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
@@ -1,18 +1,18 @@
 # Intermediate Archive
 
 With the "Intermediate Archive", ZIH is closing the gap between a normal disk-based file system and
-[Longterm Archive](PreservationResearchData.md). The Intermediate Archive is a hierarchical file
+[Longterm Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical file
 system with disks for buffering and tapes for storing research data.
 
 Its intended use is the storage of research data for a maximal duration of 3 years. For storing the
 data after exceeding this time, the user has to supply essential metadata and migrate the files to
-the [Longterm Archive](PreservationResearchData.md). Until then, she/he has to keep track of her/his
+the [Longterm Archive](preservation_research_data.md). Until then, she/he has to keep track of her/his
 files.
 
 Some more information:
 
 - Maximum file size in the archive is 500 GB (split up your files, see
-  [Datamover](../data_moving/DataMover.md))
+  [Datamover](../data_transfer/data_mover.md))
 - Data will be stored in two copies on tape.
 - The bandwidth to this data is very limited. Hence, this file system
   must not be used directly as input or output for HPC jobs.
@@ -20,7 +20,7 @@ Some more information:
 ## How to access the "Intermediate Archive"
 
 For storing and restoring your data in/from the "Intermediate Archive" you can use the tool
-[Datamover](../data_moving/DataMover.md). To use the DataMover you have to login to Taurus
+[Datamover](../data_transfer/data_mover.md). To use the DataMover you have to login to Taurus
 (taurus.hrsk.tu-dresden.de).
 
 ### Store data
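A hypothetical sketch of storing data with the DataMover; it assumes the `dt*` wrapper commands (here `dtcp`) and an archive mount point `/archiv`, both of which are assumptions rather than details from this page:

```Bash
# Assumption: dtcp is available via the DataMover and the intermediate
# archive is reachable under /archiv (illustrative path):
dtcp -r /scratch/ws/0/myuser-proj42/results/ /archiv/myproject/
```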
diff --git a/doc.zih.tu-dresden.de/docs/data_management/PreservationResearchData.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_management/PreservationResearchData.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
diff --git a/doc.zih.tu-dresden.de/docs/archive/AnnouncementOfQuotas.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/archive/AnnouncementOfQuotas.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
index bc04e86de79a293cdd144fa8d9023abb5e12b970..24665aa573549b6290fae90523450c98fc9d9240 100644
--- a/doc.zih.tu-dresden.de/docs/archive/AnnouncementOfQuotas.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/quotas.md
@@ -35,7 +35,7 @@ In case a project is above its limits, please
         [DMF system](#AnchorDataMigration). Avoid using this system
         (`/hpc_fastfs`) for files < 1 MB!
     -   refer to the hints for
-        [long term preservation for research data](../data_management/PreservationResearchData.md).
+        [long term preservation for research data](../data_lifecycle/preservation_research_data.md).
 
 ## No Alternatives
 
diff --git a/doc.zih.tu-dresden.de/docs/data_management/WarmArchive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/data_management/WarmArchive.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
index 59155c407dcb51a0f1564235207124aef4b77d41..166f83b64ab373f346a385f7772e5ca0a81323a9 100644
--- a/doc.zih.tu-dresden.de/docs/data_management/WarmArchive.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
@@ -14,7 +14,7 @@ Within Taurus (including the HPC-DA nodes), the management software "Quobyte" en
 For external access, you can use:
 
 - S3 to `<bucket>.s3.taurusexport.hrsk.tu-dresden.de`
-- or normal file transfer via our taurusexport nodes (see [DataManagement](DataManagement.md)).
+- or normal file transfer via our taurusexport nodes (see [DataManagement](data_management.md)).
 
 An HPC-DA project can apply for storage space in the warm archive. This is limited in capacity and
 duration.
diff --git a/doc.zih.tu-dresden.de/docs/data_management/Workspaces.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_management/Workspaces.md
rename to doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
diff --git a/doc.zih.tu-dresden.de/docs/data_moving/DataMover.md b/doc.zih.tu-dresden.de/docs/data_transfer/data_mover.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_moving/DataMover.md
rename to doc.zih.tu-dresden.de/docs/data_transfer/data_mover.md
diff --git a/doc.zih.tu-dresden.de/docs/data_moving/data_moving.md b/doc.zih.tu-dresden.de/docs/data_transfer/data_moving.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_moving/data_moving.md
rename to doc.zih.tu-dresden.de/docs/data_transfer/data_moving.md
diff --git a/doc.zih.tu-dresden.de/docs/data_moving/ExportNodes.md b/doc.zih.tu-dresden.de/docs/data_transfer/export_nodes.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/data_moving/ExportNodes.md
rename to doc.zih.tu-dresden.de/docs/data_transfer/export_nodes.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/AlphaCentauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/AlphaCentauri.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
index e7a1368f44b5c6ee48f359548a7216ac9427dedb..13c8c9c8b9892dffb7f60db3cfb00744608df892 100644
--- a/doc.zih.tu-dresden.de/docs/use_of_hardware/AlphaCentauri.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
@@ -29,11 +29,11 @@ cluster:
 1. **Modules**
 1  **Virtual Environments (manual software installation)**
 1. [JupyterHub](https://taurus.hrsk.tu-dresden.de/)
-1. [Containers](../software/Containers.md)
+1. [Containers](../software/containers.md)
 
 ### Modules
 
-The easiest way is using the [module system](../software/Modules.md) and Python virtual environment.
+The easiest way is using the [module system](../software/modules.md) and Python virtual environment.
 Modules are a way to use frameworks, compilers, loader, libraries, and utilities. The software
 environment for the **alpha** partition is available under the name **hiera**:
 
@@ -100,7 +100,7 @@ conda deactivate                            #Leave the virtual environment
 
 New software for data analytics is emerging faster than we can install it. If you urgently need a
 certain version we advise you to manually install it (the machine learning frameworks and required
+packages) in your virtual environment (or use a [container](../software/containers.md)).
+packages) in your virtual environment (or use a [container](../software/containers.md).
 
 The **Virtualenv** example:
 
@@ -163,10 +163,10 @@ moment.
 
 ### JupyterHub
 
-There is [JupyterHub](../software/JupyterHub.md) on Taurus, where you can simply run
+There is [JupyterHub](../software/jupyterhub.md) on Taurus, where you can simply run
 your Jupyter notebook on Alpha-Centauri sub-cluster. Also, for more specific cases you can run a
 manually created remote jupyter server. You can find the manual server setup
-[here](../software/DeepLearning.md). However, the simplest option for beginners is using
+[here](../software/deep_learning.md). However, the simplest option for beginners is using
 JupyterHub.
 
 JupyterHub is available at
@@ -183,7 +183,7 @@ parameter).
 On Taurus [Singularity](https://sylabs.io/) is used as a standard container
 solution. It can be run on the `alpha` partition as well. Singularity enables users to have full
 control of their environment. Detailed information about containers can be found
-[here](../software/Containers.md).
+[here](../software/containers.md).
 
 Nvidia
 [NGC](https://developer.nvidia.com/blog/how-to-run-ngc-deep-learning-containers-with-singularity/)
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/BatchSystems.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/BatchSystems.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
index 3832fc267241b2f2fa260ccca377c387d252213f..06e9be7e7a8ab5efa0ae1272ba6159ac50310e0b 100644
--- a/doc.zih.tu-dresden.de/docs/use_of_hardware/BatchSystems.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/batch_systems.md
@@ -14,7 +14,7 @@ nodes with dedicated resources for user jobs. Normally a job can be submitted wi
 
 Depending on the batch system the syntax differs slightly:
 
-- [Slurm](../jobs/Slurm.md) (taurus, venus)
+- [Slurm](../jobs_and_resources/slurm.md) (taurus, venus)
 
 If you are confused by the different batch systems, you may want to enjoy this [batch system
 commands translation table](http://slurm.schedmd.com/rosetta.pdf).
diff --git a/doc.zih.tu-dresden.de/docs/jobs/BindingAndDistributionOfTasks.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/jobs/BindingAndDistributionOfTasks.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/binding_and_distribution_of_tasks.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/CheckpointRestart.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/CheckpointRestart.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/HardwareTaurus.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
similarity index 97%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/HardwareTaurus.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
index cc76897e3e4ab437ab12f133dee0b09034ab6237..ff28e9b69d95496f299b80b45179f3787ad996cb 100644
--- a/doc.zih.tu-dresden.de/docs/use_of_hardware/HardwareTaurus.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_taurus.md
@@ -34,7 +34,7 @@
     -   200 GB /tmp on local SSD local disk
 -   Hostnames: taurusi\[7001-7192\]
 -   SLURM partition `romeo`
--   more information under [RomeNodes](RomeNodes.md)
+-   more information under [RomeNodes](rome_nodes.md)
 
 ## Large SMP System HPE Superdome Flex
 
@@ -43,7 +43,7 @@
 -   currently configured as one single node
     -   Hostname: taurussmp8
 -   SLURM partition `julia`
--   more information under [HPE SD Flex](SDFlex.md)
+-   more information under [HPE SD Flex](sd_flex.md)
 
 ## IBM Power9 Nodes for Machine Learning
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs/HPCDA.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md
similarity index 71%
rename from doc.zih.tu-dresden.de/docs/jobs/HPCDA.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md
index 11fdd1224d245c974ca4ae63302e397b181ad4bd..acdea9af1e75308acd0a2fe78c8465dfeecef3be 100644
--- a/doc.zih.tu-dresden.de/docs/jobs/HPCDA.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hpcda.md
@@ -33,16 +33,16 @@ src="%ATTACHURL%/bandwidth.png" title="bandwidth.png" width="250" />
 
 ## Hardware Overview
 
-- [Nodes for machine learning (Power9)](../use_of_hardware/Power9.md)
-- [NVMe Storage](../use_of_hardware/NvmeStorage.md) (2 PB)
-- [Warm archive](../data_management/WarmArchive.md) (10 PB)
+- [Nodes for machine learning (Power9)](../jobs_and_resources/power9.md)
+- [NVMe Storage](../jobs_and_resources/nvme_storage.md) (2 PB)
+- [Warm archive](../data_lifecycle/warm_archive.md) (10 PB)
 - HPC nodes (x86) for DA (island 6)
 - Compute nodes with high memory bandwidth:
-  [AMD Rome Nodes](../use_of_hardware/RomeNodes.md) (island 7)
+  [AMD Rome Nodes](../jobs_and_resources/rome_nodes.md) (island 7)
 
 Additional hardware:
 
-- [Multi-GPU-Cluster](../use_of_hardware/AlphaCentauri.md) for projects of SCADS.AI
+- [Multi-GPU-Cluster](../jobs_and_resources/alpha_centauri.md) for projects of SCADS.AI
 
 ## File Systems and Object Storage
 
@@ -53,16 +53,16 @@ Additional hardware:
 
 ## HOWTOS
 
-- [Get started with HPC-DA](../software/GetStartedWithHPCDA.md)
-- [IBM Power AI](../software/PowerAI.md)
+- [Get started with HPC-DA](../software/get_started_with_hpcda.md)
+- [IBM Power AI](../software/power_ai.md)
 - [Work with Singularity Containers on Power9]**todo** Cloud
-- [TensorFlow on HPC-DA (native)](../software/TensorFlow.md)
-- [Tensorflow on Jupyter notebook](../software/TensorFlowOnJupyterNotebook.md)
+- [TensorFlow on HPC-DA (native)](../software/tensor_flow.md)
+- [Tensorflow on Jupyter notebook](../software/tensor_flow_on_jupyter_notebook.md)
 - Create and run your own TensorFlow container for HPC-DA (Power9) (todo: no link at all in old compendium)
-- [TensorFlow on x86](../software/DeepLearning.md)
-- [PyTorch on HPC-DA (Power9)](../software/PyTorch.md)
-- [Python on HPC-DA (Power9)](../software/Python.md)
-- [JupyterHub](../software/JupyterHub.md)
-- [R on HPC-DA (Power9)](../software/DataAnalyticsWithR.md)
+- [TensorFlow on x86](../software/deep_learning.md)
+- [PyTorch on HPC-DA (Power9)](../software/py_torch.md)
+- [Python on HPC-DA (Power9)](../software/python.md)
+- [JupyterHub](../software/jupyterhub.md)
+- [R on HPC-DA (Power9)](../software/data_analytics_with_r.md)
 - [Big Data frameworks: Apache Spark, Apache Flink, Apache Hadoop]
    **todo** BigDataFrameworks:ApacheSparkApacheFlinkApacheHadoop 
diff --git a/doc.zih.tu-dresden.de/docs/jobs/index.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/jobs/index.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/index.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/NvmeStorage.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/NvmeStorage.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/nvme_storage.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/Power9.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/Power9.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/power9.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/RomeNodes.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/RomeNodes.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/rome_nodes.md
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/SDFlex.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware/SDFlex.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md
diff --git a/doc.zih.tu-dresden.de/docs/jobs/Slurm.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/jobs/Slurm.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
index f767f34ff09ab7b7bd1a582dcb3d9b7b3e793c73..2241599fb1c739061a0b50cbc8b8a6e44aae107e 100644
--- a/doc.zih.tu-dresden.de/docs/jobs/Slurm.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
@@ -332,7 +332,7 @@ The SLURM provides several binding strategies to place and bind the tasks and/or
 to cores, sockets and nodes. Note: Keep in mind that the distribution method has a direct impact on
 the execution time of your application. The manipulation of the distribution can either speed up or
 slow down your application. More detailed information about the binding can be found
-[here](BindingAndDistributionOfTasks.md).
+[here](binding_and_distribution_of_tasks.md).
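For illustration, a hedged example using generic Slurm binding and distribution options (the values are not recommendations from this page):

```Bash
# Pin one task per core, distribute tasks block-wise across nodes and
# cyclically across sockets (illustrative values):
srun --ntasks=16 --cpu-bind=cores --distribution=block:cyclic ./my_mpi_app
```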
 
 The default allocation of the tasks/threads for OpenMP, MPI and Hybrid (MPI and OpenMP) are as
 follows.
diff --git a/doc.zih.tu-dresden.de/docs/jobs/SlurmExamples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/jobs/SlurmExamples.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
diff --git a/doc.zih.tu-dresden.de/docs/jobs/SystemTaurus.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
similarity index 95%
rename from doc.zih.tu-dresden.de/docs/jobs/SystemTaurus.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
index 8e45c2994b2890e355794a7d9039e99363b13547..a0edc3365cda8e66d8b2e7fc081a9c8a1040642d 100644
--- a/doc.zih.tu-dresden.de/docs/jobs/SystemTaurus.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/system_taurus.md
@@ -3,12 +3,12 @@
 ## Information about the Hardware
 
 Detailed information on the current HPC hardware can be found
-[here.](../use_of_hardware/HardwareTaurus.md)
+[here.](../jobs_and_resources/hardware_taurus.md)
 
 ## Applying for Access to the System
 
 Project and login application forms for taurus are available
-[here](../access.md).
+[here](../access/access.md).
 
 ## Login to the System
 
@@ -23,7 +23,7 @@ Please note that if you store data on the local disk (e.g. under /tmp),
 it will be on only one of the three nodes. If you relogin and the data
 is not there, you are probably on another node.
 
-You can find an list of fingerprints [here](../access/Login.md#SSH_access).
+You can find a list of fingerprints [here](../access/login.md#SSH_access).
 
 ## Transferring Data from/to Taurus
 
@@ -46,13 +46,13 @@ contact the Service Desk as well.
 **Phase 2:** The nodes taurusexport\[3,4\] provide access to the
 `/scratch` file system of the second phase.
 
-You can find an list of fingerprints [here](../access/Login.md#SSH_access).
+You can find a list of fingerprints [here](../access/login.md#SSH_access).
 
 ## Compiling Parallel Applications
 
 You have to explicitly load a compiler module and an MPI module on
 Taurus. Eg. with `module load GCC OpenMPI`. ( [read more about
-Modules](../software/RuntimeEnvironment.md), **todo link** (read more about
+Modules](../software/runtime_environment.md), **todo link** (read more about
 Compilers)(Compendium.Compilers))
 
 Use the wrapper commands like e.g. `mpicc` (`mpiicc` for intel),
@@ -83,9 +83,9 @@ The batch system on Taurus is Slurm. If you are migrating from LSF
 (deimos, mars, atlas), the biggest difference is that Slurm has no
 notion of batch queues any more.
 
--   [General information on the Slurm batch system](Slurm.md)
+-   [General information on the Slurm batch system](slurm.md)
 -   Slurm also provides process-level and node-level [profiling of
-    jobs](Slurm.md#Job_Profiling)
+    jobs](slurm.md#Job_Profiling)
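A sketch of the job profiling mentioned above, assuming the generic Slurm profiling plugin (`acct_gather_profile`) is enabled; the options are standard Slurm, not site-specific settings:

```Bash
# Collect task-level profiling data for a job step and merge the per-node
# HDF5 files afterwards (requires the acct_gather_profile plugin):
srun --profile=task ./my_app
sh5util -j <jobid> -o profile.h5
```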
 
 ### Partitions
 
@@ -132,7 +132,7 @@ after a given date:
 
 Instead of running one long job, you should split it up into a chain
 job. Even applications that are not capable of chreckpoint/restart can
-be adapted. The HOWTO can be found [here](../use_of_hardware/CheckpointRestart.md),
+be adapted. The HOWTO can be found [here](../jobs_and_resources/checkpoint_restart.md).
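A minimal sketch of such a chain using Slurm job dependencies (script names are placeholders):

```Bash
# Submit the first part, then chain the next part so that it only starts
# after the previous one has finished successfully:
JOBID=$(sbatch --parsable part1.sh)
sbatch --dependency=afterok:${JOBID} part2.sh
```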
 
 ### Memory Limits
 
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/use_of_hardware.md
similarity index 97%
rename from doc.zih.tu-dresden.de/docs/use_of_hardware.md
rename to doc.zih.tu-dresden.de/docs/jobs_and_resources/use_of_hardware.md
index 605c9561e8ca41020bc89f6ce04a3bf367b99997..a12b26c37a2d5923ed9de51f0a80e5700c612132 100644
--- a/doc.zih.tu-dresden.de/docs/use_of_hardware.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/use_of_hardware.md
@@ -2,7 +2,7 @@
 
 To run the software, do some calculations or compile your code compute nodes have to be used. Login
 nodes which are using for login can not be used for your computations. Submit your tasks (by using
-[jobs]**todo link**) to compute nodes. The [Slurm](jobs/index.md) (scheduler to handle your jobs) is
+[jobs]**todo link**) to compute nodes. The [Slurm](slurm.md) (scheduler to handle your jobs) is
 using on Taurus for this purposes. [HPC Introduction]**todo link** is a good resource to get started
 with it.
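A minimal, hedged example of handing work to the compute nodes via Slurm (job name, resources and time limit are placeholders):

```Bash
#!/bin/bash
#SBATCH --job-name=hello_hpc     # illustrative values only
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

srun ./my_program
```

Saved e.g. as `my_job.sh`, this would be submitted with `sbatch my_job.sh`.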
 
diff --git a/doc.zih.tu-dresden.de/docs/software/Applications.md b/doc.zih.tu-dresden.de/docs/software/applications.md
similarity index 71%
rename from doc.zih.tu-dresden.de/docs/software/Applications.md
rename to doc.zih.tu-dresden.de/docs/software/applications.md
index bdf776633769b01628a953d71d607bcb3d03f363..7a939447e6a8fee0d5525f80cb8c18f6192f1ff4 100644
--- a/doc.zih.tu-dresden.de/docs/software/Applications.md
+++ b/doc.zih.tu-dresden.de/docs/software/applications.md
@@ -5,25 +5,25 @@ descriptions are taken from the vendor's web site or from
 Wikipedia.org.)
 
 Before running an application you normally have to load a
-[module](../software/RuntimeEnvironment.md#modules). Please read the instructions given
+[module](../software/runtime_environment.md#modules). Please read the instructions given
 while loading the module, they might be more up-to-date than this
 manual.
 
 -   **TODO Link** (Complete List of Modules)(SoftwareModulesList)
--   [Using Software Modules](../software/RuntimeEnvironment.md#modules)
+-   [Using Software Modules](../software/runtime_environment.md#modules)
 
 <!-- -->
 
--   [Mathematics](../software/Mathematics.md)
--   [Nanoscale Simulations](../software/NanoscaleSimulations.md)
--   [FEM Software](../software/FEMSoftware.md)
--   [Computational Fluid Dynamics](../software/CFD.md)
--   [Deep Learning](../software/DeepLearning.md)
+-   [Mathematics](../software/mathematics.md)
+-   [Nanoscale Simulations](../software/nanoscale_simulations.md)
+-   [FEM Software](../software/fem_software.md)
+-   [Computational Fluid Dynamics](../software/cfd.md)
+-   [Deep Learning](../software/deep_learning.md)
 
 <!-- -->
 
--   [Visualization Tools](../software/Visualization.md),
-    [Remote Rendering on GPU nodes](../access/DesktopCloudVisualization.md)
+-   [Visualization Tools](../software/visualization.md),
+    [Remote Rendering on GPU nodes](../access/desktop_cloud_visualization.md)
 -   UNICORE support has been abandoned and so this way of access is no
     longer available.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/Bioinformatics.md b/doc.zih.tu-dresden.de/docs/software/bioinformatics.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/Bioinformatics.md
rename to doc.zih.tu-dresden.de/docs/software/bioinformatics.md
diff --git a/doc.zih.tu-dresden.de/docs/software/BuildingSoftware.md b/doc.zih.tu-dresden.de/docs/software/building_software.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/BuildingSoftware.md
rename to doc.zih.tu-dresden.de/docs/software/building_software.md
diff --git a/doc.zih.tu-dresden.de/docs/software/CFD.md b/doc.zih.tu-dresden.de/docs/software/cfd.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/CFD.md
rename to doc.zih.tu-dresden.de/docs/software/cfd.md
diff --git a/doc.zih.tu-dresden.de/docs/software/Cloud.md b/doc.zih.tu-dresden.de/docs/software/cloud.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/software/Cloud.md
rename to doc.zih.tu-dresden.de/docs/software/cloud.md
index 3819bdc56ceab064bfe46a722b3e1168324e6659..5104c7b35587aaeaca86d64419ffd8965d2fa27b 100644
--- a/doc.zih.tu-dresden.de/docs/software/Cloud.md
+++ b/doc.zih.tu-dresden.de/docs/software/cloud.md
@@ -1,7 +1,7 @@
 # Virtual machine on Taurus
 
 The following instructions are primarily aimed at users who want to build their
-[Singularity](Containers.md) containers on Taurus.
+[Singularity](containers.md) containers on Taurus.
 
 The Singularity container setup requires a Linux machine with root privileges, the same architecture
 and a compatible kernel. If some of these requirements can not be fulfilled, then there is
@@ -60,7 +60,7 @@ Last login: Fri Jul 24 13:53:48 2020 from gateway
 
 ## Automation
 
-We provide [Tools](VMTools.md) to automate these steps. You may just type `startInVM --arch=power9`
+We provide [Tools](vm_tools.md) to automate these steps. You may just type `startInVM --arch=power9`
 on a tauruslogin node and you will be inside the VM with everything mounted.
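
A minimal sketch of that workflow, assuming a definition file `myDefinition.def` already exists (the build step inside the VM is an illustration, not output of the tool):

```Bash
# On a tauruslogin node: start the VM for the Power9 architecture
startInVM --arch=power9
# Inside the VM (root rights are available there), build the container as usual
singularity build myContainer.sif myDefinition.def
```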
 
 ## Known Issues
diff --git a/doc.zih.tu-dresden.de/docs/software/Compilers.md b/doc.zih.tu-dresden.de/docs/software/compilers.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/Compilers.md
rename to doc.zih.tu-dresden.de/docs/software/compilers.md
diff --git a/doc.zih.tu-dresden.de/docs/software/Containers.md b/doc.zih.tu-dresden.de/docs/software/containers.md
similarity index 97%
rename from doc.zih.tu-dresden.de/docs/software/Containers.md
rename to doc.zih.tu-dresden.de/docs/software/containers.md
index f1a312891e88c224ab94b9e46282baf20d297af6..34a5af44186c6d35150e2b9d20e14e347a9bf430 100644
--- a/doc.zih.tu-dresden.de/docs/software/Containers.md
+++ b/doc.zih.tu-dresden.de/docs/software/containers.md
@@ -26,9 +26,9 @@ Existing Docker containers can easily be converted.
 
 ZIH wiki sites:
 
-- [Example Definitions](SingularityExampleDefinitions.md)
-- [Building Singularity images on Taurus](VMTools.md)
-- [Hints on Advanced usage](SingularityRecipeHints.md)
+- [Example Definitions](singularity_example_definitions.md)
+- [Building Singularity images on Taurus](vm_tools.md)
+- [Hints on Advanced usage](singularity_recipe_hints.md)
 
 It is available on Taurus without loading any module.
 
@@ -79,7 +79,7 @@ the necessary privileges and then simply copy your container file to Taurus and
 
 This does not work on our **ml** partition, as it uses Power9 as its architecture which is
 different to the x86 architecture in common computers/laptops. For that you can use the
-[VM Tools](VMTools.md).
+[VM Tools](vm_tools.md).
 
 #### Creating a container
 
@@ -89,7 +89,7 @@ Creating a container is done by writing a definition file and passing it to
 singularity build myContainer.sif myDefinition.def
 ```
 
-NOTE: This must be done on a machine (or [VM](Cloud.md) with root rights.
+NOTE: This must be done on a machine (or [VM](cloud.md)) with root rights.
 
 A definition file contains a bootstrap
 [header](https://sylabs.io/guides/3.2/user-guide/definition_files.html#header)
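
As a rough sketch (not taken from the original page), a minimal definition file could be written and built like this; the base image and installed package are placeholders:

```Bash
# Write a minimal definition file: bootstrap header plus a %post section
cat > myDefinition.def <<'EOF'
Bootstrap: docker
From: ubuntu:20.04

%post
    apt-get update -y && apt-get install -y python3
EOF

# Build it on a machine or VM with root rights
singularity build myContainer.sif myDefinition.def
```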
diff --git a/doc.zih.tu-dresden.de/docs/software/CustomEasyBuildEnvironment.md b/doc.zih.tu-dresden.de/docs/software/custom_easy_build_environment.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/CustomEasyBuildEnvironment.md
rename to doc.zih.tu-dresden.de/docs/software/custom_easy_build_environment.md
diff --git a/doc.zih.tu-dresden.de/docs/software/Dask.md b/doc.zih.tu-dresden.de/docs/software/dask.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/software/Dask.md
rename to doc.zih.tu-dresden.de/docs/software/dask.md
index 2b872c94ef0e376ed24614b9887a4a7d271ccbca..d6f7d087e8f39fb884a85834f807a4a91d236216 100644
--- a/doc.zih.tu-dresden.de/docs/software/Dask.md
+++ b/doc.zih.tu-dresden.de/docs/software/dask.md
@@ -49,8 +49,8 @@ Create a conda virtual environment. We would recommend using a workspace. See th
 
 **Note:** You could work with simple examples in your home directory (where you are loading by
 default). However, in accordance with the
-[HPC storage concept](../data_management/HPCStorageConcept2019.md) please use a
-[workspaces](../data_management/Workspaces.md) for your study and work projects.
+[HPC storage concept](../data_lifecycle/hpc_storage_concept2019.md) please use
+[workspaces](../data_lifecycle/workspaces.md) for your study and work projects.
 
 ```Bash
 conda create --prefix /scratch/ws/0/aabc1234-Workproject/conda-virtual-environment/dask-test python=3.6
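# (Sketch, not part of the original snippet.) After creating the environment it would
# typically be activated and Dask installed into it; the workspace path above is an example.
conda activate /scratch/ws/0/aabc1234-Workproject/conda-virtual-environment/dask-test
conda install dask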
diff --git a/doc.zih.tu-dresden.de/docs/software/DataAnalyticsWithR.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
similarity index 89%
rename from doc.zih.tu-dresden.de/docs/software/DataAnalyticsWithR.md
rename to doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
index 423ddde6a8019474a2baea9de13738085f6b84f5..cd079210e8d4ecd40c0ad4b46370a7dc8b91dee7 100644
--- a/doc.zih.tu-dresden.de/docs/software/DataAnalyticsWithR.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_r.md
@@ -13,23 +13,24 @@ learning algorithms, linear regression, time series, statistical inference.
 general as well as on the HPC-DA system.
 
 **Prerequisites:** To work with the R on Taurus you obviously need access for the Taurus system and
-basic knowledge about programming and [Slurm](../jobs/Slurm.md) system.
+basic knowledge about programming and [Slurm](../jobs_and_resources/slurm.md) system.
 
 For general information on using the HPC-DA system, see the
-[Get started with HPC-DA system](GetStartedWithHPCDA.md) page.
+[Get started with HPC-DA system](get_started_with_hpcda.md) page.
 
 You can also find the information you need on the HPC-Introduction and HPC-DA-Introduction
 presentation slides.
 
-We recommend using **Haswell** and/or [Romeo](../use_of_hardware/RomeNodes.md) partitions to work
+We recommend using **Haswell** and/or [Romeo](../jobs_and_resources/rome_nodes.md) partitions to work
 with R. Please use the ml partition only if you need GPUs!
 
 ## R Console
 
 This is a quickstart example. The `srun` command is used to submit a real-time execution job
 designed for interactive use with output monitoring. Please check
-[the Slurm page](../jobs/Slurm.md) for details. The R language available for both types of Taurus
-nodes/architectures x86 (scs5 software environment) and Power9 (ml software environment).
+[the Slurm page](../jobs_and_resources/slurm.md) for details. The R language is available for both
+types of Taurus nodes/architectures: x86 (scs5 software environment) and Power9 (ml software
+environment).
 
 ### Haswell Partition
 
@@ -51,19 +52,19 @@ R
 
 Here are the parameters of the job with all the details to show you the correct and optimal way to
 do it. Please allocate the job with respect to
-[hardware specification](../use_of_hardware/HardwareTaurus.md)! Besides, it should be noted that the
-value of the `--mem-per-cpu` parameter is different for the different partitions. it is
-important to respect [memory limits](../jobs/SystemTaurus.md#memory-limits).
-Please note that the default limit is 300 MB per cpu.
+[hardware specification](../jobs_and_resources/hardware_taurus.md)! Note that the value of the
+`--mem-per-cpu` parameter differs between partitions. It is important to respect the
+[memory limits](../jobs_and_resources/system_taurus.md#memory-limits). Please note that the
+default limit is 300 MB per CPU.
 
 However, using srun directly on the shell will lead to blocking and launch an interactive job. Apart
 from short test runs, it is **recommended to launch your jobs into the background by using batch
 jobs**. For that, you can conveniently place the parameters directly into the job file which can be
 submitted using `sbatch [options] <job file>`.
-The examples could be found [here](GetStartedWithHPCDA.md) or [here](../jobs/Slurm.md). Furthermore,
-you could work with simple examples in your home directory but according to
-[storage concept](../data_management/HPCStorageConcept2019.md) **please use**
-[workspaces](../data_management/Workspaces.md) **for your study and work projects!**
+Examples can be found [here](get_started_with_hpcda.md) or
+[here](../jobs_and_resources/slurm.md). Furthermore, you can work with simple examples in your
+home directory, but according to the [storage concept](../data_lifecycle/hpc_storage_concept2019.md)
+**please use** [workspaces](../data_lifecycle/workspaces.md) **for your study and work projects!**
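
A batch job file for R might look like the following sketch; the resource values are illustrative only and must be adjusted to the hardware specification and memory limits mentioned above:

```Bash
#!/bin/bash
#SBATCH --partition=haswell       # example partition
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2500        # respect the per-partition memory limits

module load modenv/scs5
module load R
Rscript /path/to/script/your_script.R param1 param2
```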
 
 It is also possible to run Rscript directly (after loading the module):
 
@@ -76,11 +77,11 @@ Rscript /path/to/script/your_script.R param1 param2
 
 In addition to using interactive srun jobs and batch jobs, there is another way to work with the
 **R** on Taurus. JupyterHub is a quick and easy way to work with jupyter notebooks on Taurus.
-See the [JupyterHub page](JupyterHub.md) for detailed instructions.
+See the [JupyterHub page](jupyterhub.md) for detailed instructions.
 
-The [production environment](JupyterHub.md#standard-environments) of JupyterHub contains R as a module
+The [production environment](jupyterhub.md#standard-environments) of JupyterHub contains R as a module
 for all partitions. R could be run in the Notebook or Console for
-[JupyterLab](JupyterHub.md#jupyterlab).
+[JupyterLab](jupyterhub.md#jupyterlab).
 
 ## RStudio
 
@@ -92,7 +93,7 @@ x86 (scs5) and Power9 (ml) nodes/architectures.
 The best option to run RStudio is to use JupyterHub. RStudio will work in a browser. It is currently
 available in the **test** environment on both x86 (**scs5**) and Power9 (**ml**)
 architectures/partitions. It can be started similarly as a new kernel from
-[JupyterLab](JupyterHub.md#jupyterlab) launcher. See the picture below.
+[JupyterLab](jupyterhub.md#jupyterlab) launcher. See the picture below.
 
 **todo** image
 \<img alt="environments.png" height="70"
@@ -105,8 +106,8 @@ title="Launcher.png" width="195" />
 
 Please keep in mind that it is not currently recommended to use the interactive x11 job with the
 desktop version of Rstudio, as described, for example,
-[here](../jobs/Slurm.md#interactive-jobs) or in introduction HPC-DA slides. This method is
-unstable.
+[here](../jobs_and_resources/slurm.md#interactive-jobs) or in the introductory HPC-DA slides. This
+method is unstable.
 
 ## Install Packages in R
 
@@ -160,7 +161,7 @@ which R
 ```
 
 Please allocate the job with respect to
-[hardware specification](../use_of_hardware/HardwareTaurus.md)! Note that the ML nodes have
+[hardware specification](../jobs_and_resources/hardware_taurus.md)! Note that the ML nodes have
 4way-SMT, so for every physical core allocated, you will always get 4\*1443mb =5772mb.
 
 To configure "reticulate" R library to point to the Python executable in your virtual environment,
@@ -226,13 +227,13 @@ code to use mclapply function. Check out an [example]**todo** %ATTACHURL%/multic
 shared-memory parallelism approach that it is limited by the number of cores(cpus) on a single node.
 
 **Important:** Please allocate the job with respect to
-[hardware specification](../use_of_hardware/HardwareTaurus.md). The current maximum number of
+[hardware specification](../jobs_and_resources/hardware_taurus.md). The current maximum number of
 processors (read as cores) for an SMP-parallel program on Taurus is 56 (smp2 partition), for the
 Haswell partition, it is a 24.  The large SMP system (Julia) is coming soon with a total number of
 896 nodes.
 
 Submitting a multicore R job to Slurm is very similar to
-[Submitting an OpenMP Job](../jobs/Slurm.md#binding-and-distribution-of-tasks)
+[Submitting an OpenMP Job](../jobs_and_resources/slurm.md#binding-and-distribution-of-tasks)
 since both are running multicore jobs on a **single** node. Below is an example:
 
 ```Bash
@@ -269,7 +270,7 @@ This way of the R parallelism uses the
 [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) (Message Passing Interface) as a
 "backend" for its parallel operations.  Parallel R codes submitting a multinode MPI R job to SLURM
 is very similar to
-[submitting an MPI Job](../jobs/Slurm.md#binding-and-distribution-of-tasks)
+[submitting an MPI Job](../jobs_and_resources/slurm.md#binding-and-distribution-of-tasks)
 since both are running multicore jobs on multiple nodes. Below is an example of running R script
 with the Rmpi on Taurus:
 
@@ -319,7 +320,7 @@ To use Rmpi and MPI please use one of these partitions: **Haswell**, **Broadwell
 **Important:** Please allocate the required number of nodes and cores according to the hardware
 specification: 1 Haswell's node: 2 x [Intel Xeon (12 cores)]; 1 Broadwell's Node: 2 x [Intel Xeon
 (14 cores)]; 1 Rome's node: 2 x [AMD EPYC (64 cores)]. Please also check the
-[hardware specification](../use_of_hardware/HardwareTaurus.md) (number of nodes etc). The `sinfo`
+[hardware specification](../jobs_and_resources/hardware_taurus.md) (number of nodes etc). The `sinfo`
 command gives you a quick overview of the status of partitions.
 
 Please use `mpirun` command to run the Rmpi script. It is a wrapper that enables the communication
diff --git a/doc.zih.tu-dresden.de/docs/software/Debuggers.md b/doc.zih.tu-dresden.de/docs/software/debuggers.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/Debuggers.md
rename to doc.zih.tu-dresden.de/docs/software/debuggers.md
diff --git a/doc.zih.tu-dresden.de/docs/software/DeepLearning.md b/doc.zih.tu-dresden.de/docs/software/deep_learning.md
similarity index 93%
rename from doc.zih.tu-dresden.de/docs/software/DeepLearning.md
rename to doc.zih.tu-dresden.de/docs/software/deep_learning.md
index 5eee674f447668a9d1f7f4119505af3941a2beb0..14f64769cbe8c17a7eb8a11f28e0105e736c4355 100644
--- a/doc.zih.tu-dresden.de/docs/software/DeepLearning.md
+++ b/doc.zih.tu-dresden.de/docs/software/deep_learning.md
@@ -1,6 +1,6 @@
 # Deep learning
 
-**Prerequisites**: To work with Deep Learning tools you obviously need [Login](../access/Login.md)
+**Prerequisites**: To work with Deep Learning tools you obviously need [Login](../access/login.md)
 for the Taurus system and basic knowledge about Python, SLURM manager.
 
 **Aim** of this page is to introduce users on how to start working with Deep learning software on
@@ -14,22 +14,22 @@ both the ml environment and the scs5 environment of the Taurus system.
 for dataflow and differentiable programming across a range of tasks.
 
 TensorFlow is available in both main partitions
-[ml environment and scs5 environment](Modules.md#module-environments)
+[ml environment and scs5 environment](modules.md#module-environments)
 under the module name "TensorFlow". However, for purposes of machine learning and deep learning, we
-recommend using Ml partition [HPC-DA](../jobs/HPCDA.md). For example:
+recommend using the ml partition [HPC-DA](../jobs_and_resources/hpcda.md). For example:
 
 ```Bash
 module load TensorFlow
 ```
 
-There are numerous different possibilities on how to work with [TensorFlow](TensorFlow.md) on
+There are numerous ways to work with [TensorFlow](tensor_flow.md) on
 Taurus. On this page, for all examples default, scs5 partition is used. Generally, the easiest way
-is using the [modules system](Modules.md)
+is using the [modules system](modules.md)
 and Python virtual environment (test case). However, in some cases, you may need directly installed
 Tensorflow stable or night releases. For this purpose use the
-[EasyBuild](CustomEasyBuildEnvironment.md), [Containers](TensorFlowContainerOnHPCDA.md) and see
+[EasyBuild](custom_easy_build_environment.md), [Containers](tensor_flow_container_on_hpcda.md) and see
 [the example](https://www.tensorflow.org/install/pip). For examples of using TensorFlow for ml partition
-with module system see [TensorFlow page for HPC-DA](TensorFlow.md).
+with module system see [TensorFlow page for HPC-DA](tensor_flow.md).
 
 Note: If you are going used manually installed Tensorflow release we recommend use only stable
 versions.
@@ -38,15 +38,15 @@ versions.
 
 [Keras](https://keras.io/) is a high-level neural network API, written in Python and capable of
 running on top of [TensorFlow](https://github.com/tensorflow/tensorflow) Keras is available in both
-environments [ml environment and scs5 environment](Modules.md#module-environments) under the module
+environments [ml environment and scs5 environment](modules.md#module-environments) under the module
 name "Keras".
 
 On this page for all examples default scs5 partition used. There are numerous different
-possibilities on how to work with [TensorFlow](TensorFlow.md) and Keras
-on Taurus. Generally, the easiest way is using the [module system](Modules.md) and Python
+possibilities on how to work with [TensorFlow](tensor_flow.md) and Keras
+on Taurus. Generally, the easiest way is using the [module system](modules.md) and Python
 virtual environment (test case) to see Tensorflow part above.
 For examples of using Keras for ml partition with the module system see the 
-[Keras page for HPC-DA](Keras.md).
+[Keras page for HPC-DA](keras.md).
 
 It can either use TensorFlow as its backend. As mentioned in Keras documentation Keras capable of
 running on Theano backend. However, due to the fact that Theano has been abandoned by the
@@ -117,7 +117,7 @@ public datasets without downloading it (for example
 If you still need to download some datasets, first of all, be careful with the size of the datasets
 which you would like to download (some of them have a size of few Terabytes). Don't download what
 you really not need to use! Use login nodes only for downloading small files (hundreds of the
-megabytes). For downloading huge files use [DataMover](../data_moving/DataMover.md).
+megabytes). For downloading huge files use [DataMover](../data_transfer/data_mover.md).
 For example, you can use command `dtwget` (it is an analogue of the general wget
 command). This command submits a job to the data transfer machines.  If you need to download or
 allocate massive files (more than one terabyte) please contact the support before.
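
A short sketch of such a transfer (the workspace path and URL are placeholders):

```Bash
# Change into a workspace and fetch a small archive via the data transfer machines
cd /scratch/ws/0/aabc1234-myworkspace
dtwget https://example.com/small_dataset.tar.gz
```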
@@ -144,7 +144,7 @@ jupyterhub.
 These sections show how to run and set up a remote jupyter server within a sbatch GPU job and which
 modules and packages you need for that.
 
-**Note:** On Taurus, there is a [JupyterHub](JupyterHub.md), where you do not need the manual server
+**Note:** On Taurus, there is a [JupyterHub](jupyterhub.md), where you do not need the manual server
 setup described below and can simply run your Jupyter notebook on HPC nodes. Keep in mind that with
 Jupyterhub you can't work with some special instruments. However general data analytics tools are
 available.
@@ -270,7 +270,7 @@ Start the script above (e.g. with the name jnotebook) with sbatch command:
 sbatch jnotebook.slurm
 ```
 
-If you have a question about sbatch script see the article about [Slurm](../jobs/Slurm.md).
+If you have a question about sbatch script see the article about [Slurm](../jobs_and_resources/slurm.md).
 
 Check by the command: `tail notebook_output.txt` the status and the **token** of the server. It
 should look like this:
@@ -313,7 +313,7 @@ important to use SSL cert
 To login into the jupyter notebook site, you have to enter the **token**.
 (`https://localhost:8887`). Now you can create and execute notebooks on Taurus with GPU support.
 
-If you would like to use [JupyterHub](JupyterHub.md) after using a remote manually configurated
+If you would like to use [JupyterHub](jupyterhub.md) after using a remote manually configurated
 jupyter server (example above) you need to change the name of the configuration file
 (`/home//.jupyter/jupyter_notebook_config.py`) to any other.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/FEMSoftware.md b/doc.zih.tu-dresden.de/docs/software/fem_software.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/FEMSoftware.md
rename to doc.zih.tu-dresden.de/docs/software/fem_software.md
diff --git a/doc.zih.tu-dresden.de/docs/software/GetStartedWithHPCDA.md b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
similarity index 93%
rename from doc.zih.tu-dresden.de/docs/software/GetStartedWithHPCDA.md
rename to doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
index 1c14c5d346050d50992270cecbe7eb3ea9dab582..8740bfd78ae5b4f5c8d9f6138ed7f64a23ae5f09 100644
--- a/doc.zih.tu-dresden.de/docs/software/GetStartedWithHPCDA.md
+++ b/doc.zih.tu-dresden.de/docs/software/get_started_with_hpcda.md
@@ -9,7 +9,7 @@ and tasks connected with the big data.
 The main **aim** of this guide is to help users who have started working with Taurus and focused on
 working with Machine learning frameworks such as TensorFlow or Pytorch.
 
-**Prerequisites:** To work with HPC-DA, you need [Login](../access/Login.md) for the Taurus system
+**Prerequisites:** To work with HPC-DA, you need [Login](../access/login.md) for the Taurus system
 and preferably have basic knowledge about High-Performance computers and Python.
 
 **Disclaimer:** This guide provides the main steps on the way of using Taurus, for details please
@@ -27,7 +27,7 @@ architecture from IBM. HPC-DA created from
 for AI challenges, analytics and working with, Machine learning, data-intensive workloads,
 deep-learning frameworks and accelerated databases. POWER9 is the processor with state-of-the-art
 I/O subsystem technology, including next-generation NVIDIA NVLink, PCIe Gen4 and OpenCAPI.
-[Here](../use_of_hardware/Power9.md) you could find a detailed specification of the TU Dresden
+[Here](../jobs_and_resources/power9.md) you could find a detailed specification of the TU Dresden
 HPC-DA system.
 
 The main feature of the Power9 architecture (ppc64le) is the ability to work the
@@ -59,7 +59,7 @@ during the access procedure. Accept the host verifying and enter your password.
 
 This method requires two conditions:
 Linux OS, workstation within the campus network. For other options and
-details check the [login page](../access/Login.md).
+details check the [login page](../access/login.md).
 
 ## Data management
 
@@ -68,8 +68,8 @@ details check the [login page](../access/Login.md).
 As soon as you have access to HPC-DA you have to manage your data. The main method of working with
 data on Taurus is using Workspaces.  You could work with simple examples in your home directory
 (where you are loading by default). However, in accordance with the 
-[storage concept](../data_management/HPCStorageConcept2019.md)
-**please use** a [workspace](../data_management/Workspaces.md)
+[storage concept](../data_lifecycle/hpc_storage_concept2019.md)
+**please use** a [workspace](../data_lifecycle/workspaces.md)
 for your study and work projects.
 
 You should create your workspace with a similar command:
@@ -97,7 +97,7 @@ consider the following points:
 
 #### Moving data to/from the HPC machines
 
-To copy data to/from the HPC machines, the Taurus [export nodes](../data_moving/ExportNodes.md)
+To copy data to/from the HPC machines, the Taurus [export nodes](../data_transfer/export_nodes.md)
 should be used. They are the preferred way to transfer your data. There are three possibilities to
 exchanging data between your local machine (lm) and the HPC machines (hm): **SCP, RSYNC, SFTP**.
 
@@ -122,8 +122,8 @@ scp -r &lt;zih-user&gt;@taurusexport.hrsk.tu-dresden.de:&lt;directory&gt; &lt;ta
 
 #### Moving data inside the HPC machines. Datamover
 
-The best way to transfer data inside the Taurus is the [data mover](../data_moving/DataMover.md). It
-is the special data transfer machine providing the global file systems of each ZIH HPC system.
+The best way to transfer data inside the Taurus is the [data mover](../data_transfer/data_mover.md).
+It is the special data transfer machine providing the global file systems of each ZIH HPC system.
 Datamover provides the best data speed. To load, move, copy etc.  files from one file system to
 another file system, you have to use commands with **dt** prefix, such as:
 
@@ -149,7 +149,7 @@ Job submission can be done with the command: `-srun [options] <command>.`
 
 This is a simple example which you could use for your start. The `srun` command is used to submit a
 job for execution in real-time designed for interactive use, with monitoring the output. For some
-details please check [the Slurm page](../jobs/Slurm.md).
+details please check [the Slurm page](../jobs_and_resources/slurm.md).
 
 ```Bash
 srun -p ml -N 1 --gres=gpu:1 --time=01:00:00 --pty --mem-per-cpu=8000 bash   #Job submission in ml nodes with allocating: 1 node, 1 gpu per node, with 8000 mb on 1 hour.
@@ -198,7 +198,7 @@ There are three main options on how to work with Tensorflow and PyTorch:
 
 ### Modules
 
-The easiest way is using the [modules system](Modules.md) and Python virtual environment. Modules
+The easiest way is using the [modules system](modules.md) and Python virtual environment. Modules
 are a way to use frameworks, compilers, loader, libraries, and utilities. The module is a user
 interface that provides utilities for the dynamic modification of a user's environment without
 manual modifications. You could use them for srun, batch jobs (sbatch) and the Jupyterhub.
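
A typical module workflow could look like this sketch (the environment and framework names are examples taken from this guide):

```Bash
module avail                # list software available in the current module environment
module load modenv/ml       # switch to the environment of the ml partition
module load TensorFlow      # load a framework module
module list                 # check what is currently loaded
```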
@@ -263,9 +263,9 @@ with TensorFlow on Taurus with GUI (graphic user interface) in a **web browser**
 to see intermediate results step by step of your work. This can be useful for users who dont have
 huge experience with HPC or Linux.
 
-There is [JupyterHub](JupyterHub.md) on Taurus, where you can simply run your Jupyter notebook on
+There is [JupyterHub](jupyterhub.md) on Taurus, where you can simply run your Jupyter notebook on
 HPC nodes. Also, for more specific cases you can run a manually created remote jupyter server. You
-can find the manual server setup [here](DeepLearning.md). However, the simplest option for
+can find the manual server setup [here](deep_learning.md). However, the simplest option for
 beginners is using JupyterHub.
 
 JupyterHub is available at
@@ -277,14 +277,14 @@ You can select the required number of CPUs and GPUs. For the acquaintance with t
 the examples below the recommended amount of CPUs and 1 GPU will be enough.
 With the advanced form, you can use
 the configuration with 1 GPU and 7 CPUs. To access for all your workspaces use " / " in the
-workspace scope. Please check updates and details [here](JupyterHub.md).
+workspace scope. Please check updates and details [here](jupyterhub.md).
 
 Several Tensorflow and PyTorch examples for the Jupyter notebook have been prepared based on some
 simple tasks and models which will give you an understanding of how to work with ML frameworks and
 JupyterHub. It could be found as the [attachment] **todo** %ATTACHURL%/machine_learning_example.py
 in the bottom of the page. A detailed explanation and examples for TensorFlow can be found
-[here](TensorFlowOnJupyterNotebook.md). For the Pytorch - [here](PyTorch.md).  Usage information
-about the environments for the JupyterHub could be found [here](JupyterHub.md) in the chapter
+[here](tensor_flow_on_jupyter_notebook.md). For PyTorch, see [here](py_torch.md). Usage information
+about the environments for the JupyterHub can be found [here](jupyterhub.md) in the chapter
 *Creating and using your own environment*.
 
 Versions: TensorFlow 1.14, 1.15, 2.0, 2.1; PyTorch 1.1, 1.3 are
@@ -327,11 +327,11 @@ page of the container.
 
 To use not a pure Tensorflow, PyTorch but also with some Python packages
 you have to use the definition file to create the container
-(bootstrapping). For details please see the [Container](Containers.md) page
+(bootstrapping). For details please see the [Container](containers.md) page
 from our wiki. Bootstrapping **has required root privileges** and
 Virtual Machine (VM) should be used! There are two main options on how
-to work with VM on Taurus: [VM tools](VMTools.md) - automotive algorithms
-for using virtual machines; [Manual method](Cloud.md) - it requires more
+to work with VM on Taurus: [VM tools](vm_tools.md) - automated workflows
+for using virtual machines; [Manual method](cloud.md) - it requires more
 operations but gives you more flexibility and reliability.
 
 - [machine_learning_example.py] **todo** %ATTACHURL%/machine_learning_example.py:
diff --git a/doc.zih.tu-dresden.de/docs/software/GPUProgramming.md b/doc.zih.tu-dresden.de/docs/software/gpu_programming.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/GPUProgramming.md
rename to doc.zih.tu-dresden.de/docs/software/gpu_programming.md
diff --git a/doc.zih.tu-dresden.de/docs/software/JupyterHub.md b/doc.zih.tu-dresden.de/docs/software/jupyterhub.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/software/JupyterHub.md
rename to doc.zih.tu-dresden.de/docs/software/jupyterhub.md
index f7b34b52250b8426e40b8c61eca4896eaf7b8d2f..7d75b2a99f457b0461c5fd5952eeb6ea2a7b8980 100644
--- a/doc.zih.tu-dresden.de/docs/software/JupyterHub.md
+++ b/doc.zih.tu-dresden.de/docs/software/jupyterhub.md
@@ -6,7 +6,7 @@ work with jupyter notebooks on Taurus.
 Subpages:
 
 -   [JupyterHub for Teaching (git-pull feature, quickstart links, direct
-    links to notebook files)](JupyterHubForTeaching.md)
+    links to notebook files)](jupyterhub_for_teaching.md)
 
 ## Disclaimer
 
@@ -19,7 +19,7 @@ support in every case.
 ## Access
 
 <span style="color:red">**NOTE**</span> This service is only available for users with
-an active HPC project. See [here](../access.md) how to apply for an HPC
+an active HPC project. See [here](../access/access.md) how to apply for an HPC
 project.
 
 JupyterHub is available here:\
@@ -41,10 +41,10 @@ For advanced users we have an extended form where you can change many
 settings. You can:
 
 -   modify Slurm parameters to your needs ( [more about
-    Slurm](../jobs/Slurm.md))
+    Slurm](../jobs_and_resources/slurm.md))
 -   assign your session to a project or reservation
 -   load modules from the [LMOD module
-    system](../software/RuntimeEnvironment.md)
+    system](../software/runtime_environment.md)
 -   choose a different standard environment (in preparation for future
     software updates or testing additional features)
 
@@ -160,8 +160,8 @@ This message often appears instantly if your Slurm parameters are not
 valid. Please check those settings against the available hardware.
 Useful pages for valid Slurm parameters:
 
--   [Slurm batch system (Taurus)] **TODO LINK** (../jobs/SystemTaurus#Batch_System)
--   [General information how to use Slurm](../jobs/Slurm.md)
+-   [Slurm batch system (Taurus)] **TODO LINK** (../jobs_and_resources/SystemTaurus#Batch_System)
+-   [General information how to use Slurm](../jobs_and_resources/slurm.md)
 
 ### Error message in JupyterLab
 
@@ -224,7 +224,7 @@ Here's a short list of some included software:
 
 \* generic = all partitions except ml
 
-\*\* R is loaded from the [module system](../software/RuntimeEnvironment.md)
+\*\* R is loaded from the [module system](../software/runtime_environment.md)
 
 ### Creating and using your own environment
 
diff --git a/doc.zih.tu-dresden.de/docs/software/JupyterHubForTeaching.md b/doc.zih.tu-dresden.de/docs/software/jupyterhub_for_teaching.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/software/JupyterHubForTeaching.md
rename to doc.zih.tu-dresden.de/docs/software/jupyterhub_for_teaching.md
index 17350f2a6da807d076065b01cf3cd0f5f2c79a14..ef3dacca8c243374e9efc44268b6277be5ebe2f1 100644
--- a/doc.zih.tu-dresden.de/docs/software/JupyterHubForTeaching.md
+++ b/doc.zih.tu-dresden.de/docs/software/jupyterhub_for_teaching.md
@@ -13,7 +13,7 @@ the file systems or the batch system.
 accordingly.
 - Access to HPC resources is handled through projects. See your course
 as a project. Projects need to be registered beforehand (more info
-on the page [Access](./../application/Access.md)).
+on the page [Access](./../application/access.md)).
 - Don't forget to **TODO ANCHOR**(add your
 users)(ProjectManagement#manage_project_members_40dis_45_47enable_41)
 (eg. students or tutors) to your project.
diff --git a/doc.zih.tu-dresden.de/docs/software/Keras.md b/doc.zih.tu-dresden.de/docs/software/keras.md
similarity index 95%
rename from doc.zih.tu-dresden.de/docs/software/Keras.md
rename to doc.zih.tu-dresden.de/docs/software/keras.md
index 89b789d61d4345dbe3f008606cc8a72473e4819f..122c446af42cf552aeec59fd4b615955a2d5a1e0 100644
--- a/doc.zih.tu-dresden.de/docs/software/Keras.md
+++ b/doc.zih.tu-dresden.de/docs/software/keras.md
@@ -15,7 +15,7 @@ functionality, such as [eager execution](https://www.tensorflow.org/guide/keras#
 and [estimators](https://www.tensorflow.org/guide/estimator).
 
 On the machine learning nodes (machine learning partition), you can use
-the tools from [IBM Power AI](./PowerAI.md). PowerAI is an enterprise
+the tools from [IBM Power AI](./power_ai.md). PowerAI is an enterprise
 software distribution that combines popular open-source deep learning
 frameworks, efficient AI development tools (Tensorflow, Caffe, etc).
 
@@ -29,12 +29,12 @@ options:
     Keras and GPUs.
 
 **Prerequisites**: To work with Keras you, first of all, need 
-[access](./../access/Login.md) for the Taurus system, loaded
+[access](./../access/login.md) for the Taurus system, loaded
 Tensorflow module on ml partition, activated Python virtual environment.
 Basic knowledge about Python, SLURM system also required.
 
 **Aim** of this page is to introduce users on how to start working with
-Keras and TensorFlow on the [HPC-DA](./../jobs/HPCDA.md)
+Keras and TensorFlow on the [HPC-DA](./../jobs_and_resources/hpcda.md)
 system - part of the TU Dresden HPC system.
 
 There are three main options on how to work with Keras and Tensorflow on
@@ -42,12 +42,12 @@ the HPC-DA: 1. Modules; 2. JupyterNotebook; 3. Containers. One of the
 main ways is using the **TODO LINK MISSING** (Modules
 system)(RuntimeEnvironment#Module_Environments) and Python virtual
 environment. Please see the 
-[Python page](./Python.md) for the HPC-DA
+[Python page](./python.md) for the HPC-DA
 system.
 
 The information about the Jupyter notebook and the **JupyterHub** could
-be found [here](./JupyterHub.md). The use of
-Containers is described [here](./TensorFlowContainerOnHPCDA.md).
+be found [here](./jupyterhub.md). The use of
+Containers is described [here](./tensor_flow_container_on_hpcda.md).
 
 Keras contains numerous implementations of commonly used neural-network
 building blocks such as layers,
@@ -71,7 +71,7 @@ Keras (using the module system). To get started, import [tf.keras](https://www.t
 as part of your TensorFlow program setup.
 tf.keras is TensorFlow's implementation of the [Keras API
 specification](https://keras.io/). This is a modified example that we
-used for the [Tensorflow page](./TensorFlow.md).
+used for the [Tensorflow page](./tensor_flow.md).
 
 ```bash
 srun -p ml --gres=gpu:1 -n 1 --pty --mem-per-cpu=8000 bash
@@ -164,7 +164,7 @@ Generally, for machine learning purposes ml partition is used but for
 some special issues, SCS5 partition can be useful. The following sbatch
 script will automatically execute the above Python script on ml
 partition. If you have a question about the sbatch script see the
-article about [SLURM](./../jobs/BindingAndDistributionOfTasks.md). 
+article about [SLURM](./../jobs_and_resources/binding_and_distribution_of_tasks.md). 
 Keep in mind that you need to put the executable file (Keras_example) with 
 python code to the same folder as bash script or specify the path.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/Libraries.md b/doc.zih.tu-dresden.de/docs/software/libraries.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/Libraries.md
rename to doc.zih.tu-dresden.de/docs/software/libraries.md
diff --git a/doc.zih.tu-dresden.de/docs/software/MachineLearning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/software/MachineLearning.md
rename to doc.zih.tu-dresden.de/docs/software/machine_learning.md
index ab85f0adc4b35e9830a02421316a1783c78fbb93..beeb33c1be73cd8a079b1d45bbd9e1b5cd811b47 100644
--- a/doc.zih.tu-dresden.de/docs/software/MachineLearning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -1,7 +1,7 @@
 # Machine Learning
 
 On the machine learning nodes, you can use the tools from [IBM Power
-AI](PowerAI.md). 
+AI](power_ai.md). 
 
 ## Interactive Session Examples
 
diff --git a/doc.zih.tu-dresden.de/docs/software/Mathematics.md b/doc.zih.tu-dresden.de/docs/software/mathematics.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/software/Mathematics.md
rename to doc.zih.tu-dresden.de/docs/software/mathematics.md
index 45af03beb748f75abf3d79789de5622872438e02..bd127067fcff6afa7d3fce1526388ce443d94925 100644
--- a/doc.zih.tu-dresden.de/docs/software/Mathematics.md
+++ b/doc.zih.tu-dresden.de/docs/software/mathematics.md
@@ -105,7 +105,7 @@ Or use:
        module load MATLAB
 
 (then you will get the most recent Matlab version. [Refer to the modules
-section for details.](../software/RuntimeEnvironment.md#Modules))
+section for details.](../software/runtime_environment.md#Modules))
 
 ### matlab interactive
 
@@ -152,7 +152,7 @@ variable $EBROOTMATLAB as set by the module file for that.
 
 -   then run the binary via the wrapper script in a job (just a simple
     example, you should be using an [sbatch
-    script](../jobs/Slurm.md#Job_Submission) for that): \<pre>srun
+    script](../jobs_and_resources/slurm.md#Job_Submission) for that): \<pre>srun
     ./run_compiled_executable.sh $EBROOTMATLAB\</pre>
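
    Such an sbatch script might look like the following sketch; the resource values are placeholders:

```Bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH --mem-per-cpu=2500      # illustrative value

module load MATLAB
srun ./run_compiled_executable.sh $EBROOTMATLAB
```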
 
 ### matlab parallel (with 'local' configuration)
diff --git a/doc.zih.tu-dresden.de/docs/software/Modules.md b/doc.zih.tu-dresden.de/docs/software/modules.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/Modules.md
rename to doc.zih.tu-dresden.de/docs/software/modules.md
diff --git a/doc.zih.tu-dresden.de/docs/software/MPIUsageErrorDetection.md b/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/MPIUsageErrorDetection.md
rename to doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md
diff --git a/doc.zih.tu-dresden.de/docs/software/NanoscaleSimulations.md b/doc.zih.tu-dresden.de/docs/software/nanoscale_simulations.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/NanoscaleSimulations.md
rename to doc.zih.tu-dresden.de/docs/software/nanoscale_simulations.md
diff --git a/doc.zih.tu-dresden.de/docs/software/Overview.md b/doc.zih.tu-dresden.de/docs/software/overview.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/software/Overview.md
rename to doc.zih.tu-dresden.de/docs/software/overview.md
index f856f706e2126c5cbf40939020dece5967e00212..dbb75ca856f493db73454fb45bbedc4955c6d7be 100644
--- a/doc.zih.tu-dresden.de/docs/software/Overview.md
+++ b/doc.zih.tu-dresden.de/docs/software/overview.md
@@ -14,7 +14,7 @@ There are a lot of different possibilities to work with software on Taurus:
 ## Modules
 
 Usage of software on HPC systems is managed by a **modules system**. Thus, it is crucial to
-be familiar with the [modules concept and commands](Modules.md).  Modules are a way to use
+be familiar with the [modules concept and commands](modules.md).  Modules are a way to use
 frameworks, compilers, loader, libraries, and utilities. A module is a user interface that provides
 utilities for the dynamic modification of a user's environment without manual modifications. You
 could use them for `srun`, batch jobs (`sbatch`) and the Jupyterhub.
diff --git a/doc.zih.tu-dresden.de/docs/software/PapiLibrary.md b/doc.zih.tu-dresden.de/docs/software/papi_library.md
similarity index 91%
rename from doc.zih.tu-dresden.de/docs/software/PapiLibrary.md
rename to doc.zih.tu-dresden.de/docs/software/papi_library.md
index 414e08a4bf0226493e10bfeabbad620df14f59c1..c3190a32296c72f0e16646f632430d98ceeda116 100644
--- a/doc.zih.tu-dresden.de/docs/software/PapiLibrary.md
+++ b/doc.zih.tu-dresden.de/docs/software/papi_library.md
@@ -22,7 +22,7 @@ resources. (see the uncore manuals listed in top of this documentation).
 
 ## Usage
 
-[Score-P](ScoreP.md) supports per-core PMCs. To include uncore PMCs into Score-P traces use the
+[Score-P](score_p.md) supports per-core PMCs. To include uncore PMCs into Score-P traces use the
 software module **scorep-uncore/2016-03-29**on the Haswell partition. If you do so, disable
 profiling to include the uncore measurements. This metric plugin is available at
 [github](https://github.com/score-p/scorep_plugin_uncore/).
@@ -33,8 +33,8 @@ the environment variables **PAPI_INC**, **PAPI_LIB**, and **PAPI_ROOT**. Have a
 
 ## Related Software
 
-* [Score-P](ScoreP.md)
-* [Linux Perf Tools](PerfTools.md)
+* [Score-P](score_p.md)
+* [Linux Perf Tools](perf_tools.md)
 
 If you just need a short summary of your job, you might want to have a look at
-[perf stat](PerfTools.md).
+[perf stat](perf_tools.md).
diff --git a/doc.zih.tu-dresden.de/docs/software/PerfTools.md b/doc.zih.tu-dresden.de/docs/software/perf_tools.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/PerfTools.md
rename to doc.zih.tu-dresden.de/docs/software/perf_tools.md
diff --git a/doc.zih.tu-dresden.de/docs/software/PIKA.md b/doc.zih.tu-dresden.de/docs/software/pika.md
similarity index 95%
rename from doc.zih.tu-dresden.de/docs/software/PIKA.md
rename to doc.zih.tu-dresden.de/docs/software/pika.md
index 5ff0adffa4609fc1baa7e2bb34d45b86deefe8a1..8a2b9fdb31123d64d87befdc8728ec82444eb9cf 100644
--- a/doc.zih.tu-dresden.de/docs/software/PIKA.md
+++ b/doc.zih.tu-dresden.de/docs/software/pika.md
@@ -2,18 +2,19 @@
 
 Pika is a hardware performance monitoring stack to identify inefficient HPC jobs. Taurus users have
 the possibility to visualize and analyze the efficiency of their jobs via the [Pika web
-interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/zih/jobs).
+interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/zih/jobs).
 
-**Hint:** To understand this small guide, it is recommended to open the [web
-interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/zih/jobs) in a
-separate window. Furthermore, at least one real HPC job should have been submitted on Taurus. 
+**Hint:** To understand this small guide, it is recommended to open the
+[web interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/zih/jobs)
+in a separate window. Furthermore, at least one real HPC job should have been submitted on Taurus.
 
 ## Overview
 
 Pika consists of several components and tools.  It uses the collection daemon collectd, InfluxDB to
 store time-series data and MariaDB to store job metadata.  Furthermore, it provides a powerful [web
-interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/zih/jobs) for
-the visualization and analysis of job performance data.
+interface](https://selfservice.zih.tu-dresden.de/l/index.php/hpcportal/jobmonitoring/zih/jobs)
+for the visualization and analysis of job performance data.
 
 ## Table View and Job Search
 
diff --git a/doc.zih.tu-dresden.de/docs/software/PowerAI.md b/doc.zih.tu-dresden.de/docs/software/power_ai.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/PowerAI.md
rename to doc.zih.tu-dresden.de/docs/software/power_ai.md
diff --git a/doc.zih.tu-dresden.de/docs/software/PyTorch.md b/doc.zih.tu-dresden.de/docs/software/py_torch.md
similarity index 94%
rename from doc.zih.tu-dresden.de/docs/software/PyTorch.md
rename to doc.zih.tu-dresden.de/docs/software/py_torch.md
index 90018e4efd20fef71e7637c516d713fe2b69a608..080a61c83fa1ed27c40162207526b67d08603c8d 100644
--- a/doc.zih.tu-dresden.de/docs/software/PyTorch.md
+++ b/doc.zih.tu-dresden.de/docs/software/py_torch.md
@@ -8,24 +8,24 @@ PyTorch provides a core datastructure, the tensor, a multi-dimensional array tha
 similarities with Numpy arrays. 
 PyTorch also consumed Caffe2 for its backend and added support of ONNX.
 
-**Prerequisites:** To work with PyTorch you obviously need [access](../access/Login.md) for the 
+**Prerequisites:** To work with PyTorch you obviously need [access](../access/login.md) for the 
 Taurus system and basic knowledge about Python, Numpy and SLURM system.
 
 **Aim** of this page is to introduce users on how to start working with PyTorch on the 
-[HPC-DA](../use_of_hardware/Power9.md) system -  part of the TU Dresden HPC system.
+[HPC-DA](../jobs_and_resources/power9.md) system -  part of the TU Dresden HPC system.
 
 There are numerous different possibilities of how to work with PyTorch on Taurus. 
 Here we will consider two main methods.
 
-1\. The first option is using Jupyter notebook with HPC-DA nodes. The easiest way is by using [Jupyterhub](JupyterHub.md).
+1\. The first option is using Jupyter notebook with HPC-DA nodes. The easiest way is by using [Jupyterhub](jupyterhub.md).
 It is a recommended way for beginners in PyTorch 
 and users who are just starting their work with Taurus.
 
 2\. The second way is using the Modules system and Python or conda virtual environment. 
-See [the Python page](Python.md) for the HPC-DA system.
+See [the Python page](python.md) for the HPC-DA system.
 
 Note: The information on working with the PyTorch using Containers could be found
-[here](Containers.md).
+[here](containers.md).
 
 ## Get started with PyTorch
 
@@ -97,7 +97,7 @@ which you can submit using *sbatch [options] <job_file_name>*.
 Below are examples of Jupyter notebooks with PyTorch models which you can run on ml nodes of HPC-DA.
 
 There are two ways how to work with the Jupyter notebook on HPC-DA system. You can use a  
-[remote Jupyter server](DeepLearning.md) or [JupyterHub](JupyterHub.md). 
+[remote Jupyter server](deep_learning.md) or [JupyterHub](jupyterhub.md). 
 Jupyterhub is a simple and recommended way to use PyTorch.
 We are using Jupyterhub for our examples. 
 
@@ -110,15 +110,15 @@ JupyterHub is available here: [https://taurus.hrsk.tu-dresden.de/jupyter](https:
 After login, you can start a new session by clicking on the button.
 
 **Note:** Detailed guide (with pictures and instructions) how to run the Jupyterhub 
-you could find on [the page](JupyterHub.md).
+can be found on [this page](jupyterhub.md).
 
 Please choose the "IBM Power (ppc64le)". You need to download an example 
 (prepared as jupyter notebook file) that already contains all you need for the start of the work. 
 Please put the file into your previously created virtual environment in your working directory or 
-use the kernel for your notebook [see Jupyterhub page](JupyterHub.md).
+use the kernel for your notebook [see Jupyterhub page](jupyterhub.md).
 
 Note: You could work with simple examples in your home directory but according to 
-[HPCStorageConcept2019](../data_management/HPCStorageConcept2019.md) please use **workspaces** 
+the [HPC storage concept](../data_lifecycle/hpc_storage_concept2019.md), please use **workspaces**
 for your study and work projects. 
 For this reason, you have to use advanced options of Jupyterhub and put "/" in "Workspace scope" field.
 
@@ -132,7 +132,7 @@ virtual environment you could use the following command:
     unzip example_MNIST_Pytorch.zip
 
 Also, you could use kernels for all notebooks, not only for them which
-placed in your virtual environment. See the [jupyterhub](JupyterHub.md) page.
+placed in your virtual environment. See the [jupyterhub](jupyterhub.md) page.
 
 Examples:
 
@@ -147,7 +147,7 @@ for this kind of models. Recommended parameters for running this model are 1 GPU
 
 ### Running the model
 
-Open [JupyterHub](JupyterHub.md) and follow instructions above.
+Open [JupyterHub](jupyterhub.md) and follow instructions above.
 
 In Jupyterhub documents are organized with tabs and a very versatile split-screen feature. 
 On the left side of the screen, you can open your file. Use 'File-Open from Path' 
@@ -185,7 +185,7 @@ Recommended parameters for running this model are 1 GPU and 7 cores (28 thread).
 
 (example_Pytorch_image_recognition.zip)
 
-Remember that for using [JupyterHub service](JupyterHub.md) 
+Remember that for using [JupyterHub service](jupyterhub.md) 
 for PyTorch you need to create and activate 
 a virtual environment (kernel) with loaded essential modules (see "envtest" environment form the virtual
 environment example.
@@ -225,7 +225,7 @@ model are **2 GPU** and 14 cores (56 thread).
 
 (example_PyTorch_parallel.zip)
 
-Remember that for using [JupyterHub service](JupyterHub.md) 
+Remember that for using [JupyterHub service](jupyterhub.md) 
 for PyTorch you need to create and activate 
 a virtual environment (kernel) with loaded essential modules.
 
diff --git a/doc.zih.tu-dresden.de/docs/software/Python.md b/doc.zih.tu-dresden.de/docs/software/python.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/software/Python.md
rename to doc.zih.tu-dresden.de/docs/software/python.md
index 92d7070a7e5d42ed74e0613ec2dabba9321085c7..548ba169d58c6c9e6ae74c7a19d109a3ae2739d7 100644
--- a/doc.zih.tu-dresden.de/docs/software/Python.md
+++ b/doc.zih.tu-dresden.de/docs/software/python.md
@@ -6,19 +6,19 @@ effective. Taurus allows working with a lot of available packages and
 libraries which give more useful functionalities and allow use all
 features of Python and to avoid minuses.
 
-**Prerequisites:** To work with PyTorch you obviously need [access](../access/Login.md) for the 
+**Prerequisites:** To work with PyTorch you obviously need [access](../access/login.md) for the 
 Taurus system and basic knowledge about Python, Numpy and SLURM system.
 
 **Aim** of this page is to introduce users on how to start working with Python on the 
-[HPC-DA](../use_of_hardware/Power9.md) system -  part of the TU Dresden HPC system.
+[HPC-DA](../jobs_and_resources/power9.md) system -  part of the TU Dresden HPC system.
 
 There are three main options on how to
-work with Keras and Tensorflow on the HPC-DA: 1. Modules; 2. [JupyterNotebook](JupyterHub.md); 
-3.[Containers](Containers.md). The main way is using the
-[Modules system](Modules.md) and Python virtual environment.
+work with Keras and Tensorflow on the HPC-DA: 1. Modules; 2. [JupyterNotebook](jupyterhub.md); 
+3. [Containers](containers.md). The main way is using the
+[Modules system](modules.md) and Python virtual environment.
 
 Note: You could work with simple examples in your home directory but according to 
-[HPCStorageConcept2019](../data_management/HPCStorageConcept2019.md) please use **workspaces** 
+the [HPC storage concept](../data_lifecycle/hpc_storage_concept2019.md), please use **workspaces**
 for your study and work projects.
 
 ## Virtual environment
@@ -117,10 +117,10 @@ course with machine learning.
 There are two general options on how to work Jupyter notebooks using
 HPC.
 
-On Taurus, there is [JupyterHub](JupyterHub.md) where you can simply run your Jupyter notebook 
+On Taurus, there is [JupyterHub](jupyterhub.md) where you can simply run your Jupyter notebook 
 on HPC nodes. Also, you can run a remote jupyter server within a sbatch
 GPU job and with the modules and packages you need. The manual server
-setup you can find [here](DeepLearning.md).
+setup you can find [here](deep_learning.md).
 
 With Jupyterhub you can work with general
 data analytics tools. This is the recommended way to start working with
@@ -206,7 +206,7 @@ in some cases better results than pure TensorFlow and PyTorch.
 #### Horovod as a module
 
 Horovod is available as a module with **TensorFlow** or **PyTorch**for **all** module environments.
-Please check the [software module list](Modules.md) for the current version of the software.
+Please check the [software module list](modules.md) for the current version of the software.
 Horovod can be loaded like other software on the Taurus:
 
 ```Bash
diff --git a/doc.zih.tu-dresden.de/docs/software/RuntimeEnvironment.md b/doc.zih.tu-dresden.de/docs/software/runtime_environment.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/software/RuntimeEnvironment.md
rename to doc.zih.tu-dresden.de/docs/software/runtime_environment.md
index 3bb467308049f189dfb278eb19cd8e61dc4ba849..1bca8daa7cfa08f3b58b19e5608c2e333b9055f9 100644
--- a/doc.zih.tu-dresden.de/docs/software/RuntimeEnvironment.md
+++ b/doc.zih.tu-dresden.de/docs/software/runtime_environment.md
@@ -74,7 +74,7 @@ software in all modenv environments. It will also display information on
 how to load a found module when giving a precise module (with version)
 as the parameter.
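
In practice this is Lmod's `module spider` command (assuming the Lmod setup described on this page); for example:

```Bash
module spider TensorFlow          # search across all module environments
module spider TensorFlow/2.0.0    # version string is an example; shows what has to be loaded first
```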
 
-Also see the information under [SCS5 software](../software/SCS5Software.md).
+Also see the information under [SCS5 software](../software/scs5_software.md).
 
 ### Per-Architecture Builds
 
@@ -171,7 +171,7 @@ can load the modules with `module load` .
 
 ## Misc
 
-An automated [backup](../data_management/FileSystems.md#backup-and-snapshots-of-the-file-system)
+An automated [backup](../data_lifecycle/file_systems.md#backup-and-snapshots-of-the-file-system)
 system provides security for the HOME-directories on `Taurus` and `Venus` on a daily basis. This is
 the reason why we urge our users to store (large) temporary data (like checkpoint files) on the
 /scratch -Filesystem or at local scratch disks.
diff --git a/doc.zih.tu-dresden.de/docs/software/ScoreP.md b/doc.zih.tu-dresden.de/docs/software/score_p.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/ScoreP.md
rename to doc.zih.tu-dresden.de/docs/software/score_p.md
diff --git a/doc.zih.tu-dresden.de/docs/software/SCS5Software.md b/doc.zih.tu-dresden.de/docs/software/scs5_software.md
similarity index 97%
rename from doc.zih.tu-dresden.de/docs/software/SCS5Software.md
rename to doc.zih.tu-dresden.de/docs/software/scs5_software.md
index 70b86d93f7a3b73d8f10bdecb5f919f5a45e6f44..e0ec933cdafa4bbb4f296518943a7964b3a8223d 100644
--- a/doc.zih.tu-dresden.de/docs/software/SCS5Software.md
+++ b/doc.zih.tu-dresden.de/docs/software/scs5_software.md
@@ -16,12 +16,12 @@ Here are the major changes from the user's perspective:
 Due to the new operating system, the host keys of the login nodes have also changed. If you have
 logged into tauruslogin6 before and still have the old one saved in your `known_hosts` file, just
 remove it and accept the new one after comparing its fingerprint with those listed under
-[Login](../access/Login.md#ssh-access).
+[Login](../access/login.md#ssh-access).
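
For example (a sketch; the concrete login node name may differ):

```Bash
# Remove the outdated host key from your local known_hosts, then reconnect and
# accept the new key after comparing its fingerprint with the published one
ssh-keygen -R tauruslogin6.hrsk.tu-dresden.de
ssh <zih-login>@tauruslogin6.hrsk.tu-dresden.de
```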
 
 ## Using Software Modules
 
 Starting with SCS5, we only provide
-[Lmod](../software/RuntimeEnvironment.md#lmod-an-alternative-module-implementation) as the
+[Lmod](../software/runtime_environment.md#lmod-an-alternative-module-implementation) as the
 environment module tool of choice.
 
 As usual, you can get a list of the available software modules via:
diff --git a/doc.zih.tu-dresden.de/docs/software/SingularityExampleDefinitions.md b/doc.zih.tu-dresden.de/docs/software/singularity_example_definitions.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/SingularityExampleDefinitions.md
rename to doc.zih.tu-dresden.de/docs/software/singularity_example_definitions.md
diff --git a/doc.zih.tu-dresden.de/docs/software/SingularityRecipeHints.md b/doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
similarity index 100%
rename from doc.zih.tu-dresden.de/docs/software/SingularityRecipeHints.md
rename to doc.zih.tu-dresden.de/docs/software/singularity_recipe_hints.md
diff --git a/doc.zih.tu-dresden.de/docs/software/SoftwareDevelopment.md b/doc.zih.tu-dresden.de/docs/software/software_development.md
similarity index 87%
rename from doc.zih.tu-dresden.de/docs/software/SoftwareDevelopment.md
rename to doc.zih.tu-dresden.de/docs/software/software_development.md
index f7b222bf733bc911ea5cf1aef7d851c4e4af6cd8..28fd208e9504ed50f8ee7dee8790490212b7fd92 100644
--- a/doc.zih.tu-dresden.de/docs/software/SoftwareDevelopment.md
+++ b/doc.zih.tu-dresden.de/docs/software/software_development.md
@@ -36,12 +36,12 @@ Some questions you should ask yourself:
 
 Subsections:
 
-- [Compilers](Compilers.md)
+- [Compilers](compilers.md)
 - [Debugging Tools](Debugging Tools.md)
-  - [Debuggers](Debuggers.md) (GDB, Allinea DDT, Totalview)
-  - [Tools to detect MPI usage errors](MPIUsageErrorDetection.md) (MUST)
-- PerformanceTools.md: [Score-P](ScoreP.md), [Vampir](Vampir.md), [Papi Library](PapiLibrary.md)
-- [Libraries](Libraries.md)
+  - [Debuggers](debuggers.md) (GDB, Allinea DDT, Totalview)
+  - [Tools to detect MPI usage errors](mpi_usage_error_detection.md) (MUST)
+- Performance Tools: [Score-P](score_p.md), [Vampir](vampir.md), [Papi Library](papi_library.md)
+- [Libraries](libraries.md)
 
 Intel Tools Seminar \[Oct. 2013\]
 
diff --git a/doc.zih.tu-dresden.de/docs/software/TensorFlow.md b/doc.zih.tu-dresden.de/docs/software/tensor_flow.md
similarity index 95%
rename from doc.zih.tu-dresden.de/docs/software/TensorFlow.md
rename to doc.zih.tu-dresden.de/docs/software/tensor_flow.md
index 4d051c5467d749c8a31cc5232c570d443fa8ff13..1bae1ed4139b969d1956bb4b5c1725418d269540 100644
--- a/doc.zih.tu-dresden.de/docs/software/TensorFlow.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensor_flow.md
@@ -3,10 +3,10 @@
 ## Introduction
 
 This is an introduction of how to start working with TensorFlow and run
-machine learning applications on the [HPC-DA](../jobs/HPCDA.md) system of Taurus.
+machine learning applications on the [HPC-DA](../jobs_and_resources/hpcda.md) system of Taurus.
 
 \<span style="font-size: 1em;">On the machine learning nodes (machine
-learning partition), you can use the tools from [IBM PowerAI](PowerAI.md) or the other
+learning partition), you can use the tools from [IBM PowerAI](power_ai.md) or the other
 modules. PowerAI is an enterprise software distribution that combines popular open-source 
 deep learning frameworks and efficient AI development tools (TensorFlow, Caffe, etc.). The
 examples on this page use [PowerAI version 1.5.4](https://www.ibm.com/support/knowledgecenter/en/SS5SF7_1.5.4/navigation/pai_software_pkgs.html)
@@ -19,7 +19,7 @@ community resources. It is available on taurus along with other common machine
 learning packages like Pillow, SciPy, NumPy.
 
 **Prerequisites:** To work with TensorFlow on Taurus, you need
-[access](../access/Login.md) for the Taurus system and basic knowledge about Python, SLURM system.
+[access](../access/login.md) to the Taurus system and basic knowledge of Python and the SLURM system.
 
 The **aim** of this page is to show users how to start working with
 TensorFlow on the HPC-DA system -
@@ -27,13 +27,13 @@ part of the TU Dresden HPC system.
 
 There are three main options for working with TensorFlow on the
 HPC-DA: **1. Modules**, **2. Jupyter Notebook**, **3. Containers**. The best option is
-to use [module system](../software/RuntimeEnvironment.md#Module_Environments) and 
-Python virtual environment. Please see the next chapters and the [Python page](Python.md) for the
+to use the [module system](../software/runtime_environment.md#Module_Environments) and a
+Python virtual environment. Please see the next chapters and the [Python page](python.md) for the
 HPC-DA system.
 
 The information about the Jupyter notebook and the **JupyterHub** can
-be found [here](JupyterHub.md). The use of
-Containers is described [here](TensorFlowContainerOnHPCDA.md).
+be found [here](jupyterhub.md). The use of
+Containers is described [here](tensor_flow_container_on_hpcda.md).
 
 On Taurus, there exist different module environments, each containing a set 
 of software modules. The default is *modenv/scs5* which is already loaded, 
@@ -52,7 +52,7 @@ of Taurus (e.g. from modenv/scs5).
 
 Each node on the ml partition has six Tesla V100 GPUs, with 176 parallel threads
 on 44 cores per node (Simultaneous Multithreading (SMT) enabled) and 256 GB RAM.
-The specification could be found [here](../use_of_hardware/Power9.md).
+The specification can be found [here](../jobs_and_resources/power9.md).
 
 **Note:** Users should not
 reserve more than 28 threads per GPU device so that other users on
@@ -261,4 +261,4 @@ else stay with the default of modenv/scs5.
 
 Q: How do I change the module environment and find out more about modules?
 
-A: [Modules](../software/RuntimeEnvironment.md#Modules)
+A: [Modules](../software/runtime_environment.md#Modules)
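 
 For example, a minimal sketch of switching to the machine-learning module environment
 (assuming `modenv/ml` is what you need; otherwise stay with the default `modenv/scs5`):
 
 ```Bash
 module load modenv/ml   # switch from the default modenv/scs5 to the ml environment
 module avail            # list the modules available in the newly loaded environment
 ```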
diff --git a/doc.zih.tu-dresden.de/docs/software/TensorFlowContainerOnHPCDA.md b/doc.zih.tu-dresden.de/docs/software/tensor_flow_container_on_hpcda.md
similarity index 96%
rename from doc.zih.tu-dresden.de/docs/software/TensorFlowContainerOnHPCDA.md
rename to doc.zih.tu-dresden.de/docs/software/tensor_flow_container_on_hpcda.md
index 7d2c1f60a28cf70ecd6eaf15bcfb6c0c12894d57..6fb63ba25803a173a30d00e34f320b59a8f0f725 100644
--- a/doc.zih.tu-dresden.de/docs/software/TensorFlowContainerOnHPCDA.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensor_flow_container_on_hpcda.md
@@ -44,10 +44,10 @@ is a Virtual Machine (VM) on the ml partition which allows users to gain
 root permissions in an isolated environment. There are two main options
 for working with a VM on Taurus:
 
-1\. [VM tools](VMTools.md). Automative algorithms for using virtual
+1\. [VM tools](vm_tools.md). Automated algorithms for using virtual
 machines;
 
-2\. [Manual method](Cloud.md). It required more operations but gives you
+2\. [Manual method](cloud.md). It requires more operations but gives you
 more flexibility and reliability.
 
 Short algorithm to run the virtual machine manually:
diff --git a/doc.zih.tu-dresden.de/docs/software/TensorFlowOnJupyterNotebook.md b/doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md
similarity index 95%
rename from doc.zih.tu-dresden.de/docs/software/TensorFlowOnJupyterNotebook.md
rename to doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md
index 778f1006c6b254aee53e5496c585ecc4b3ec359b..9ed4195af6281224c2e1cd979b1092b6b06966c7 100644
--- a/doc.zih.tu-dresden.de/docs/software/TensorFlowOnJupyterNotebook.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensor_flow_on_jupyter_notebook.md
@@ -19,7 +19,7 @@ with HPC or Linux. \</span>
 basic knowledge of Python, the SLURM system, and the Jupyter notebook.
 
 **This page aims** to show users
-how to start working with TensorFlow on the [HPCDA](../jobs/HPCDA.md) system - part
+how to start working with TensorFlow on the [HPCDA](../jobs_and_resources/hpcda.md) system - part
 of the TU Dresden HPC system with a graphical interface.
 
 ## Get started with Jupyter notebook
@@ -38,7 +38,7 @@ work Jupyter notebooks using HPC. \</span>
     available [here](https://taurus.hrsk.tu-dresden.de/jupyter)
 -   For more specific cases you can run a manually created **remote
     Jupyter server**. You can find the
-    manual server setup [here](DeepLearning.md).
+    manual server setup [here](deep_learning.md).
 
 Keep in mind that with JupyterHub you
 cannot use some specialized tools. However, general data analytics
@@ -65,7 +65,7 @@ for a particular version of Python, plus several additional packages. At
 its core, the main purpose of Python virtual environments is to create
 an isolated environment for Python projects. A Python virtual environment is
 the main method for working with deep learning software such as TensorFlow on the 
-[HPCDA](../jobs/HPCDA.md) system.
+[HPCDA](../jobs_and_resources/hpcda.md) system.
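 
 A generic, minimal sketch of creating and activating such an environment (the path is
 illustrative; the Taurus-specific workflow with workspaces and modules is described below):
 
 ```Bash
 python3 -m venv ~/tf-env       # create an isolated Python environment (illustrative path)
 source ~/tf-env/bin/activate   # activate it; packages now install into ~/tf-env only
 pip install --upgrade pip      # keep pip current inside the environment
 ```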
 
 ### Conda and Virtualenv
 
@@ -145,12 +145,12 @@ with jupyterhub and tensorflow models. It can be useful and instructive
 to get acquainted with TensorFlow and the HPC-DA system through these
 simple examples.
 
-You can use a [remote Jupyter server](JupyterHub.md). For simplicity, we
+You can use a [remote Jupyter server](jupyterhub.md). For simplicity, we
 recommend using JupyterHub for our examples.
 
 JupyterHub is available [here](https://taurus.hrsk.tu-dresden.de/jupyter)
 
-Please check updates and details [JupyterHub](JupyterHub.md). However, 
+Please check the [JupyterHub](jupyterhub.md) page for updates and details. However, 
 the general pipeline can be briefly explained as follows.
 
 After logging in, you can start a new session and configure it. There are
@@ -168,8 +168,8 @@ into your previously created virtual environment in your working
 directory or use the kernel for your notebook.
 
 Note: You can work with simple examples in your home directory, but according to the
-[new storage concept](../data_management/HPCStorageConcept2019.md) please use 
-[workspaces](../data_management/Workspaces.md) for your study and work projects**. 
+[new storage concept](../data_lifecycle/hpc_storage_concept2019.md) please use 
+[workspaces](../data_lifecycle/workspaces.md) for your study and work projects. 
 For this reason, you have to use the advanced options and put "/" in the "Workspace scope" field.
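 
 As a hedged sketch, a workspace for such projects could be allocated like this (workspace
 name, filesystem, and duration are illustrative; see the workspaces page for the exact options):
 
 ```Bash
 ws_allocate -F scratch tf_examples 30   # allocate a workspace on scratch for 30 days
 ws_list                                 # list your workspaces and their expiration dates
 ```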
 
 To download the first example (from the list below) into your previously
@@ -184,7 +184,7 @@ created virtual environment you could use the following command:
 ```
 
 Also, you can use kernels for all notebooks, not only for those placed 
-in your virtual environment. See the [jupyterhub](JupyterHub.md) page.
+in your virtual environment. See the [JupyterHub](jupyterhub.md) page.
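 
 For instance, a minimal sketch of registering a virtual environment as a named Jupyter kernel
 so it can be selected from any notebook (environment path and kernel name are illustrative):
 
 ```Bash
 source ~/tf-env/bin/activate   # activate the environment you want to expose
 pip install ipykernel          # provides the kernel registration command
 python -m ipykernel install --user --name tf-env --display-name "Python (tf-env)"
 ```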
 
 ### Examples:
 
diff --git a/doc.zih.tu-dresden.de/docs/software/Vampir.md b/doc.zih.tu-dresden.de/docs/software/vampir.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/software/Vampir.md
rename to doc.zih.tu-dresden.de/docs/software/vampir.md
index 8e2de70bcfbea2b576fc87459e65d551a9fa6f00..464f29bb14ce5c775938bdbd0023767d72765287 100644
--- a/doc.zih.tu-dresden.de/docs/software/Vampir.md
+++ b/doc.zih.tu-dresden.de/docs/software/vampir.md
@@ -11,7 +11,7 @@ explanation of various performance bottlenecks such as load imbalances and commu
 deficiencies. [Follow this link for further
 information](http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih/forschung/projekte/vampir).
 
-A growing number of performance monitoring environments like [VampirTrace](../archive/VampirTrace.md),
+A growing number of performance monitoring environments like [VampirTrace](../archive/vampir_trace.md),
 Score-P, TAU or KOJAK can produce trace files that are readable by Vampir. The tool supports trace
 files in Open Trace Format (OTF, OTF2) that is developed by ZIH and its partners and is especially
 designed for massively parallel programs.
diff --git a/doc.zih.tu-dresden.de/docs/software/VirtualDesktops.md b/doc.zih.tu-dresden.de/docs/software/virtual_desktops.md
similarity index 99%
rename from doc.zih.tu-dresden.de/docs/software/VirtualDesktops.md
rename to doc.zih.tu-dresden.de/docs/software/virtual_desktops.md
index 24b5b3837cba6ca1c0aa5d3fd15cb4dc48e5cd61..bc5db15748e3dcfdbdbc8afe858a1e6f1be9c390 100644
--- a/doc.zih.tu-dresden.de/docs/software/VirtualDesktops.md
+++ b/doc.zih.tu-dresden.de/docs/software/virtual_desktops.md
@@ -15,7 +15,7 @@ Use WebVNC or NICE DCV to run GUI applications on HPC resources.
 <span class="twiki-macro TABLE" columnwidths="10%,45%,45%"></span> \|
 **step 1** \| Navigate to \<a href="<https://taurus.hrsk.tu-dresden.de>"
 target="\_blank"><https://taurus.hrsk.tu-dresden.de>\</a>. There is our
-[JupyterHub](../software/JupyterHub.md) instance. \|\| \| **step 2** \|
+[JupyterHub](../software/jupyterhub.md) instance. \|\| \| **step 2** \|
 Click on the "advanced" tab and choose a preset: \|\|
 
 |             |                                                                                                                                                                             |                                                             |
diff --git a/doc.zih.tu-dresden.de/docs/software/Visualization.md b/doc.zih.tu-dresden.de/docs/software/visualization.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/software/Visualization.md
rename to doc.zih.tu-dresden.de/docs/software/visualization.md
index 79b3bdef27121a47cd6110277f8643aec0237d20..b01739eec80bc9f11f9eefe07bbd2556a15651ea 100644
--- a/doc.zih.tu-dresden.de/docs/software/Visualization.md
+++ b/doc.zih.tu-dresden.de/docs/software/visualization.md
@@ -4,7 +4,7 @@
 
 [ParaView](https://paraview.org) is an open-source, multi-platform data
 analysis and visualization application. It is available on Taurus under
-the `ParaView` [modules](Modules.md#modules-environment)
+the `ParaView` [modules](modules.md#modules-environment)
 
 ```Bash
 taurus$ module avail ParaView
@@ -119,7 +119,7 @@ There are different ways of using ParaView on the cluster:
 
 This option provides hardware-accelerated OpenGL and might give the best performance and smoothest
 handling. First, you need to open a DCV session, so please follow the instructions under
-[virtual desktops](VirtualDesktops.md). Start a terminal (right-click on desktop -> Terminal) in your
+[virtual desktops](virtual_desktops.md). Start a terminal (right-click on desktop -> Terminal) in your
 virtual desktop session, then load the ParaView module as usual and start the GUI:
 
 ```Bash
diff --git a/doc.zih.tu-dresden.de/docs/software/VMTools.md b/doc.zih.tu-dresden.de/docs/software/vm_tools.md
similarity index 98%
rename from doc.zih.tu-dresden.de/docs/software/VMTools.md
rename to doc.zih.tu-dresden.de/docs/software/vm_tools.md
index 884926b697f0cf72920874047ef112c0373f4c80..751821b6b9160f64d59daa2ac703237a2b68f92e 100644
--- a/doc.zih.tu-dresden.de/docs/software/VMTools.md
+++ b/doc.zih.tu-dresden.de/docs/software/vm_tools.md
@@ -1,7 +1,7 @@
 # Singularity on Power9 / ml partition
 
 Building Singularity containers from a recipe on Taurus is normally not possible due to the
-requirement of root (administrator) rights, see [Containers](Containers.md). For obvious reasons
+requirement of root (administrator) rights, see [Containers](containers.md). For obvious reasons
 users on Taurus cannot be granted root permissions.
 
 The solution is to build your container on your local Linux machine by executing something like
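 
 A minimal sketch, assuming Singularity is installed locally and your recipe is called
 `myimage.def` (illustrative name):
 
 ```Bash
 sudo singularity build myimage.sif myimage.def   # build the image from the recipe; needs root locally
 ```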
@@ -17,7 +17,7 @@ likely doesn't.
 
 For this we provide a Virtual Machine (VM) on the ml partition which allows users to gain root
 permissions in an isolated environment. The workflow to use this manually is described on
-[another page](Cloud.md) but is quite cumbersome.
+[another page](cloud.md) but is quite cumbersome.
 
 To make this easier, two programs are provided: `buildSingularityImage` and `startInVM`, which do what
 they say. The latter is for more advanced use cases so you should be fine using
diff --git a/doc.zih.tu-dresden.de/docs/use_of_hardware/.gitkeep b/doc.zih.tu-dresden.de/docs/use_of_hardware/.gitkeep
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index cacb32caba8bb1d1da3a6333289f63d494c2f128..cf62658a7ce3dddbc7731805670683ada89bffe4 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -1,103 +1,103 @@
 nav:
   - Home: index.md
   - Application for Login and Resources:
-    - Overview: application/Application.md
-    - Access: application/Access.md
-    - Terms: application/TermsOfUse.md
-    - Request for Resources: application/RequestForResources.md
-    - Project Request Form: application/ProjectRequestForm.md
-    - Project Management: application/ProjectManagement.md
+    - Overview: application/application.md
+    - Access: application/access.md
+    - Terms: application/terms_of_use.md
+    - Request for Resources: application/request_for_resources.md
+    - Project Request Form: application/project_request_form.md
+    - Project Management: application/project_management.md
   - Access to the Cluster:
-    - Overview: access.md
-    - Desktop Visualization: access/DesktopCloudVisualization.md
-    - Web VNC: access/WebVNC.md
-    - Login: access/Login.md
-    - Security Restrictions: access/SecurityRestrictions.md
-    - SSH with Putty: access/SSHMitPutty.md
+    - Overview: access/access.md
+    - Desktop Visualization: access/desktop_cloud_visualization.md
+    - Web VNC: access/web_vnc.md
+    - Login: access/login.md
+    - Security Restrictions: access/security_restrictions.md
+    - SSH with PuTTY: access/ssh_mit_putty.md
   - Transfer of Data:
-    - Overview: data_moving/data_moving.md
-    - Data Mover: data_moving/DataMover.md
-    - Export Nodes: data_moving/ExportNodes.md
+    - Overview: data_transfer/data_moving.md
+    - Data Mover: data_transfer/data_mover.md
+    - Export Nodes: data_transfer/export_nodes.md
   - Environment and Software:
-    - Overview: software/Overview.md
+    - Overview: software/overview.md
     - Environment:
-      - Modules: software/Modules.md
-      - Runtime Environment: software/RuntimeEnvironment.md
-      - Custom EasyBuild Modules: software/CustomEasyBuildEnvironment.md
+      - Modules: software/modules.md
+      - Runtime Environment: software/runtime_environment.md
+      - Custom EasyBuild Modules: software/custom_easy_build_environment.md
     - JupyterHub:
-      - Overview: software/JupyterHub.md
-      - JupyterHub for Teaching: software/JupyterHubForTeaching.md
+      - Overview: software/jupyterhub.md
+      - JupyterHub for Teaching: software/jupyterhub_for_teaching.md
     - Containers:
-      - Singularity: software/Containers.md
-      - Singularity Recicpe Hints: software/SingularityRecipeHints.md
-      - Singularity Example Definitions: software/SingularityExampleDefinitions.md
-      - VM tools: software/VMTools.md
+      - Singularity: software/containers.md
+      - Singularity Recipe Hints: software/singularity_recipe_hints.md
+      - Singularity Example Definitions: software/singularity_example_definitions.md
+      - VM tools: software/vm_tools.md
     - Applications:
-      - Overview: software/Applications.md
-      - Bio Informatics: software/Bioinformatics.md
-      - Computational Fluid Dynamics (CFD): software/CFD.md
-      - Nanoscale Simulations: software/NanoscaleSimulations.md
-      - FEM Software: software/FEMSoftware.md
-    - Visualization: software/Visualization.md
+      - Overview: software/applications.md
+      - Bioinformatics: software/bioinformatics.md
+      - Computational Fluid Dynamics (CFD): software/cfd.md
+      - Nanoscale Simulations: software/nanoscale_simulations.md
+      - FEM Software: software/fem_software.md
+    - Visualization: software/visualization.md
     - HPC-DA:
-      - Get started with HPC-DA: software/GetStartedWithHPCDA.md
-      - Machine Learning: software/MachineLearning.md
-      - Deep Learning: software/DeepLearning.md
-      - Data Analytics with R: software/DataAnalyticsWithR.md
-      - Data Analytics with Python: software/Python.md
+      - Get started with HPC-DA: software/get_started_with_hpcda.md
+      - Machine Learning: software/machine_learning.md
+      - Deep Learning: software/deep_learning.md
+      - Data Analytics with R: software/data_analytics_with_r.md
+      - Data Analytics with Python: software/python.md
       - TensorFlow: 
-        - TensorFlow Overview: software/TensorFlow.md
-        - TensorFlow in Container: software/TensorFlowContainerOnHPCDA.md
-        - TensorFlow in JupyterHub: software/TensorFlowOnJupyterNotebook.md 
-      - Keras: software/Keras.md
-      - Dask: software/Dask.md
-      - Power AI: software/PowerAI.md
-      - PyTorch: software/PyTorch.md
-    - SCS5 Migration Hints: software/SCS5Software.md
-    - Cloud: software/Cloud.md
-    - Virtual Desktops: software/VirtualDesktops.md
+        - TensorFlow Overview: software/tensor_flow.md
+        - TensorFlow in Container: software/tensor_flow_container_on_hpcda.md
+        - TensorFlow in JupyterHub: software/tensor_flow_on_jupyter_notebook.md 
+      - Keras: software/keras.md
+      - Dask: software/dask.md
+      - Power AI: software/power_ai.md
+      - PyTorch: software/py_torch.md
+    - SCS5 Migration Hints: software/scs5_software.md
+    - Cloud: software/cloud.md
+    - Virtual Desktops: software/virtual_desktops.md
     - Software Development and Tools:
-      - Overview: software/SoftwareDevelopment.md
-      - Building Software: software/BuildingSoftware.md
-      - GPU Programming: software/GPUProgramming.md
-      - Compilers: software/Compilers.md
-      - Debuggers: software/Debuggers.md
-      - Libraries: software/Libraries.md
-      - MPI Error Detection: software/MPIUsageErrorDetection.md
-      - Score-P: software/ScoreP.md
-      - PAPI Library: software/PapiLibrary.md 
-      - Perf Tools: software/PerfTools.md 
-      - PIKA: software/PIKA.md
-      - Vampir: software/Vampir.md
-      - Mathematics: software/Mathematics.md
+      - Overview: software/software_development.md
+      - Building Software: software/building_software.md
+      - GPU Programming: software/gpu_programming.md
+      - Compilers: software/compilers.md
+      - Debuggers: software/debuggers.md
+      - Libraries: software/libraries.md
+      - MPI Error Detection: software/mpi_usage_error_detection.md
+      - Score-P: software/score_p.md
+      - PAPI Library: software/papi_library.md 
+      - Perf Tools: software/perf_tools.md 
+      - PIKA: software/pika.md
+      - Vampir: software/vampir.md
+      - Mathematics: software/mathematics.md
   - Data Lifecycle Management:
-    - Overview: data_management/DataManagement.md
-    - Announcement of Quotas: data_management/AnnouncementOfQuotas.md
-    - Workspaces: data_management/Workspaces.md
-    - BeeGFS: data_management/BeeGFS.md
-    - Intermediate Archive: data_management/IntermediateArchive.md
-    - Filesystems: data_management/FileSystems.md
-    - Warm Archive: data_management/WarmArchive.md
-    - HPC Storage Concept 2019: data_management/HPCStorageConcept2019.md
-    - Preservation of Research Data: data_management/PreservationResearchData.md
-    - Structuring Experiments: experiments.md
+    - Overview: data_lifecycle/data_management.md
+    - Quotas: data_lifecycle/quotas.md
+    - Workspaces: data_lifecycle/workspaces.md
+    - BeeGFS: data_lifecycle/bee_gfs.md
+    - Intermediate Archive: data_lifecycle/intermediate_archive.md
+    - Filesystems: data_lifecycle/file_systems.md
+    - Warm Archive: data_lifecycle/warm_archive.md
+    - HPC Storage Concept 2019: data_lifecycle/hpc_storage_concept2019.md
+    - Preservation of Research Data: data_lifecycle/preservation_research_data.md
+    - Structuring Experiments: data_lifecycle/experiments.md
   - Jobs and Resources:
-    - Overview: use_of_hardware.md
-    - Batch Systems: use_of_hardware/BatchSystems.md
+    - Overview: jobs_and_resources/use_of_hardware.md
+    - Batch Systems: jobs_and_resources/batch_systems.md
     - Hardware Resources:
-      - Hardware Taurus: use_of_hardware/HardwareTaurus.md
-      - AMD Rome Nodes: use_of_hardware/RomeNodes.md
-      - IBM Power9 Nodes: use_of_hardware/Power9.md
-      - NVMe Storage: use_of_hardware/NvmeStorage.md
-      - Alpha Centauri: use_of_hardware/AlphaCentauri.md
-      - HPE Superdome Flex: use_of_hardware/SDFlex.md
-    - Checkpoint/Restart: use_of_hardware/CheckpointRestart.md
-    - Overview2: jobs/index.md
-    - Taurus: jobs/SystemTaurus.md
-    - Slurm Examples: jobs/SlurmExamples.md
-    - Slurm: jobs/Slurm.md
-    - HPC-DA: jobs/HPCDA.md
-    - Binding And Distribution Of Tasks: jobs/BindingAndDistributionOfTasks.md
+      - Hardware Taurus: jobs_and_resources/hardware_taurus.md
+      - AMD Rome Nodes: jobs_and_resources/rome_nodes.md
+      - IBM Power9 Nodes: jobs_and_resources/power9.md
+      - NVMe Storage: jobs_and_resources/nvme_storage.md
+      - Alpha Centauri: jobs_and_resources/alpha_centauri.md
+      - HPE Superdome Flex: jobs_and_resources/sd_flex.md
+    - Checkpoint/Restart: jobs_and_resources/checkpoint_restart.md
+    - Overview2: jobs_and_resources/index.md
+    - Taurus: jobs_and_resources/system_taurus.md
+    - Slurm Examples: jobs_and_resources/slurm_examples.md
+    - Slurm: jobs_and_resources/slurm.md
+    - HPC-DA: jobs_and_resources/hpcda.md
+    - Binding And Distribution Of Tasks: jobs_and_resources/binding_and_distribution_of_tasks.md
       #    - Queue Policy: jobs/policy.md
       #    - Examples: jobs/examples/index.md
       #    - Affinity: jobs/affinity/index.md
@@ -109,33 +109,33 @@ nav:
   #- Tests: tests.md
   - Support: support.md
   - Archive:
-    - CXFS End of Support: archive/CXFSEndOfSupport.md
-    - Debugging Tools: archive/DebuggingTools.md
-    - Hardware: archive/Hardware.md
-    - Hardware Altix: archive/HardwareAltix.md
-    - Hardware Atlas: archive/HardwareAtlas.md
-    - Hardware Deimos: archive/HardwareDeimos.md
-    - Hardware Phobos: archive/HardwarePhobos.md
-    - Hardware Titan: archive/HardwareTitan.md
-    - Hardware Triton: archive/HardwareTriton.md
-    - Hardware Venus: archive/HardwareVenus.md
-    - Introduction: archive/Introduction.md
-    - KNL Nodes: archive/KnlNodes.md
-    - Load Leveler: archive/LoadLeveler.md
-    - Migrate to Atlas: archive/MigrateToAtlas.md
-    - No IB Jobs: archive/NoIBJobs.md
-    - Phase2 Migration: archive/Phase2Migration.md
-    - Platform LSF: archive/PlatformLSF.md
-    - RamDisk Documentation: archive/RamDiskDocumentation.md
-    - Step by Step Taurus: archive/StepByStepTaurus.md
-    - System Altix: archive/SystemAltix.md
-    - System Atlas: archive/SystemAtlas.md
-    - System Venus: archive/SystemVenus.md
-    - Taurus II: archive/TaurusII.md
-    - UNICORE Rest API: archive/UNICORERestAPI.md
-    - Vampir Trace: archive/VampirTrace.md
-    - Venus Open: archive/VenusOpen.md
-    - Windows Batchjobs: jobs/WindowsBatch.md
+    - CXFS End of Support: archive/cxfs_end_of_support.md
+    - Debugging Tools: archive/debugging_tools.md
+    - Hardware: archive/hardware.md
+    - Hardware Altix: archive/hardware_altix.md
+    - Hardware Atlas: archive/hardware_atlas.md
+    - Hardware Deimos: archive/hardware_deimos.md
+    - Hardware Phobos: archive/hardware_phobos.md
+    - Hardware Titan: archive/hardware_titan.md
+    - Hardware Triton: archive/hardware_triton.md
+    - Hardware Venus: archive/hardware_venus.md
+    - Introduction: archive/introduction.md
+    - KNL Nodes: archive/knl_nodes.md
+    - Load Leveler: archive/load_leveler.md
+    - Migrate to Atlas: archive/migrate_to_atlas.md
+    - No IB Jobs: archive/no_ib_jobs.md
+    - Phase2 Migration: archive/phase2_migration.md
+    - Platform LSF: archive/platform_lsf.md
+    - RamDisk Documentation: archive/ram_disk_documentation.md
+    - Step by Step Taurus: archive/step_by_step_taurus.md
+    - System Altix: archive/system_altix.md
+    - System Atlas: archive/system_atlas.md
+    - System Venus: archive/system_venus.md
+    - Taurus II: archive/taurus_ii.md
+    - UNICORE Rest API: archive/unicore_rest_api.md
+    - Vampir Trace: archive/vampir_trace.md
+    - Venus Open: archive/venus_open.md
+    - Windows Batchjobs: jobs/windows_batch.md
 
 
 # Project Information