diff --git a/doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md b/doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md
index 7e163a400d67d6025996229bb20d3d547beffc88..7912181e49c8c113b601c419d64cd859c4163b69 100644
--- a/doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md
+++ b/doc.zih.tu-dresden.de/docs/archive/HardwareAltix.md
@@ -1,5 +1,3 @@
-
-
 # HPC Component SGI Altix
 
 The SGI Altix 4700 is a shared memory system with dual core Intel
@@ -7,34 +5,36 @@ Itanium 2 CPUs (Montecito) operated by the Linux operating system SuSE
 SLES 10 with a 2.6 kernel. Currently, the following Altix partitions are
 installed at ZIH:
 
-\|\*Name \*\|\*Total Cores \*\|**Compute Cores**\|**Memory per Core**\|
-\| Mars \|384 \|348 \|1 GB\| \|Jupiter \|512 \|506 \|4 GB\| \|Saturn
-\|512 \|506 \|4 GB\| \|Uranus \|512 \|506 \|4 GB\| \|Neptun \|128 \|128
-\|1 GB\|
+| Name    | Total Cores | Compute Cores | Memory per Core |
+|:--------|:------------|:--------------|:----------------|
+| Mars    | 384         | 348           | 1 GB            |
+| Jupiter | 512         | 506           | 4 GB            |
+| Saturn  | 512         | 506           | 4 GB            |
+| Uranus  | 512         | 506           | 4 GB            |
+| Neptun  | 128         | 128           | 1 GB            |
 
-\<P> The jobs for these partitions (except \<TT>Neptun\</TT>) are
-scheduled by the [Platform LSF](Platform LSF) batch system running on
-`mars.hrsk.tu-dresden.de`. The actual placement of a submitted job may
+The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF](PlatformLSF.md)
+batch system running on `mars.hrsk.tu-dresden.de`. The actual placement of a submitted job may
 depend on factors like memory size, number of processors, time limit.
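+
+For illustration, a minimal LSF request that makes these factors explicit might look like the
+following sketch (the core count, time and memory limits, and `my_program` are placeholders, not
+site defaults):
+
+```Bash
+# Hypothetical request: 16 cores, 90 minutes wall time, a per-process memory limit
+# (the unit of -M depends on the LSF configuration, often KB), output written to lsf.out.
+bsub -n 16 -W 1:30 -M 900000 -o lsf.out ./my_program
+```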
 
-## Filesystems All partitions share the same CXFS filesystems `/work` and `/fastfs`. ... [more information](FileSystems)
+## Filesystems
+
+All partitions share the same CXFS filesystems `/work` and `/fastfs`.
 
 ## ccNuma Architecture
 
-The SGI Altix has a ccNUMA architecture, which stands for Cache Coherent
-Non-Uniform Memory Access. It can be considered as a SM-MIMD (*shared
-memory - multiple instruction multiple data*) machine. The SGI ccNuma
-system has the following properties:
+The SGI Altix has a ccNUMA architecture, which stands for Cache Coherent Non-Uniform Memory Access.
+It can be considered an SM-MIMD (*shared memory - multiple instruction multiple data*) machine.
+The SGI ccNUMA system has the following properties:
 
--   Memory is physically distributed but logically shared
--   Memory is kept coherent automatically by hardware.
--   Coherent memory: memory is always valid (caches hold copies)
--   Granularity is L3 cacheline (128 B)
--   Bandwidth of NumaLink4 is 6.4 GB/s
+- Memory is physically distributed but logically shared
+- Memory is kept coherent automatically by hardware
+- Coherent memory: memory is always valid (caches hold copies)
+- Granularity is the L3 cacheline (128 B)
+- Bandwidth of NumaLink4 is 6.4 GB/s
 
-The ccNuma is a compromise between a distributed memory system and a
-flat symmetric multi processing machine (SMP). Altough the memory is
-shared, the access properties are not the same.
+The ccNUMA design is a compromise between a distributed memory system and a flat symmetric
+multiprocessing machine (SMP). Although the memory is shared, the access properties are not the same.
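+
+To see how the operating system exposes this topology, the generic Linux tool `numactl` can be
+used; its availability on the Altix nodes is an assumption here, and `./my_program` is a
+placeholder:
+
+```Bash
+# List the NUMA nodes with their CPUs and local memory sizes as seen by Linux.
+numactl --hardware
+# Start a program with its memory allocated on the node it runs on (local access is faster).
+numactl --localalloc ./my_program
+```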
 
 ## Compute Module
 
@@ -79,13 +79,9 @@ properties:
 | L3 cache                            | 9 MB, 12 clock latency     |
 | front side bus                      | 128 bit x 200 MHz          |
 
-The theoretical peak performance of all Altix partitions is hence about
-13.1 TFLOPS.
-
-The processor has hardware support for efficient software pipelining.
-For many scientific applications it provides a high sustained
-performance exceeding the performance of RISC CPUs with similar peak
-performance. On the down side is the fact that the compiler has to
-explicitely discover and exploit the parallelism in the application.
+The theoretical peak performance of all Altix partitions is hence about 13.1 TFLOPS.
 
-<span class="twiki-macro COMMENT"></span>
+The processor has hardware support for efficient software pipelining. For many scientific
+applications it provides a high sustained performance exceeding the performance of RISC CPUs with
+similar peak performance. On the downside, the compiler has to explicitly discover and exploit the
+parallelism in the application.
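+
+As a rough sketch, a high optimization level lets the Intel compiler look for software pipelining
+opportunities; the file name below is a placeholder and the report flag varies between compiler
+versions, so treat it as an assumption:
+
+```Bash
+# Let the compiler search for instruction-level parallelism and software pipelining.
+icc -O3 -o my_program my_program.c
+# Older Intel compilers can report which loops were optimized; the exact flag depends on the version.
+icc -O3 -opt-report -o my_program my_program.c
+```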
diff --git a/doc.zih.tu-dresden.de/docs/archive/SystemAltix.md b/doc.zih.tu-dresden.de/docs/archive/SystemAltix.md
index 6f26ccc8ce43edbe1d3ad1e7c6e0402ea2db7c99..504d983cf142662f2c615775d91a29ddf15b9bf5 100644
--- a/doc.zih.tu-dresden.de/docs/archive/SystemAltix.md
+++ b/doc.zih.tu-dresden.de/docs/archive/SystemAltix.md
@@ -1,29 +1,25 @@
 # SGI Altix
 
-**`%RED%This page is deprecated! The SGI Atlix is a former system! [[Compendium.Hardware][(Current hardware)]]%ENDCOLOR%`**
+**This page is deprecated! The SGI Altix is a former system!**
 
-The SGI Altix is shared memory system for large parallel jobs using up
-to 2000 cores in parallel ( [information on the
-hardware](HardwareAltix)). It's partitions are Mars (login), Jupiter,
-Saturn, Uranus, and Neptun (interactive).
+The SGI Altix is a shared memory system for large parallel jobs using up to 2000 cores in parallel
+([information on the hardware](HardwareAltix.md)). Its partitions are Mars (login), Jupiter, Saturn,
+Uranus, and Neptun (interactive).
 
 ## Compiling Parallel Applications
 
-This installation of the Message Passing Interface supports the MPI 1.2
-standard with a few MPI-2 features (see `man mpi` ). There is no command
-like `mpicc`, instead you just have to use the normal compiler (e.g.
-`icc`, `icpc`, or `ifort`) and append `-lmpi` to the linker command
-line. Since the include files as well as the library are in standard
-directories there is no need to append additional library- or
-include-paths.
+This installation of the Message Passing Interface supports the MPI 1.2 standard with a few MPI-2
+features (see `man mpi`). There is no command like `mpicc`, instead you just have to use the normal
+compiler (e.g. `icc`, `icpc`, or `ifort`) and append `-lmpi` to the linker command line. Since the
+include files as well as the library are in standard directories, there is no need to append
+additional library or include paths.
 
--   Note for C++ programmers: You need to link with
-    `-lmpi++abi1002 -lmpi` instead of `-lmpi`.
--   Note for Fortran programmers: The MPI module is only provided for
-    the Intel compiler and does not work with gfortran.
+- Note for C++ programmers: You need to link with `-lmpi++abi1002 -lmpi` instead of `-lmpi`.
+- Note for Fortran programmers: The MPI module is only provided for the Intel compiler and does not
+  work with gfortran.
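+
+For example, compiling and linking against this MPI installation might look like the following
+sketch (file and program names are placeholders; the C++ library name is taken from the note
+above):
+
+```Bash
+# C: use the normal compiler and only append the MPI library at link time.
+icc -o my_mpi_prog my_mpi_prog.c -lmpi
+# C++: link the additional C++ interface library as noted above.
+icpc -o my_mpi_prog my_mpi_prog.cpp -lmpi++abi1002 -lmpi
+# Fortran: the MPI module is provided for the Intel compiler only.
+ifort -o my_mpi_prog my_mpi_prog.f90 -lmpi
+```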
 
-Please follow these following guidelines to run your parallel program
-using the batch system on Mars.
+Please follow these guidelines to run your parallel program using the batch system on Mars.
 
 ## Batch system
 
@@ -42,24 +38,27 @@ user's job. Normally a job can be submitted with these data:
 
 ### LSF
 
-The batch sytem on Atlas is LSF. For general information on LSF, please
-follow [this link](PlatformLSF).
+The batch system on the Altix is LSF. For general information on LSF, please follow
+[this link](PlatformLSF.md).
 
 ### Submission of Parallel Jobs
 
-The MPI library running on the Altix is provided by SGI and highly
-optimized for the ccNUMA architecture of this machine. However,
-communication within a partition is faster than across partitions. Take
-this into consideration when you submit your job.
+The MPI library running on the Altix is provided by SGI and highly optimized for the ccNUMA
+architecture of this machine. However, communication within a partition is faster than across
+partitions. Take this into consideration when you submit your job.
 
 Single-partition jobs can be started like this:
 
-    <span class='WYSIWYG_HIDDENWHITESPACE'>&nbsp;</span>bsub -R "span[hosts=1]" -n 16 mpirun -np 16 a.out<span class='WYSIWYG_HIDDENWHITESPACE'>&nbsp;</span>
+```Bash
+bsub -R "span[hosts=1]" -n 16 mpirun -np 16 a.out
+```
 
 Really large jobs with over 256 CPUs might run over multiple partitions.
 Cross-partition jobs can be submitted via PAM like this
 
-    <span class='WYSIWYG_HIDDENWHITESPACE'>&nbsp;</span>bsub -n 1024 pamrun a.out<span class='WYSIWYG_HIDDENWHITESPACE'>&nbsp;</span>
+```Bash
+bsub -n 1024 pamrun a.out
+```
 
 ### Batch Queues
 
@@ -70,5 +69,3 @@ Cross-partition jobs can be submitted via PAM like this
 | `intermediate` | `all`            | `min. 64, max. 255` | `12h`           | `120h`       |
 | `large`        | `all`            | `min.256, max.1866` | `12h`           | `24h`        |
 | `ilr`          | `selected users` | `min. 1, max. 768`  | `12h`           | `24h`        |
-
--- Main.UlfMarkwardt - 2013-02-27