Commit dc0326ee authored by Martin Schroschk

Fix spelling; use markdown mode of aspell; add words to word list

parent 4520bda0
3 merge requests: !322 Merge preview into main, !319 Merge preview into main, !262 Restructure pages regarding switched off systems
@@ -9,10 +9,10 @@ type="frame" align="right" caption="picture 1: login screen" width="170"
 zoom="on
 ">%ATTACHURL%/request_step1_b.png</span>
-The first step is asking for the personal informations of the requester.
+The first step is asking for the personal information of the requester.
 **That's you**, not the leader of this project! \<br />If you have an
 ZIH-Login, you can use it \<sup>\[Pic 1\]\</sup>. If not, you have to
-fill in the whole informations \<sup>\[Pic.:2\]\</sup>. <span
+fill in the whole information \<sup>\[Pic.:2\]\</sup>. <span
 class="twiki-macro IMAGE">clear</span>
 ## second step (project details)
@@ -27,8 +27,8 @@ general project Details.\<br />Any project have:
 - Projects starts at the first of a month and ends on the last day
   of a month. So you are not able to send on the second of a month
   a project request which start in this month.
-- The approval is for a maximum of one year. Be carfull: a
-  duratoin from "May, 2013" till "May 2014" has 13 month.
+- The approval is for a maximum of one year. Be careful: a
+  duration from "May, 2013" till "May 2014" has 13 month.
 - a selected science, according to the DFG:
   <http://www.dfg.de/dfg_profil/gremien/fachkollegien/faecher/index.jsp>
 - a sponsorship
...
@@ -2,12 +2,12 @@
 !!! warning
-**This page is deprecated! The SGI Atlix is a former system!**
+**This page is deprecated! The SGI Altix is a former system!**
 ## System
 The SGI Altix 4700 is a shared memory system with dual core Intel Itanium 2 CPUs (Montecito)
-operated by the Linux operating system SuSE SLES 10 with a 2.6 kernel. Currently, the following
+operated by the Linux operating system SUSE SLES 10 with a 2.6 kernel. Currently, the following
 Altix partitions are installed at ZIH:
 |Name|Total Cores|Compute Cores|Memory per Core|
@@ -22,23 +22,23 @@ The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF
 batch system running on `mars.hrsk.tu-dresden.de`. The actual placement of a submitted job may
 depend on factors like memory size, number of processors, time limit.
-### Filesystems
+### File Systems
-All partitions share the same CXFS filesystems `/work` and `/fastfs`.
+All partitions share the same CXFS file systems `/work` and `/fastfs`.
-### ccNuma Architecture
+### ccNUMA Architecture
-The SGI Altix has a ccNUMA architecture, which stands for Cache Coherent Non-Uniform Memory Access.
+The SGI Altix has a ccNUMA architecture, which stands for *Cache Coherent Non-Uniform Memory Access*.
 It can be considered as a SM-MIMD (*shared memory - multiple instruction multiple data*) machine.
-The SGI ccNuma system has the following properties:
+The SGI ccNUMA system has the following properties:
 - Memory is physically distributed but logically shared
 - Memory is kept coherent automatically by hardware.
 - Coherent memory: memory is always valid (caches hold copies)
-- Granularity is L3 cacheline (128 B)
+- Granularity is L3 cache line (128 B)
-- Bandwidth of NumaLink4 is 6.4 GB/s
+- Bandwidth of NUMAlink4 is 6.4 GB/s
-The ccNuma is a compromise between a distributed memory system and a flat symmetric multi processing
+The ccNUMA is a compromise between a distributed memory system and a flat symmetric multi processing
 machine (SMP). Although the memory is shared, the access properties are not the same.
 ### Compute Module
@@ -69,7 +69,7 @@ Remote memory access via SHUBs and NUMAlink
 ### CPU
 The current SGI Altix is based on the dual core Intel Itanium 2
-processor (codename "Montecito"). One core has the following basic
+processor (code name "Montecito"). One core has the following basic
 properties:
 | | |
...
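The ccNUMA section above describes memory that is shared but not uniform in access cost: local accesses are cheap, remote accesses cross the NUMAlink interconnect. A minimal sketch of how placement can be inspected and controlled on such a system, using the generic Linux tool `numactl` (an assumption for illustration only; it is not mentioned in the page, and SGI also shipped its own placement tools for the Altix):

```bash
# Inspect the NUMA topology: nodes, CPUs per node, free memory per node.
numactl --hardware

# Pin a process and its memory allocations to NUMA node 0, so accesses
# stay local instead of crossing the interconnect. ./my_app is a placeholder.
numactl --cpunodebind=0 --membind=0 ./my_app
```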
@@ -7,14 +7,14 @@
 ## System
 The PC farm `Atlas` is a heterogeneous, general purpose cluster based on multicore chips AMD Opteron
-6274 ("Bulldozer"). The nodes are operated by the Linux operating system SuSE SLES 11 with a 2.6
+6274 ("Bulldozer"). The nodes are operated by the Linux operating system SUSE SLES 11 with a 2.6
 kernel. Currently, the following hardware is installed:
 | Component | Count |
 |-----------|--------|
 | CPUs |AMD Opteron 6274 |
 | number of cores | 5120 |
-|th. peak performance | 45 TFlops |
+|th. peak performance | 45 TFLOPS |
 |compute nodes | 4-way nodes *Saxonid* with 64 cores |
 |nodes with 64 GB RAM | 48 |
 |nodes with 128 GB RAM | 12 |
@@ -23,7 +23,7 @@ kernel. Currently, the following hardware is installed:
 Mars and Deimos users: Please read the [migration hints](migrate_to_atlas.md).
 All nodes share the `/home` and `/fastfs` file system with our other HPC systems. Each
-node has 180 GB local disk space for scratch mounted on `/tmp` . The jobs for the compute nodes are
+node has 180 GB local disk space for scratch mounted on `/tmp`. The jobs for the compute nodes are
 scheduled by the [Platform LSF](platform_lsf.md) batch system from the login nodes
 `atlas.hrsk.tu-dresden.de` .
@@ -44,7 +44,7 @@ below the mount point `/hpc_work`.
 | L2 cache | 2 MB per module |
 | L3 cache | 12 MB total, 6 MB shared between 4 modules = 8 cores |
 | FP units | 1 per module (supports fused multiply-add) |
-| th. peak performance | 8.8 GFlops per core (w/o turbo) |
+| th. peak performance | 8.8 GFLOPS per core (w/o turbo) |
 The CPU belongs to the x86_64 family. Since it is fully capable of
 running x86-code, one should compare the performances of the 32 and 64
@@ -86,7 +86,7 @@ user's job. Normally a job can be submitted with these data:
 #### LSF
-The batch sytem on Atlas is LSF. For general information on LSF, please follow
+The batch system on Atlas is LSF. For general information on LSF, please follow
 [this link](platform_lsf.md).
 #### Submission of Parallel Jobs
...
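The Atlas page above hands jobs to the Platform LSF batch system via `bsub`. A hedged sketch of a typical submission (the flags are standard LSF; the slot count, time limit, and program name `./my_app` are placeholders, not taken from the page):

```bash
# Request 8 slots for 1 hour of wall time; %J expands to the LSF job ID,
# so each job writes its output to a distinct file.
bsub -n 8 -W 01:00 -o myjob.%J.out mpirun ./my_app
```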
@@ -5,7 +5,7 @@
 **This page is deprecated! Trition is a former system!**
 Trition is a cluster based on quadcore Intel Xeon CPUs. The nodes are operated
-by the Linux operating system SuSE SLES 11. Currently, the following
+by the Linux operating system SUSE SLES 11. Currently, the following
 hardware is installed:
 | Component | Count |
...
@@ -19,9 +19,9 @@ the Linux operating system SLES 11 SP 3 with a kernel version 3.x.
 From our experience, most parallel applications benefit from using the additional hardware
 hyperthreads.
-### Filesystems
+### File Systems
-Venus uses the same HOME file system as all our other HPC installations.
+Venus uses the same `home` file system as all our other HPC installations.
 For computations, please use `/scratch`.
 ## Usage
@@ -77,7 +77,7 @@ nodes with dedicated resources for the user's job. Normally a job can be submitt
 - files for redirection of output and error messages,
 - executable and command line parameters.
-The batch sytem on Venus is Slurm. For general information on Slurm, please follow
+The batch system on Venus is Slurm. For general information on Slurm, please follow
 [this link](../jobs_and_resources/slurm.md).
 #### Submission of Parallel Jobs
...
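Venus, by contrast, uses Slurm: a job is described in a batch script and submitted with `sbatch`. A minimal sketch with standard Slurm directives (all resource values and the program name are placeholders):

```bash
#!/bin/bash
#SBATCH --ntasks=8             # number of MPI ranks
#SBATCH --time=01:00:00        # wall-clock limit
#SBATCH --output=myjob-%j.out  # %j expands to the Slurm job ID
srun ./my_app                  # launch the ranks under Slurm's control
```

Submitted with `sbatch myjob.sh`; Slurm queues the script and runs it once the requested resources are free.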
@@ -8,7 +8,7 @@ basedir=`dirname "$basedir"`
 wordlistfile=$basedir/wordlist.aspell
 function getNumberOfAspellOutputLines(){
-  cat - | aspell -p "$wordlistfile" --ignore 2 -l en_US list | sort -u | wc -l
+  cat - | aspell -p "$wordlistfile" --ignore 2 -l en_US list --mode=markdown | sort -u | wc -l
 }
 branch="preview"
@@ -50,3 +50,5 @@ done <<< "$files"
 if [ "$any_fails" == true ]; then
   exit 1
 fi
+echo "hier"
@@ -4,7 +4,7 @@ scriptpath=${BASH_SOURCE[0]}
 basedir=`dirname "$scriptpath"`
 basedir=`dirname "$basedir"`
 wordlistfile=$basedir/wordlist.aspell
-acmd="aspell -p $wordlistfile --ignore 2 -l en_US list"
+acmd="aspell -p $wordlistfile --ignore 2 -l en_US list --mode=markdown"
 function spell_check () {
 file_to_check=$1
...
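Both scripts above gain `--mode=markdown`, which tells aspell to parse its input as Markdown so markup such as code spans is not reported as misspelled. A quick way to observe the effect, assuming an aspell version that ships the markdown filter (0.60.8 or newer):

```bash
# Without the filter, the token inside backticks is flagged as a misspelling:
echo 'The `fastfs` file system' | aspell -l en_US list

# With the markdown filter, the code span should be skipped:
echo 'The `fastfs` file system' | aspell -l en_US list --mode=markdown
```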
@@ -40,3 +40,55 @@ TensorFlow
 Theano
 Vampir
 ZIH
+DFG
+NUMAlink
+ccNUMA
+NUMA
+Montecito
+Opteron
+Saxonid
+MIMD
+LSF
+lsf
+Itanium
+mpif
+mpicc
+mpiCC
+mpicxx
+mpirun
+mpifort
+ifort
+icc
+icpc
+gfortran
+Altix
+Neptun
+Trition
+SUSE
+SLES
+Fortran
+SMP
+MEGWARE
+SGI
+CXFS
+NFS
+CPUs
+GFLOPS
+TFLOPS
+png
+jpg
+pdf
+bsub
+OpenMPI
+openmpi
+multicore
+fastfs
+tmp
+MKL
+TBB
+LoadLeveler
+Gnuplot
+gnuplot
+RSA
+SHA
+pipelining