Commit 7ab8c911 authored by Lars Jitschin

Fixing aspell spelling mistakes

together with Martin, thank you for helping me with this.
parent caf468c8
see all topics and their available module files. If you just wish to see the installed versions of a
certain module, you can use `module avail softwarename` and it will display the available versions of
`softwarename` only.
## Module Environments
On the ZIH systems, there exist different module environments, each containing a set of software modules.
They are activated via the meta module `modenv` which has different versions, one of which is loaded
by default. You can switch between them by simply loading the desired modenv-version, e.g.:
```console
marie@compute$ module load modenv/ml
```
### modenv/scs5 (default)
* SCS5 software
* usually optimized for Intel processors (Partitions: `haswell`, `broadwell`, `gpu2`, `julia`)
### modenv/ml
The command `module spider <modname>` allows searching for specific software in all modenv
environments. It will also display information on how to load a found module when giving a precise
module (with version) as the parameter.
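For example, to look up one specific package across all module environments (here `CP2K`, which is only a stand-in for the software you are interested in), you would run:
```console
marie@compute$ module spider CP2K
```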
## Per-architecture Builds
Since we have a heterogeneous cluster, we do individual builds of some of the software for each
architecture present. This ensures that, no matter what partition the software runs on, a build
optimized for the host architecture is used automatically. For that purpose, we have created
symbolic links at the system path `/sw/installed` on the compute nodes.
However, not every module will be available for each node type or partition. Especially when
introducing new hardware to the cluster, we do not want to rebuild all of the older module versions
and in some cases cannot fall back to a more generic build either. That's why we provide the
command `ml_arch_avail`, which shows for which architectures a given module is available.
### Example Invocation of ml_arch_avail
```console
marie@compute$ ml_arch_avail CP2K
```
Example output:
In order to use your own module files please use the command `module use <path_to_module_files>`.
It will add the path to the list of module directories
that are searched by lmod (i.e. the `module` command). You may use a directory `privatemodules`
within your home or project directory to set up your own module files.
Please see the [Environment Modules open source project's web page](http://modules.sourceforge.net/)
for further information on writing module files.
### 1. Create Directories
```console
marie@compute$ cd $HOME
marie@compute$ mkdir --verbose --parents privatemodules/testsoftware
marie@compute$ cd privatemodules/testsoftware
```
### 2. Notify lmod
```console
marie@compute$ module use $HOME/privatemodules
```
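To verify that the directory has been prepended to the module search path, you can inspect the `MODULEPATH` environment variable (the remaining entries differ between systems and are shortened here):
```console
marie@compute$ echo $MODULEPATH
/home/marie/privatemodules:[...]
```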
Create a file with the name `1.0` for a test software in the `testsoftware` directory you created
earlier (using your favorite editor) and paste the following text into it:
```
#%Module######################################################################
```
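A complete minimal Tcl module file along these lines could look like the sketch below; apart from the `#%Module` header and the messages shown in step 5, everything (installation prefix, paths, help texts) is made up and has to be adapted to your own installation:
```
#%Module######################################################################
##
## testsoftware modulefile (illustrative sketch)
##
proc ModulesHelp { } {
    puts stderr "Provides the dummy package testsoftware in version 1.0"
}
module-whatis "testsoftware: example module for testing private module files"
set version 1.0
# made-up installation prefix of the test software
set prefix $::env(HOME)/testsoftware/$version
prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib
if { [ module-info mode load ] } {
    puts stderr "Load testsoftware version $version"
    puts stderr "Module testsoftware/$version loaded."
}
```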
Check the availability of the module with `ml av`; the output should now list the new module
`testsoftware/1.0`.
### 5. Load Module
Load the test module with `module load testsoftware`, the output should look like this:
```console
Load testsoftware version 1.0
Module testsoftware/1.0 loaded.
```
The module files have to be stored in your global projects directory
`/projects/p_projectname/privatemodules`, following the same structure as described
above. To use a project-wide module file you have to add the path to the module file to the module
environment with the command
```console
marie@compute$ module use /projects/p_projectname/privatemodules
```
basis. This is the reason why we urge users to store (large) temporary data (like checkpoint files)
on the /scratch filesystem or at local scratch disks.
**Please note**: We have set `ulimit -c 0` as a default to prevent users from filling the disk with
the dump of crashed programs. `bash` users can use `ulimit -Sc unlimited` to enable debugging by
analyzing the core file.
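As a minimal illustration, the soft limit can be raised and checked again within a running `bash` session like this:
```console
marie@compute$ ulimit -Sc unlimited
marie@compute$ ulimit -c
unlimited
```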
personal_ws-1.1 en 203
APIs
AVX
Abaqus
Altix
Amber
Amdahl's
analytics
anonymized
BeeGFS
benchmarking
BLAS
broadwell
bsub
bullx
CCM
ccNUMA
centauri
CentOS
cgroups
checkpointing
Chemnitz
citable
CLI
conda
CPU
CPUID
CPUs
css
CSV
CUDA
cuDNN
CXFS
dask
dataframes
DataFrames
datamover
DataParallel
DDP
DDR
DFG
DistributedDataParallel
DMTCP
DNS
DockerHub
Dockerfile
Dockerfiles
dockerized
EasyBuild
ecryptfs
engl
english
env
EPYC
Espresso
ESSL
fastfs
FFT
FFTW
filesystem
filesystems
Flink
FMA
foreach
Fortran
Galilei
Gauss
Gaussian
GBit
GDDR
GFLOPS
gfortran
GPU
GPUs
GROMACS
GiB
gifferent
GitHub
GitLab
GitLab's
glibc
gnuplot
hadoop
haswell
HBM
HDF
HDFS
HDFView
Horovod
hostname
Hostnames
HPC
HPE
HPL
html
hyperparameter
hyperparameters
hyperthreading
icc
icpc
ifort
IPs
ISA
ImageNet
Infiniband
inode
Itanium
jobqueue
jpg
jss
Jupyter
JupyterHub
JupyterLab
Keras
KNL
LAMMPS
LAPACK
lapply
LINPACK
linter
Linter
LoadLeveler
localhost
lsf
lustre
markdownlint
Mathematica
MathKernel
MathWorks
matlab
MEGWARE
MiB
MIMD
Miniconda
mkdocs
MKL
MNIST
Montecito
mountpoint
mpi
mpicc
mpiCC
mpicxx
mpif
mpifort
mpirun
multicore
multithreaded
Multithreading
MultiThreading
NAMD
natively
nbsp
NCCL
Neptun
NFS
NGC
NRINGS
NUMA
NUMAlink
NumPy
Nutzungsbedingungen
Nvidia
NVLINK
NVMe
NWChem
OME
OmniOpt
OPARI
OpenACC
OpenBLAS
OpenCL
OpenGL
OpenMP
openmpi
OpenMPI
OpenSSH
Opteron
PAPI
PESSL
PGI
PMI
PSOCK
Pandarallel
Perf
PiB
Pika
PowerAI
Pre
Preload
Pthreads
Quantum
README
RHEL
RSA
RSS
RStudio
Rmpi
Rsync
Runtime
SFTP
SGEMM
SGI
SHA
SHMEM
SLES
SMP
SMT
SSHFS
STAR
SUSE
SXM
Sandybridge
Saxonid
ScaDS
ScaLAPACK
Scalasca
SciPy
Scikit
Slurm
SubMathKernel
Superdome
TBB
TCP
TFLOPS
TensorBoard
TensorFlow
Theano
ToDo
Trition
VASP
VMSize
VMs
VPN
Vampir
VampirTrace
VampirTrace's
VirtualGL
WebVNC
WinSCP
Workdir
XArray
XGBoost
XLC
XLF
Xeon
Xming
ZIH
ZIH's
hiera
lmod
modenv
modenvs
modulefile
overfitting
pandarallel
parallelization
parallelize
parfor
pdf
pipelining
png
ppc
pre
preloaded
preloading
pymdownx
queue
randint
reachability
reproducibility
requeueing
rome
romeo
runnable
runtime
salloc
sbatch
scalable
scancel
scontrol
scp
scs
squeue
srun
ssd
stderr
stdout
subdirectories
subdirectory
tmp
todo
toolchain
toolchains
tracefile
tracefiles
transferability
unencrypted
uplink
userspace
vectorization
venv
virtualenv
workspace
workspaces
yaml
zih