diff --git a/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md index b9c0d1cd8f894c6944b52daa07fa09c772c73dc0..7395aad287f5c197ae8ba639491c493e87f2ffe9 100644 --- a/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md +++ b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md @@ -11,7 +11,7 @@ if you want to know whether your browser is supported by DCV. **Check out our new documentation about** [Virtual Desktops](../software/virtual_desktops.md). -To start a JupyterHub session on the dcv partition (taurusi210\[4-8\]) with one GPU, six CPU cores +To start a JupyterHub session on the partition `dcv` (`taurusi210[4-8]`) with one GPU, six CPU cores and 2583 MB memory per core, click on: [https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/~(partition~'dcv~cpuspertask~'6~gres~'gpu*3a1~mempercpu~'2583~environment~'production)](https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/~(partition~'dcv~cpuspertask~'6~gres~'gpu*3a1~mempercpu~'2583~environment~'production)) Optionally, you can modify many different Slurm parameters. For this diff --git a/doc.zih.tu-dresden.de/docs/access/graphical_applications_with_webvnc.md b/doc.zih.tu-dresden.de/docs/access/graphical_applications_with_webvnc.md index 6837ace6473f9532e608778ec96049394b4c4494..c652738dc859beecf3dc9669fdde684dc49d04f3 100644 --- a/doc.zih.tu-dresden.de/docs/access/graphical_applications_with_webvnc.md +++ b/doc.zih.tu-dresden.de/docs/access/graphical_applications_with_webvnc.md @@ -38,7 +38,7 @@ marie@login$ srun --pty --partition=interactive --mem-per-cpu=2500 --cpus-per-ta [...] ``` -Of course, you can adjust the batch job parameters to your liking. Note that the default timelimit +Of course, you can adjust the batch job parameters to your liking. Note that the default time limit in partition `interactive` is only 30 minutes, so you should specify a longer one with `--time` (or `-t`). 
The script will automatically generate a self-signed SSL certificate and place it in your home diff --git a/doc.zih.tu-dresden.de/docs/access/ssh_login.md b/doc.zih.tu-dresden.de/docs/access/ssh_login.md index 69dc79576910d37b001aaaff4cfc43c8ab583b18..60e24a0f3fdcc479a34f477864944025193b0f57 100644 --- a/doc.zih.tu-dresden.de/docs/access/ssh_login.md +++ b/doc.zih.tu-dresden.de/docs/access/ssh_login.md @@ -9,7 +9,7 @@ connection to enter the campus network. While active, it allows the user to conn HPC login nodes. For more information on our VPN and how to set it up, please visit the corresponding -[ZIH service catalogue page](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn). +[ZIH service catalog page](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn). ## Connecting from Linux diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md index 6aee19dd87cf1f9bcf589c2950ca11e5b99b1b65..bcfc86b6b35f01bc0a5a1eebffdf65ee6319d171 100644 --- a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md +++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md @@ -1,12 +1,12 @@ # Intermediate Archive With the "Intermediate Archive", ZIH is closing the gap between a normal disk-based filesystem and -[Longterm Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical +[Long-term Archive](preservation_research_data.md). The Intermediate Archive is a hierarchical filesystem with disks for buffering and tapes for storing research data. Its intended use is the storage of research data for a maximal duration of 3 years. For storing the data after exceeding this time, the user has to supply essential metadata and migrate the files to -the [Longterm Archive](preservation_research_data.md). 
Until then, she/he has to keep track of her/his
+the [Long-term Archive](preservation_research_data.md). Until then, users have to keep track of their
 files.
 
 Some more information:
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
index 5c035e56d8a3fa647f9d847a08ed5be9ef903f93..79ae1cf00b45f8bf46bc054e1502fc9404417b75 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/preservation_research_data.md
@@ -1,4 +1,4 @@
-# Longterm Preservation for Research Data
+# Long-term Preservation for Research Data
 
 ## Why should research data be preserved?
 
@@ -55,7 +55,8 @@ Below are some examples:
   - ISBN
 - possible meta-data for an electronically saved image would be:
   - resolution of the image
-  - information about the colour depth of the picture
+  - information about the color depth of the picture
   - file format (jpg or tiff or ...)
-  - file size how was this image created (digital camera, scanner, ...)
+  - file size
+  - how was this image created (digital camera, scanner, ...)
   - description of what the image shows
@@ -79,6 +79,6 @@ information about managing research data.
 
 ## I want to store my research data at ZIH. How can I do that?
 
-Longterm preservation of research data is under construction at ZIH and in a testing phase.
+Long-term preservation of research data is under construction at ZIH and in a testing phase.
 Nevertheless you can already use the archiving service. If you would like to become a test user,
 please write an E-Mail to [Dr. Klaus Köhler](mailto:klaus.koehler@tu-dresden.de).
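The metadata requirement above can be illustrated with a small sketch. All file names and
metadata fields here are made up for illustration and are not a prescribed ZIH format:

```Bash
# Bundle data together with a plain-text metadata file before archiving.
# File names and metadata fields are illustrative only.
mkdir -p project_archive
echo "x,y" > project_archive/results.csv

cat > project_archive/metadata.txt <<'EOF'
title: Example dataset
creator: Marie Mustermann
created: 2021-07-01
format: csv
description: Illustrative data bundle prepared for archiving
EOF

# Pack everything into one archive and list its contents to verify.
tar -czf project_archive.tar.gz project_archive
tar -tzf project_archive.tar.gz
```

Keeping the metadata next to the data in the same archive means the description cannot get
separated from the files during later migration.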
diff --git a/doc.zih.tu-dresden.de/docs/software/cfd.md b/doc.zih.tu-dresden.de/docs/software/cfd.md index 186d7b3a5a97a2daf06d8618c7c91dc91d7ab971..62ed65116e51ae8bbb593664f4bc48a3373d3a41 100644 --- a/doc.zih.tu-dresden.de/docs/software/cfd.md +++ b/doc.zih.tu-dresden.de/docs/software/cfd.md @@ -16,7 +16,7 @@ The OpenFOAM (Open Field Operation and Manipulation) CFD Toolbox can simulate an fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics, electromagnetics and the pricing of financial options. OpenFOAM is developed primarily by [OpenCFD Ltd](https://www.openfoam.com) and is freely available and open-source, -licensed under the GNU General Public Licence. +licensed under the GNU General Public License. The command `module spider OpenFOAM` provides the list of installed OpenFOAM versions. In order to use OpenFOAM, it is mandatory to set the environment by sourcing the `bashrc` (for users running diff --git a/doc.zih.tu-dresden.de/docs/software/containers.md b/doc.zih.tu-dresden.de/docs/software/containers.md index bbb3e80772f3fcc71480e4555fb146f602806804..d15535933ef7f2b9e0330d07e35168f10fc22ded 100644 --- a/doc.zih.tu-dresden.de/docs/software/containers.md +++ b/doc.zih.tu-dresden.de/docs/software/containers.md @@ -12,10 +12,10 @@ Singularity. Information about the use of Singularity on ZIH systems can be foun In some cases using Singularity requires a Linux machine with root privileges (e.g. using the partition `ml`), the same architecture and a compatible kernel. For many reasons, users on ZIH systems cannot be granted root permissions. A solution is a Virtual Machine (VM) on the partition -`ml` which allows users to gain root permissions in an isolated environment. There are two main +`ml` which allows users to gain root permissions in an isolated environment. There are two main options on how to work with Virtual Machines on ZIH systems: -1. 
[VM tools](virtual_machines_tools.md): Automative algorithms for using virtual machines; +1. [VM tools](virtual_machines_tools.md): Automated algorithms for using virtual machines; 1. [Manual method](virtual_machines.md): It requires more operations but gives you more flexibility and reliability. @@ -35,7 +35,7 @@ execution. Follow the instructions for [locally installing Singularity](#local-i [container creation](#container-creation). Moreover, existing Docker container can easily be converted, see [Import a docker container](#importing-a-docker-container). -If you are already familar with Singularity, you might be more intressted in our [singularity +If you are already familiar with Singularity, you might be more interested in our [singularity recipes and hints](singularity_recipe_hints.md). ### Local Installation diff --git a/doc.zih.tu-dresden.de/docs/software/custom_easy_build_environment.md b/doc.zih.tu-dresden.de/docs/software/custom_easy_build_environment.md index 3a0bc91ab60320f00911fb6bfe8cb07eb23c5e85..231ce447b0fa8157ebb9b4a8ea6dd9bb1542fa7b 100644 --- a/doc.zih.tu-dresden.de/docs/software/custom_easy_build_environment.md +++ b/doc.zih.tu-dresden.de/docs/software/custom_easy_build_environment.md @@ -26,12 +26,12 @@ information about how to obtain and build the software: - Version - Toolchain (think: Compiler + some more) - Download URL -- Buildsystem (e.g. `configure && make` or `cmake && make`) +- Build system (e.g. `configure && make` or `cmake && make`) - Config parameters - Tests to ensure a successful build The build system part is implemented in so-called "EasyBlocks" and contains the common workflow. -Sometimes, those are specialized to encapsulate behaviour specific to multiple/all versions of the +Sometimes, those are specialized to encapsulate behavior specific to multiple/all versions of the software. Everything is written in Python, which gives authors a great deal of flexibility. 
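The ingredients listed above map directly onto the fields of an EasyConfig file. A minimal
sketch follows; the package name, version, URLs, and all values are invented for illustration,
and only the field names reflect EasyBuild's documented format:

```python
# Hypothetical EasyConfig sketch -- every value below is illustrative.
easyblock = 'ConfigureMake'          # EasyBlock implementing `configure && make`

name = 'ExampleTool'
version = '1.0.0'

homepage = 'https://example.org'
description = "Illustrative EasyConfig showing the fields listed above."

toolchain = {'name': 'GCC', 'version': '10.2.0'}   # compiler + some more

source_urls = ['https://example.org/downloads']    # download URL
sources = ['%(name)s-%(version)s.tar.gz']

configopts = '--enable-shared'                     # config parameters

# Tests to ensure a successful build: check that the binary exists.
sanity_check_paths = {
    'files': ['bin/exampletool'],
    'dirs': [],
}
```

The `easyblock` line selects the build-system workflow, while the remaining fields supply the
per-package information the text enumerates.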
## Set up a custom module environment and build your own modules
 
@@ -61,7 +61,7 @@ using the command `sbatch` instead of `srun`. For the sake of illustration, we u
 interactive job as an example. Depending on the partitions that you want the module to be usable on
 later, you need to select nodes with the same architecture. Thus, use nodes from partition ml
 for building, if you want to use the module on nodes of that partition. In this example, we assume
-that we want to use the module on nodes with x86 architecture und thus, Haswell nodes will be used.
+that we want to use the module on nodes with x86 architecture and thus, we use Haswell nodes.
 
 ```console
 marie@login$ srun --partition=haswell --nodes=1 --cpus-per-task=4 --time=08:00:00 --pty /bin/bash -l
diff --git a/doc.zih.tu-dresden.de/docs/software/debuggers.md b/doc.zih.tu-dresden.de/docs/software/debuggers.md
index d88ca5f068f0145e8acc46407feca93a14968522..0d4bda97f61fe6453d6027406ff88145c4204cfb 100644
--- a/doc.zih.tu-dresden.de/docs/software/debuggers.md
+++ b/doc.zih.tu-dresden.de/docs/software/debuggers.md
@@ -73,8 +73,8 @@ modified by DDT available, which has better support for Fortran 90 (e.g. derive
 
 - Intuitive graphical user interface and great support for parallel applications
-- We have 1024 licences, so many user can use this tool for parallel debugging
-- Don't expect that debugging an MPI program with 100ths of process will always work without
+- We have 1024 licenses, so many users can use this tool for parallel debugging
+- Don't expect that debugging an MPI program with hundreds of processes will always work without
   problems
 - The more processes and nodes involved, the higher is the probability for timeouts or other
   problems
@@ -159,7 +159,7 @@ marie@login$ srun -n 1 valgrind ./myprog
 
 - Not recommended for MPI parallel programs, since usually the MPI library will throw a lot of
   errors.
But you may use Valgrind the following way such that every rank - writes its own Valgrind logfile: + writes its own Valgrind log file: ```console marie@login$ module load Valgrind diff --git a/doc.zih.tu-dresden.de/docs/software/fem_software.md b/doc.zih.tu-dresden.de/docs/software/fem_software.md index 3be2314889bfe45f9554fb499c4d757337bef33d..160aeded633f50e9abfdfae6d74a7627257ca565 100644 --- a/doc.zih.tu-dresden.de/docs/software/fem_software.md +++ b/doc.zih.tu-dresden.de/docs/software/fem_software.md @@ -176,7 +176,7 @@ under: `<MaxNumberProcessors>2</MaxNumberProcessors>` -that you can simply change to something like 16 oder 24. For now, you should stay within single-node +that you can simply change to something like 16 or 24. For now, you should stay within single-node boundaries, because multi-node calculations require additional parameters. The number you choose should match your used `--cpus-per-task` parameter in your job file. diff --git a/doc.zih.tu-dresden.de/docs/software/gpu_programming.md b/doc.zih.tu-dresden.de/docs/software/gpu_programming.md index 9847cc9dbfec4137eada70dbc23285c7825effc7..070176efcb2ab0f463da30675841ade0e0a585a3 100644 --- a/doc.zih.tu-dresden.de/docs/software/gpu_programming.md +++ b/doc.zih.tu-dresden.de/docs/software/gpu_programming.md @@ -2,8 +2,9 @@ ## Directive Based GPU Programming -Directives are special compiler commands in your C/C++ or Fortran source code. The tell the compiler -how to parallelize and offload work to a GPU. This section explains how to use this technique. +Directives are special compiler commands in your C/C++ or Fortran source code. They tell the +compiler how to parallelize and offload work to a GPU. This section explains how to use this +technique. ### OpenACC @@ -19,10 +20,11 @@ newer for full support for the NVIDIA Tesla K20x GPUs at ZIH. 
#### Using OpenACC with PGI compilers
 
-* For compilaton please add the compiler flag `-acc`, to enable OpenACC interpreting by the compiler;
-* `-Minfo` will tell you what the compiler is actually doing to your code;
+* For compilation, please add the compiler flag `-acc` to enable OpenACC interpreting by the
+  compiler;
+* `-Minfo` tells you what the compiler is actually doing to your code;
-* If you only want to use the created binary at ZIH resources, please also add `-ta=nvidia:keple`;
+* If you only want to use the created binary at ZIH resources, please also add `-ta=nvidia:kepler`;
-* OpenACC Turorial: intro1.pdf, intro2.pdf.
+* OpenACC Tutorial: intro1.pdf, intro2.pdf.
 
 ### HMPP
 
@@ -38,4 +40,4 @@ use the following slides as an introduction:
 
 * Introduction to CUDA;
 * Advanced Tuning for NVIDIA Kepler GPUs.
 
-In order to compiler an application with CUDA use the `nvcc` compiler command.
+In order to compile an application with CUDA, use the `nvcc` compiler command.
diff --git a/doc.zih.tu-dresden.de/docs/software/modules.md b/doc.zih.tu-dresden.de/docs/software/modules.md
index 58f200d25f01d52385626776b53c93f38e999397..fb9107b5d362ca348987e848a663de7586fb6a72 100644
--- a/doc.zih.tu-dresden.de/docs/software/modules.md
+++ b/doc.zih.tu-dresden.de/docs/software/modules.md
@@ -206,7 +206,8 @@ Note that this will not work for meta-modules that do not have an installation d
 
 ## Advanced Usage
 
-For writing your own Modulefiles please have a look at the [Guide for writing project and private Modulefiles](private_modules.md).
+For writing your own module files please have a look at the
+[Guide for writing project and private module files](private_modules.md).
## Troubleshooting diff --git a/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md b/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md index 8d1d7e17a02c3dd2ab572216899cd37f7a9aee3a..b083e80cf9962a01a6580f8b5393912ebd2c3f40 100644 --- a/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md +++ b/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md @@ -40,7 +40,7 @@ Besides loading a MUST module, no further changes are needed during compilation ### Running your Application with MUST -In order to run your application with MUST you need to replace the srun command with mustrun: +In order to run your application with MUST you need to replace the `srun` command with `mustrun`: ```console marie@login$ mustrun -np <number of MPI processes> ./<your binary> @@ -65,14 +65,14 @@ marie@login$ mustrun -np 4 ./fancy-program [MUST] Execution finished, inspect "/home/marie/MUST_Output.html"! ``` -Besides replacing the srun command you need to be aware that **MUST always allocates an extra +Besides replacing the `srun` command you need to be aware that **MUST always allocates an extra process**, i.e. if you issue a `mustrun -np 4 ./a.out` then MUST will start 5 processes instead. This is usually not critical, however in batch jobs **make sure to allocate an extra CPU for this task**. Finally, MUST assumes that your application may crash at any time. To still gather correctness results under this assumption is extremely expensive in terms of performance overheads. Thus, if -your application does not crash, you should add an "--must:nocrash" to the mustrun command to make +your application does not crash, you should add `--must:nocrash` to the `mustrun` command to make MUST aware of this knowledge. Overhead is drastically reduced with this switch. 
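The extra-process rule above matters mostly in batch scripts. A hedged sketch of a job file for
a 4-process run follows; the module name and binary are placeholders, and the key point is
requesting 4 + 1 tasks:

```Bash
#!/bin/bash
#SBATCH --ntasks=5          # 4 MPI processes + 1 extra process for MUST
#SBATCH --time=00:30:00

# Placeholder module and binary names; adapt them to your environment.
module load MUST

# --must:nocrash reduces overhead when the application is known not to crash.
mustrun -np 4 --must:nocrash ./fancy-program
```

Without the extra task, the fifth process spawned by MUST would oversubscribe the allocation.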
### Result Files diff --git a/doc.zih.tu-dresden.de/docs/software/pika.md b/doc.zih.tu-dresden.de/docs/software/pika.md index 36aab905dbf33602c64333e2a695070ffc0ad9db..d9616e900e258909267fc9870db6ddfa24fee0de 100644 --- a/doc.zih.tu-dresden.de/docs/software/pika.md +++ b/doc.zih.tu-dresden.de/docs/software/pika.md @@ -90,7 +90,7 @@ reason for further investigation, since not all HUs are equally utilized. To identify imbalances between HUs over time, the visualization modes *Best* and *Lowest* are a first indicator how much the HUs differ in terms of resource usage. The timelines *Best* and -*Lowest* show the recoded performance data of the best/lowest average HU over time. +*Lowest* show the recorded performance data of the best/lowest average HU over time. ## Footprint Visualization @@ -111,7 +111,7 @@ investigating their correlation. ## Hints If users wish to perform their own measurement of performance counters using performance tools other -than PIKA, it is recommended to disable PIKA monitoring. This can be done using the following slurm +than PIKA, it is recommended to disable PIKA monitoring. This can be done using the following Slurm flags in the job script: ```Bash diff --git a/doc.zih.tu-dresden.de/docs/software/vampir.md b/doc.zih.tu-dresden.de/docs/software/vampir.md index 24a22c35acda9afcfa6e1e56bdd553da716ec245..9df5eb62a0d461da97fcb2ce28f461d9042e93a2 100644 --- a/doc.zih.tu-dresden.de/docs/software/vampir.md +++ b/doc.zih.tu-dresden.de/docs/software/vampir.md @@ -146,7 +146,7 @@ marie@local$ ssh -L 30000:taurusi1253:30055 taurus.hrsk.tu-dresden.de ``` Now, the port 30000 on your desktop is connected to the VampirServer port 30055 at the compute node -taurusi1253 of the ZIH system. Finally, start your local Vampir client and establish a remote +`taurusi1253` of the ZIH system. Finally, start your local Vampir client and establish a remote connection to `localhost`, port 30000 as described in the manual. 
```console diff --git a/doc.zih.tu-dresden.de/docs/software/visualization.md b/doc.zih.tu-dresden.de/docs/software/visualization.md index 328acc490f5fa5c65e687d50bf9f43ceae44c541..f1e551c968cb4478069c98e691eef11bce7ccb01 100644 --- a/doc.zih.tu-dresden.de/docs/software/visualization.md +++ b/doc.zih.tu-dresden.de/docs/software/visualization.md @@ -49,10 +49,10 @@ marie@login$ mpiexec -bind-to -help` or from [mpich wiki](https://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding%7Cwiki.mpich.org). -In the following, we provide two examples on how to use `pvbatch` from within a jobfile and an +In the following, we provide two examples on how to use `pvbatch` from within a job file and an interactive allocation. -??? example "Example jobfile" +??? example "Example job file" ```Bash #!/bin/bash @@ -97,7 +97,7 @@ cards (GPUs) specified by the device index. For that, make sure to use the modul *-egl*, e.g., `ParaView/5.9.0-RC1-egl-mpi-Python-3.8`, and pass the option `--egl-device-index=$CUDA_VISIBLE_DEVICES`. -??? example "Example jobfile" +??? example "Example job file" ```Bash #!/bin/bash @@ -171,7 +171,7 @@ are outputed.* This contains the node name which your job and server runs on. However, since the node names of the cluster are not present in the public domain name system (only cluster-internally), you cannot just use this line as-is for connection with your client. 
**You first have to resolve** the name to an IP -address on ZIH systems: Suffix the nodename with `-mn` to get the management network (ethernet) +address on ZIH systems: Suffix the node name with `-mn` to get the management network (ethernet) address, and pass it to a lookup-tool like `host` in another SSH session: ```console diff --git a/doc.zih.tu-dresden.de/util/check-bash-syntax.sh b/doc.zih.tu-dresden.de/util/check-bash-syntax.sh index 9f31effee3ebc3380af5ca892047aca6a9357139..ac0fcd4621741d7f094e29aaf772f283b64c284d 100755 --- a/doc.zih.tu-dresden.de/util/check-bash-syntax.sh +++ b/doc.zih.tu-dresden.de/util/check-bash-syntax.sh @@ -47,12 +47,12 @@ branch="origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME:-preview}" if [ $all_files = true ]; then echo "Search in all bash files." - files=`git ls-tree --full-tree -r --name-only HEAD $basedir/docs/ | grep .sh || true` + files=`git ls-tree --full-tree -r --name-only HEAD $basedir/docs/ | grep '\.sh$' || true` elif [[ ! -z $file ]]; then files=$file else echo "Search in git-changed files." - files=`git diff --name-only "$(git merge-base HEAD "$branch")" | grep .sh || true` + files=`git diff --name-only "$(git merge-base HEAD "$branch")" | grep '\.sh$' || true` fi diff --git a/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh b/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh index 38e9015599922fdcec93fecebb9fd638cfa576d8..f3cfa673ce063a674cb2f850d7f7da252a6ab093 100755 --- a/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh +++ b/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh @@ -45,7 +45,7 @@ doc.zih.tu-dresden.de/docs/accessibility.md i [[:space:]]$ When referencing partitions, put keyword \"partition\" in front of partition name, e. g. \"partition ml\", not \"ml partition\". 
doc.zih.tu-dresden.de/docs/contrib/content_rules.md -i \(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\)-\?\(interactive\)\?[^a-z]*partition +i \(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\|dcv\)-\?\(interactive\)\?[^a-z]*partition Give hints in the link text. Words such as \"here\" or \"this link\" are meaningless. doc.zih.tu-dresden.de/docs/contrib/content_rules.md i \[\s\?\(documentation\|here\|more info\|this \(link\|page\|subsection\)\|slides\?\|manpage\)\s\?\] diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell index 73af7da3010a0570c99b148f180440c20f8277cd..8bcd6a7c24872843e665bc7fc1ed91241284c780 100644 --- a/doc.zih.tu-dresden.de/wordlist.aspell +++ b/doc.zih.tu-dresden.de/wordlist.aspell @@ -67,7 +67,9 @@ dotfile dotfiles downtime downtimes +EasyBlocks EasyBuild +EasyConfig ecryptfs engl english @@ -90,6 +92,7 @@ Galilei Gauss Gaussian GBit +GDB GDDR GFLOPS gfortran @@ -137,6 +140,7 @@ img Infiniband init inode +Instrumenter IOPS IPs ISA @@ -170,6 +174,7 @@ MathWorks matlab MEGWARE mem +Memcheck MiB Microarchitecture MIMD @@ -177,6 +182,7 @@ Miniconda mkdocs MKL MNIST +MobaXTerm modenv modenvs modulefile @@ -235,6 +241,7 @@ pandarallel PAPI parallelization parallelize +parallelized parfor pdf perf @@ -254,10 +261,13 @@ pre Preload preloaded preloading +prepend preprocessing PSOCK +Pthread Pthreads pty +PuTTY pymdownx PythonAnaconda pytorch @@ -310,6 +320,7 @@ Slurm SLURMCluster SMP SMT +spython squeue srun ssd @@ -346,7 +357,9 @@ undistinguishable unencrypted uplink userspace +Valgrind Vampir +VampirServer VampirTrace VampirTrace's VASP
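The stricter pattern introduced in `check-bash-syntax.sh` above can be sanity-checked in
isolation; the file names below are made up for the demonstration:

```Bash
# In a regex, an unescaped '.' matches any character and the match may occur
# anywhere in the name, so 'grep .sh' also catches markdown files.
printf '%s\n' docs/push.md docs/shell.md util/check.sh docs/run.sh.md > files.txt

grep .sh files.txt        # matches all four illustrative names

# Escaping the dot and anchoring at end of line keeps only real shell scripts.
grep '\.sh$' files.txt    # matches only util/check.sh
```

Anchoring with `$` also excludes names like `run.sh.md`, which contain `.sh` but are not
shell scripts.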