diff --git a/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md
index 7395aad287f5c197ae8ba639491c493e87f2ffe9..e3fe6e8f25e5a59c876454807410c05c2494f8d3 100644
--- a/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md
+++ b/doc.zih.tu-dresden.de/docs/access/desktop_cloud_visualization.md
@@ -29,7 +29,7 @@ Click on the `DCV` button. A new tab with the DCV client will be opened.
 - Check GPU support via:
 ```console hl_lines="4"
-marie@compute$ glxinfo
+marie@compute$ glxinfo | head
 name of display: :1
 display: :1
 screen: 0
 direct rendering: Yes
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
index 7043d46041314aa175bdfb401fb3129e529903c2..f4cdcd9de79a45aa10a32f4e5bdb2b4edcde5419 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm.md
@@ -79,7 +79,7 @@ There are three basic Slurm commands for job submission and execution:
 1. `salloc`: Obtain a Slurm job allocation (i.e., resources like CPUs, nodes and GPUs) for
    interactive use. Release the allocation when finished.
 
-Using `srun` directly on the shell will be blocking and launch an
+Executing a program with `srun` directly on the shell will be blocking and launch an
 [interactive job](#interactive-jobs). Apart from short test runs, it is recommended to submit your
 jobs to Slurm for later execution by using [batch jobs](#batch-jobs). For that, you can conveniently
 put the parameters in a [job file](#job-files), which you can submit using `sbatch
@@ -94,7 +94,7 @@ can find it via `squeue --me`. The job ID allows you to
 
 On ZIH systems, `srun` is used to run your parallel application. The use of `mpirun` is provenly
 broken on partitions `ml` and `alpha` for jobs requiring more than one node. Especially when
-using code from github projects, double-check it's configuration by looking for a line like
+using code from GitHub projects, double-check its configuration by looking for a line like
 'submit command mpirun -n $ranks ./app' and replace it with 'srun ./app'.
 
 Otherwise, this may lead to wrong resource distribution and thus job failure, or tremendous
@@ -196,7 +196,7 @@ marie@compute$ srun --overlap hostname
 taurusi6604.taurus.hrsk.tu-dresden.de
 ```
 
-!!! note "Using `module` commands"
+!!! note "Using `module` commands in interactive mode"
 
     The [module commands](../software/modules.md) are made available by sourcing the files
     `/etc/profile` and `~/.bashrc`. This is done automatically by passing the parameter `-l` to your
@@ -229,7 +229,7 @@ marie@login$ srun --ntasks=1 --pty --x11=first xeyes
     that probably means you still have an old host key for the target node in your
    `~.ssh/known_hosts` file (e.g. from pre-SCS5). This can be solved either by removing the entry
-    from your known_hosts or by simply deleting the `known_hosts` file altogether if you don't have
+    from your `known_hosts` or by simply deleting the `known_hosts` file altogether if you don't have
     important other entries in it.
 
 ## Batch Jobs
 
@@ -335,7 +335,7 @@ marie@login$ srun ./my_application <args for master tasks> : ./my_application <a
 ```
 
 Heterogeneous jobs can also be defined in job files. There, it is required to separate multiple
-components by a line containing the directive `"#SBATCH hetjob`.
+components by a line containing the directive `#SBATCH hetjob`.
 
 ```bash
 #!/bin/bash
@@ -374,7 +374,7 @@ On the command line, use `squeue` to watch the scheduling queue.
 
 Invoke `squeue --me` to list only your jobs.
 
-In it's last column, the `squeue` command will also tell why a job is not running.
+In its last column, the `squeue` command will also tell why a job is not running.
 
 Possible reasons and their detailed descriptions are listed in the following table.
 More information about job parameters can be obtained with `scontrol -d show job <jobid>`.
diff --git a/doc.zih.tu-dresden.de/docs/software/compilers.md b/doc.zih.tu-dresden.de/docs/software/compilers.md
index d536abba9b4813fdefe32911e300b60e9bc67368..1ee00ce46b589b4b65cba4ded2af46a1b8e6b9a5 100644
--- a/doc.zih.tu-dresden.de/docs/software/compilers.md
+++ b/doc.zih.tu-dresden.de/docs/software/compilers.md
@@ -2,16 +2,17 @@
 
 The following compilers are available on the ZIH system:
 
-| | GNU Compiler Collection | Intel Compiler | PGI Compiler (Nvidia HPC SDK) |
-|----------------------|-----------|------------|-------------|
-| Further information | [GCC website](https://gcc.gnu.org/) | [C/C++](https://software.intel.com/en-us/c-compilers), [Fortran](https://software.intel.com/en-us/fortran-compilers) | [PGI website](https://www.pgroup.com) |
-| Module name | GNU | intel | PGI |
-| C Compiler | `gcc` | `icc` | `pgcc` |
-| C++ Compiler | `g++` | `icpc` | `pgc++` |
-| Fortran Compiler | `gfortran` | `ifort` | `pgfortran` |
+| | GNU Compiler Collection | Clang Compiler | Intel Compiler | PGI Compiler (Nvidia HPC SDK) |
+|----------------------|-------------------------|----------------|----------------|-------------------------------|
+| Further information | [GCC website](https://gcc.gnu.org/) | [Clang documentation](https://clang.llvm.org/docs/UsersManual.html) | [C/C++](https://software.intel.com/en-us/c-compilers), [Fortran](https://software.intel.com/en-us/fortran-compilers) | [PGI website](https://www.pgroup.com) |
+| Module name | GCC | Clang | iccifort | PGI |
+| C Compiler | `gcc` | `clang` | `icc` | `pgcc` |
+| C++ Compiler | `g++` | `clang++` | `icpc` | `pgc++` |
+| Fortran Compiler | `gfortran` | - | `ifort` | `pgfortran` |
 
 For an overview of the installed compiler versions, please use `module spider <module name>` on
 the ZIH systems.
+Additionally, you can use `module av` and look below "compilers" to see all available compiler modules.
 
 All compilers support various language standards, at least up to ISO C11, ISO C++ 2014, and
 Fortran 2003. Please check the man pages to verify that your code can be compiled.
diff --git a/doc.zih.tu-dresden.de/docs/software/modules.md b/doc.zih.tu-dresden.de/docs/software/modules.md
index 74f67821cac0c8b030b06079f86e2514030fa5d6..9559fa3826bea23a7eff3eee20c91cc1be0f07e7 100644
--- a/doc.zih.tu-dresden.de/docs/software/modules.md
+++ b/doc.zih.tu-dresden.de/docs/software/modules.md
@@ -129,7 +129,7 @@ marie@compute$ module load modenv/ml
 
 ### modenv/scs5 (default)
 
 * SCS5 software
-* usually optimized for Intel processors (Partitions: `haswell`, `broadwell`, `gpu2`, `julia`)
+* usually optimized for Intel processors (partitions `haswell`, `broadwell`, `gpu2`, `julia`)
 
 ### modenv/ml
@@ -142,7 +142,7 @@ Thus the 'machine code' of other modenvs breaks).
 
 ### modenv/hiera
 
 * uses a hierarchical module load scheme
-* optimized software for AMD processors (Partitions: romeo, alpha)
+* optimized software for AMD processors (partitions `romeo` and `alpha`)
 
 ### modenv/classic
diff --git a/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md b/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md
index a26a8c6ee9595129b32ee56db2040e7cbb11ca7a..b604bf5398681458ac416336ea7c42a0b3a25b15 100644
--- a/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md
+++ b/doc.zih.tu-dresden.de/docs/software/mpi_usage_error_detection.md
@@ -79,8 +79,11 @@
 MUST aware of this knowledge. Overhead is drastically reduced with this switch.
 
 After running your application with MUST you will have its output in the working directory of your
 application. The output is named `MUST_Output.html`. Open this files in a browser to analyze the
-results. The HTML file is color coded: Entries in green represent notes and useful information.
-Entries in yellow represent warnings, and entries in red represent errors.
+results. The HTML file is color coded:
+
+- Entries in green represent notes and useful information
+- Entries in yellow represent warnings
+- Entries in red represent errors
 
 ## Further MPI Correctness Tools
diff --git a/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md b/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
index 026e194ee8e2f28be8e24eae4862cd358427792e..129b1d9dd053415617e77f3abad603c2d6b68809 100644
--- a/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
+++ b/doc.zih.tu-dresden.de/docs/software/python_virtual_environments.md
@@ -68,8 +68,8 @@ the environment as follows:
 
 ??? example
 
-    This is an example on partition `alpha`. The example creates a conda virtual environment, and
-    installs the package `torchvision` with conda.
+    This is an example on partition `alpha`. The example creates a Python virtual environment and
+    installs the package `torchvision` with pip.
 
     ```console
     marie@login$ srun --partition=alpha-interactive --nodes=1 --gres=gpu:1 --time=01:00:00 --pty bash
     marie@alpha$ ws_allocate -F scratch my_python_virtualenv 100 # use a workspace for the environment
diff --git a/doc.zih.tu-dresden.de/docs/software/tensorflow.md b/doc.zih.tu-dresden.de/docs/software/tensorflow.md
index df655fdb2dea0b2c8ab39f95b4544a261bb1c534..58b99bd1c302c0ed65619fc200602f2732f84df1 100644
--- a/doc.zih.tu-dresden.de/docs/software/tensorflow.md
+++ b/doc.zih.tu-dresden.de/docs/software/tensorflow.md
@@ -74,9 +74,9 @@ import TensorFlow:
     [...]
     marie@ml$ which python #check which python are you using
     /sw/installed/Python/3.7.2-GCCcore-8.2.0
-    marie@ml$ virtualenv --system-site-packages /scratch/ws/1/python_virtual_environment/env
+    marie@ml$ virtualenv --system-site-packages /scratch/ws/1/marie-python_virtual_environment/env
     [...]
-    marie@ml$ source /scratch/ws/1/python_virtual_environment/env/bin/activate
+    marie@ml$ source /scratch/ws/1/marie-python_virtual_environment/env/bin/activate
     marie@ml$ python -c "import tensorflow as tf; print(tf.__version__)"
     [...]
     2.3.1