diff --git a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md index e221188dcd1c33ef66815d38bffd4a8c5866f48e..84a400f4906832a6b96deae582d0604b9e63a3fc 100644 --- a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md +++ b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md @@ -105,7 +105,7 @@ Show contents of the previously created file, for example, cat .beegfs_11054579 ``` -Note: don't forget to go over to your `home` directory where the file located +Note: don't forget to go over to your home directory where the file is located Example output: diff --git a/doc.zih.tu-dresden.de/docs/archive/system_venus.md b/doc.zih.tu-dresden.de/docs/archive/system_venus.md index 56acf9b47081726c9662150f638ff430e099020c..d641e3d0380dfe93f00bc4e5e6d67bc2cacf18f1 100644 --- a/doc.zih.tu-dresden.de/docs/archive/system_venus.md +++ b/doc.zih.tu-dresden.de/docs/archive/system_venus.md @@ -21,7 +21,7 @@ hyperthreads. ### Filesystems -Venus uses the same `home` filesystem as all our other HPC installations. +Venus uses the same home filesystem as all our other HPC installations. For computations, please use `/scratch`. ## Usage diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md index cae944e121ccc0692a4e71ad1950a4739dda540a..6180e5db831c8faf69a8752247ad5b8ee5ef6313 100644 --- a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md +++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md @@ -184,7 +184,7 @@ well as a workspace that already contains data. ## Linking Workspaces in HOME It might be valuable to have links to personal workspaces within a certain directory, e.g., your -`home` directory. The command `ws_register DIR` will create and manage links to all personal +home directory. The command `ws_register DIR` will create and manage links to all personal workspaces within in the directory `DIR`. 
Calling this command will do the following: - The directory `DIR` will be created if necessary. diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md index e09dc2a62b53863b78a03a777932c6c99bf1600b..fecea7ad7a9db2d5395bad6963baba73b4314248 100644 --- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md +++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md @@ -363,13 +363,13 @@ First you have to start your data transfer job, which for example transfers your workspace to another. ```console -marie@login: export DATAMOVER_JOB=$(dtcp /scratch/ws/1/marie-source/input.txt /beegfs/ws/1/marie-target/. | awk '{print $4}') +marie@login$ export DATAMOVER_JOB=$(dtcp /scratch/ws/1/marie-source/input.txt /beegfs/ws/1/marie-target/. | awk '{print $4}') ``` Now you can refer to the job id of the Datamover jobs from your work load jobs. ```console -marie@login: srun --dependency afterok:${DATAMOVER_JOB} ls /beegfs/ws/1/marie-target +marie@login$ srun --dependency afterok:${DATAMOVER_JOB} ls /beegfs/ws/1/marie-target srun: job 23872871 queued and waiting for resources srun: job 23872871 has been allocated resources input.txt diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md index df7fc8b56a8a015b5a13a8c871b5163b2c1d473d..4bd9634db24b8ba81a02368a4f51c0b46004885f 100644 --- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md +++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md @@ -73,8 +73,8 @@ Spark or `$FLINK_ROOT_DIR/conf` for Flink: marie@compute$ source framework-configure.sh flink $FLINK_ROOT_DIR/conf ``` -This places the configuration in a directory called `cluster-conf-<JOB_ID>` in your `home` -directory, where `<JOB_ID>` stands for the id of the Slurm job. 
After that, you can start in +the usual way: === "Spark" @@ -275,15 +275,12 @@ use as a starting point for convenience: [SparkExample.ipynb](misc/SparkExample. ## FAQ -Q: Command `source framework-configure.sh hadoop -$HADOOP_ROOT_DIR/etc/hadoop` gives the output: +Q: Command `source framework-configure.sh hadoop $HADOOP_ROOT_DIR/etc/hadoop` gives the output: `bash: framework-configure.sh: No such file or directory`. How can this be resolved? -A: Please try to re-submit or re-run the job and if that doesn't help -re-login to the ZIH system. +A: Please try to re-submit or re-run the job, and if that doesn't help, re-login to the ZIH system. -Q: There are a lot of errors and warnings during the set up of the -session +Q: There are a lot of errors and warnings during the setup of the session. A: Please check the work capability on a simple example as shown in this documentation. diff --git a/doc.zih.tu-dresden.de/docs/software/debuggers.md b/doc.zih.tu-dresden.de/docs/software/debuggers.md index 0d4bda97f61fe6453d6027406ff88145c4204cfb..d57ceab704a534302ff24407e2c20bdce3dbd833 100644 --- a/doc.zih.tu-dresden.de/docs/software/debuggers.md +++ b/doc.zih.tu-dresden.de/docs/software/debuggers.md @@ -22,7 +22,7 @@ errors. | Licenses at ZIH | Free | 1024 (max. number of processes/threads) | | Official documentation | [GDB website](https://www.gnu.org/software/gdb/) | [Arm DDT website](https://developer.arm.com/tools-and-software/server-and-hpc/debug-and-profile/arm-forge/arm-ddt) | -## General Advices +## General Advice - You need to compile your code with the flag `-g` to enable debugging. 
This tells the compiler to include information about diff --git a/doc.zih.tu-dresden.de/docs/software/fem_software.md b/doc.zih.tu-dresden.de/docs/software/fem_software.md index 160aeded633f50e9abfdfae6d74a7627257ca565..d8bffb0a75eb7a13649e68baa3bda9407f65a9c4 100644 --- a/doc.zih.tu-dresden.de/docs/software/fem_software.md +++ b/doc.zih.tu-dresden.de/docs/software/fem_software.md @@ -162,7 +162,7 @@ parameter (for batch mode), `-F` for your project file, and can then either add ### Running Workbench in Parallel Unfortunately, the number of CPU cores you wish to use cannot simply be given as a command line -parameter to your `runwb2` call. Instead, you have to enter it into an XML file in your `home` +parameter to your `runwb2` call. Instead, you have to enter it into an XML file in your home directory. This setting will then be **used for all** your `runwb2` jobs. While it is also possible to edit this setting via the Mechanical GUI, experience shows that this can be problematic via X11-forwarding and we only managed to use the GUI properly via [DCV](virtual_desktops.md), so we diff --git a/doc.zih.tu-dresden.de/docs/software/scs5_software.md b/doc.zih.tu-dresden.de/docs/software/scs5_software.md index b017af11ab9d3beda0c7c88436d29d716db9ac39..134d81204ac7d612141207a031b9238c458f7b04 100644 --- a/doc.zih.tu-dresden.de/docs/software/scs5_software.md +++ b/doc.zih.tu-dresden.de/docs/software/scs5_software.md @@ -35,7 +35,7 @@ ml av There is a special module that is always loaded (sticky) called **modenv**. It determines the module environment you can see. 
-| Module Environemnt | Description | Status | +| Module Environment | Description | Status | |--------------------|---------------------------------------------|---------| | `modenv/scs5` | SCS5 software | default | | `modenv/ml` | Software for data analytics (partition ml) | | @@ -93,7 +93,7 @@ For instance, the "intel" toolchain has the following structure: | Toolchain | `intel` | |--------------|------------| | Compilers | icc, ifort | -| Mpi library | impi | +| MPI library | impi | | Math. library | imkl | On the other hand, the "foss" toolchain looks like this: @@ -101,7 +101,7 @@ On the other hand, the "foss" toolchain looks like this: | Toolchain | `foss` | |----------------|---------------------| | Compilers | GCC (gcc, gfortran) | -| Mpi library | OpenMPI | +| MPI library | OpenMPI | | Math. libraries | OpenBLAS, FFTW | If you want to combine the Intel compilers and MKL with OpenMPI, you'd have to use the "iomkl" @@ -110,7 +110,7 @@ toolchain: | Toolchain | `iomkl` | |--------------|------------| | Compilers | icc, ifort | -| Mpi library | OpenMPI | +| MPI library | OpenMPI | | Math library | imkl | There are also subtoolchains that skip a layer or two, e.g. "iccifort" only consists of the diff --git a/doc.zih.tu-dresden.de/docs/software/visualization.md b/doc.zih.tu-dresden.de/docs/software/visualization.md index 344ef59e9d9158001a2e85682ccaa7d02eb5e3b9..8116af22e79073237c10dfb113cd0910af824455 100644 --- a/doc.zih.tu-dresden.de/docs/software/visualization.md +++ b/doc.zih.tu-dresden.de/docs/software/visualization.md @@ -166,7 +166,7 @@ processes. If the default port 11111 is already in use, an alternative port can be specified via `-sp=port`. *Once the resources are allocated, the pvserver is started in parallel and connection information -are outputed.* +is output.* This contains the node name which your job and server runs on. 
However, since the node names of the cluster are not present in the public domain name system (only cluster-internally), you cannot just @@ -211,7 +211,7 @@ filesystems. #### Caveats -Connecting to the compute nodes will only work when you are **inside the TUD campus network**, +Connecting to the compute nodes will only work when you are **inside the TU Dresden campus network**, because otherwise, the private networks 172.24.\* will not be routed. That's why you either need to use [VPN](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/zugang_datennetz/vpn), or, when coming via the ZIH login gateway (`login1.zih.tu-dresden.de`), use an SSH tunnel. For the diff --git a/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh b/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh index c6467c026f68a063adf6944c1b8b68849dcb36a0..fe4138f970cf68fff0c54f034bed92033ad11f4b 100755 --- a/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh +++ b/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh @@ -54,7 +54,7 @@ Internal links should not contain \"/#\". i (.*/#.*) (http When referencing partitions, put keyword \"partition\" in front of partition name, e. g. \"partition ml\", not \"ml partition\". doc.zih.tu-dresden.de/docs/contrib/content_rules.md -i \(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\|dcv\)-\?\(interactive\)\?[^a-z]*partition +i \(alpha\|ml\|haswell\|romeo\|gpu\|smp\|julia\|hpdlf\|scs5\|dcv\)-\?\(interactive\)\?[^a-z|]*partition Give hints in the link text. Words such as \"here\" or \"this link\" are meaningless. 
doc.zih.tu-dresden.de/docs/contrib/content_rules.md i \[\s\?\(documentation\|here\|more info\|\(this \)\?\(link\|page\|subsection\)\|slides\?\|manpage\)\s\?\] diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell index 5e05a10aa5d8528189070fcd6b3cf99721ace069..109b5a04b0d3b401dd4fcd5b1c7910fc368100f3 100644 --- a/doc.zih.tu-dresden.de/wordlist.aspell +++ b/doc.zih.tu-dresden.de/wordlist.aspell @@ -1,4 +1,4 @@ -personal_ws-1.1 en 406 +personal_ws-1.1 en 423 Abaqus Addon Addons @@ -54,6 +54,9 @@ Dataheap datamover DataParallel dataset +datasets +Dataset +Datasets DCV ddl DDP @@ -200,9 +203,10 @@ modenv modenvs modulefile Montecito +mortem +Mortem mountpoint mpi -Mpi mpicc mpiCC mpicxx @@ -314,6 +318,7 @@ romeo RSA RSS RStudio +rsync Rsync runnable runtime @@ -335,11 +340,13 @@ scontrol scp scs SDK +sftp SFTP SGEMM SGI SHA SHMEM +situ SLES Slurm SLURMCluster