diff --git a/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md b/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
index 02b78701bd8fa5b5eb3fbb7ed2de2cae9639042e..a1d2966509244d71d32c7bfa22d74c18b45be628 100644
--- a/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
+++ b/doc.zih.tu-dresden.de/docs/archive/install_jupyter.md
@@ -148,7 +148,7 @@ c.NotebookApp.allow_remote_access = True
 #SBATCH --time=02:30:00
 #SBATCH --mem=4000M
 #SBATCH -J "jupyter-notebook" # job-name
-#SBATCH -A p_marie
+#SBATCH -A p_number_crunch
 
 unset XDG_RUNTIME_DIR # might be required when interactive instead of sbatch to avoid 'Permission denied error'
 srun jupyter notebook
diff --git a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
index c4660744c8feca5681b94b2a48db23bccc0c5334..10f4dcd547cda2b872db67beef8221515755f671 100644
--- a/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
+++ b/doc.zih.tu-dresden.de/docs/contrib/content_rules.md
@@ -134,7 +134,7 @@ We follow this rules regarding prompts:
   an example invocation, perhaps with output, should be given with the normal `console` code
   block. See also [Code Block description below](#code-blocks-and-syntax-highlighting).
 * Using some magic, the prompt as well as the output is identified and will not be copied!
-* Stick to the [generic user name](#data-privacy-and-generic-user-name) `marie`.
+* Stick to the [generic user name](#data-privacy-and-generic-names) `marie`.
 
 ### Code Blocks and Syntax Highlighting
@@ -245,16 +245,17 @@ _Result_:
 
-### Data Privacy and Generic User Name
+### Data Privacy and Generic Names
 
-Where possible, replace login, project name and other private data with clearly arbitrary placeholders.
-E.g., use the generic login `marie` and the corresponding project name `p_marie`.
+Where possible, replace login, project name and other private data with clearly arbitrary
+placeholders. In particular, use the generic login `marie` and the project title `p_number_crunch`
+as placeholders.
 
 ```console
 marie@login$ ls -l
-drwxr-xr-x 3 marie p_marie 4096 Jan 24 2020 code
-drwxr-xr-x 3 marie p_marie 4096 Feb 12 2020 data
--rw-rw---- 1 marie p_marie 4096 Jan 24 2020 readme.md
+drwxr-xr-x 3 marie p_number_crunch 4096 Jan 24 2020 code
+drwxr-xr-x 3 marie p_number_crunch 4096 Feb 12 2020 data
+-rw-rw---- 1 marie p_number_crunch 4096 Jan 24 2020 readme.md
 ```
 
 ### Placeholders
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
index 73322cf3031a2550ddec1223546b3b393579b8b5..2ce7e0a16ee6edeaa4d966cb624932f97635d2db 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/intermediate_archive.md
@@ -20,7 +20,7 @@ Some more information:
 ## Access the Intermediate Archive
 
 For storing and restoring your data in/from the "Intermediate Archive" you can use the tool
-[Datamover](../data_transfer/datamover.md). To use the DataMover you have to login to ZIH systems.
+[Datamover](../data_transfer/datamover.md). To use the Datamover you have to log in to ZIH systems.
 
 ### Store Data
 
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/longterm_preservation.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/longterm_preservation.md
index 9a4c7e760282269792fbcb844935d37fd88f4bb3..de04504fdd68766f01c5b37887d7f4f03e45e4a3 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/longterm_preservation.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/longterm_preservation.md
@@ -74,13 +74,65 @@ Below are some examples:
 
 ## Where can I get more information about management of research data?
 
-Go to [http://www.forschungsdaten.org/en/](http://www.forschungsdaten.org/en/) to find more
-information about managing research data.
-
-## I want to store my research data at ZIH. How can I do that?
-
-You can use the following services for long-term preservation of research data:
-
-- [Long-term archive](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih)
-- [Long-term Archiving and Publication with OpARA (Open Access Repository and Archive)](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih#section-2-2)
-- [intermediate archive](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih#section-2-1)
+Please visit the wiki [forschungsdaten.org](https://www.forschungsdaten.org/en/) to learn more about
+all of the different aspects of research data management.
+
+For questions or individual consultations regarding research data management in general or any of
+its specific aspects, you can contact the
+[Service Center Research Data](https://tu-dresden.de/forschung-transfer/services-fuer-forschende/kontaktstelle-forschungsdaten?set_language=en)
+(Kontaktstelle Forschungsdaten) of TU Dresden.
+
+## I want to archive my research data at ZIH safely. How can I do that?
+
+ZIH offers two different services for archiving research data at TU Dresden. Both of them ensure
+high data safety by duplicating data internally at two separate locations and require some data
+preparation (e.g. packaging), but they serve different use cases:
+
+### Storing very infrequently used data during the course of the project
+
+The intermediate archive is a tape storage system that is easily accessible as a directory
+(`/archive/<HRSK-project>/` or `/archive/<login>/`) using the
+[export nodes](../data_transfer/export_nodes.md)
+and the
+[Datamover tools](https://doc.zih.tu-dresden.de/data_transfer/datamover/) to move your data.
+For detailed information, please visit the
+[ZIH intermediate archive documentation](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih#section-2-1).
+
+!!! note
+
+    Using the HRSK-project-related archive is preferable to the login-related archive, as it
+    enables assigning access rights and responsibility across multiple researchers, which matters
+    given the common staff turnover in research.
+
+Use of the intermediate archive is usually limited to the duration of the corresponding research
+project. Afterwards, the data must be tidied up, removed, and submitted to a long-term repository
+(see next section).
+
+The intermediate archive is the preferred service when you keep large, mostly unused data volumes
+during the course of your research project and want or need to free storage capacities, but are
+not yet able to identify the relevant datasets for long-term archival.
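+
+As a sketch of this workflow, a finished dataset could be packed into the project-related archive
+directory with the [Datamover tools](../data_transfer/datamover.md). The workspace path and the
+archive file name below are placeholders to adapt to your own project:
+
+```console
+# pack the workspace results into the project archive (placeholder paths)
+marie@login$ dttar -czf /archive/p_number_crunch/results.tgz /beegfs/global0/ws/marie-workdata/results
+```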
+
+If you are able to identify complete and final datasets, which you probably won't use actively
+anymore, then the repositories described in the next section may be the more appropriate choice.
+
+### Archiving data beyond the project lifetime, for 10 years and above
+
+According to good scientific practice (cf.
+[DFG guidelines, #17](https://www.dfg.de/download/pdf/foerderung/rechtliche_rahmenbedingungen/gute_wissenschaftliche_praxis/kodex_gwp.pdf))
+and the
+[TU Dresden research data guidelines](https://tu-dresden.de/tu-dresden/qualitaetsmanagement/ressourcen/dateien/wisprax/Leitlinien-fuer-den-Umgang-mit-Forschungsdaten-an-der-TU-Dresden.pdf),
+relevant research data needs to be archived for at least 10 years. The
+[OpARA service](https://opara.zih.tu-dresden.de/xmlui/) (Open Access Repository and Archive) is the
+joint research data repository service for Saxon universities that addresses this requirement.
+
+Data can be uploaded and, to comply with the demands of long-term data understanding, additional
+metadata and a description must be added. Optionally, large datasets may be imported beforehand. In
+this case, please contact the
+[TU Dresden Service Desk](mailto:servicedesk@tu-dresden.de?subject=OpARA:%20Data%20Import).
+Optionally, data can also be **published** by OpARA. To ensure data quality, data submissions
+undergo a review process.
+
+Beyond OpARA, it is also recommended to use discipline-specific data repositories for data
+publications. Usually, those are well known in a scientific community and offer better-fitting
+options for data description and classification. Please visit [re3data.org](https://re3data.org)
+to look up a suitable one for your discipline.
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md b/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md
index 0bd3fbe88e6a7957232a04a98c2c5eeb33a245ad..0891ca2a66f49b5e2f5c243fe4e86cdf07e1e2e9 100644
--- a/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/datamover.md
@@ -1,7 +1,7 @@
 # Datamover - Data Transfer Inside ZIH Systems
 
-With the **datamover**, we provide a special data transfer machine for transferring data with best
-transfer speed between the filesystems of ZIH systems. The datamover machine is not accessible
+With the **Datamover**, we provide a special data transfer machine for transferring data with the
+best transfer speed between the filesystems of ZIH systems. The Datamover machine is not accessible
 through SSH as it is dedicated to data transfers. To move or copy files from one filesystem to
 another filesystem, you have to use the following commands:
 
@@ -45,7 +45,7 @@ To identify the mount points of the different filesystems on the data transfer m
 !!! example "Copying data from `/beegfs/global0` to `/projects` filesystem."
 
     ``` console
-    marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /projects/p_marie/.
+    marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /projects/p_number_crunch/.
    ```
 
 !!! example "Moving data from `/beegfs/global0` to `/warm_archive` filesystem."
@@ -57,7 +57,7 @@ To identify the mount points of the different filesystems on the data transfer m
 !!! example "Archive data from `/beegfs/global0` to `/archiv` filesystem."
 
    ``` console
-    marie@login$ dttar -czf /archiv/p_marie/results.tgz /beegfs/global0/ws/marie-workdata/results
+    marie@login$ dttar -czf /archiv/p_number_crunch/results.tgz /beegfs/global0/ws/marie-workdata/results
    ```
 
 !!! warning
@@ -66,7 +66,7 @@ To identify the mount points of the different filesystems on the data transfer m
 !!! note
 
     The [warm archive](../data_lifecycle/warm_archive.md) and the `projects` filesystem are not
     writable from within batch jobs.
-    However, you can store the data in the `warm_archive` using the datamover.
+    However, you can store the data in the `warm_archive` using the Datamover.
 
 ## Transferring Files Between ZIH Systems and Group Drive
diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
index a8af87cc55814ca0afe5b30193589cf1905ce356..6e8a1bf1cc12e36e4aa15bd46b9eaf84e24171bc 100644
--- a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
@@ -14,9 +14,9 @@ copy data to/from ZIH systems. Please follow the link to the documentation on
 
 ## Data Transfer Inside ZIH Systems: Datamover
 
-The recommended way for data transfer inside ZIH Systems is the **datamover**. It is a special
+The recommended way for data transfer inside ZIH Systems is the **Datamover**. It is a special
 data transfer machine that provides the best transfer speed. To load, move, copy etc. files from
 one filesystem to another filesystem, you have to use commands prefixed with `dt`: `dtcp`,
 `dtwget`, `dtmv`, `dtrm`, `dtrsync`, `dttar`, `dtls`. These commands submit a job to the data
 transfer machines that execute the selected command. Please refer to the detailed documentation
 regarding the
-[datamover](datamover.md).
+[Datamover](datamover.md).
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
index f05e5a3dd78795aca1541a7bf0cfaa02904ff545..f9aca1755f7f4db883d74b386791b88ffb0fdf28 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/checkpoint_restart.md
@@ -63,9 +63,13 @@ To use it, first add a `dmtcp_launch` before your application call in your batch
 of MPI applications, you have to add the parameters `--ib --rm` and put it between `srun` and
 your application call, e.g.:
 
-```bash
-srun dmtcp_launch --ib --rm ./my-mpi-application
-```
+???+ example "my_script.sbatch"
+
+    ```bash
+    [...]
+
+    srun dmtcp_launch --ib --rm ./my-mpi-application
+    ```
 
 !!! note
@@ -79,7 +83,7 @@ Then just substitute your usual `sbatch` call with `dmtcp_sbatch` and be sure to
 and `-i` parameters (don't forget you need to have loaded the `dmtcp` module).
 
 ```console
-marie@login$ dmtcp_sbatch --time 2-00:00:00 --interval 28000,800 my_batchfile.sh
+marie@login$ dmtcp_sbatch --time 2-00:00:00 --interval 28000,800 my_script.sbatch
 ```
 
 With `-t, --time` you set the total runtime of your calculations. This will be replaced in the batch
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
index b6ec206cf1950f416e81318daab0c9e0e88ba45a..ebfd52972ac785b851a0c02758904a68dd09af8f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_examples.md
@@ -109,7 +109,7 @@ for `sbatch/srun` in this case is `--gres=gpu:[NUM_PER_NODE]` (where `NUM_PER_NO
     #SBATCH --cpus-per-task=6      # use 6 threads per task
     #SBATCH --gres=gpu:1           # use 1 GPU per node (i.e. use one GPU per task)
     #SBATCH --time=01:00:00        # run for 1 hour
-    #SBATCH --account=p_marie            # account CPU time to project p_marie
+    #SBATCH --account=p_number_crunch    # account CPU time to project p_number_crunch
 
     srun ./your/cuda/application   # start your application (probably requires MPI to use both nodes)
     ```
diff --git a/doc.zih.tu-dresden.de/docs/software/building_software.md b/doc.zih.tu-dresden.de/docs/software/building_software.md
index c83932a16c1c0227cb160d4853cd1815626fc404..73952b06efde809b7e91e936be0fbf9b240f88a8 100644
--- a/doc.zih.tu-dresden.de/docs/software/building_software.md
+++ b/doc.zih.tu-dresden.de/docs/software/building_software.md
@@ -17,16 +17,16 @@ For instance, when using CMake and keeping your source in `/projects`, you could
 ```console
 # save path to your source directory:
-marie@login$ export SRCDIR=/projects/p_marie/mysource
+marie@login$ export SRCDIR=/projects/p_number_crunch/mysource
 
 # create a build directory in /scratch:
-marie@login$ mkdir /scratch/p_marie/mysoftware_build
+marie@login$ mkdir /scratch/p_number_crunch/mysoftware_build
 
 # change to build directory within /scratch:
-marie@login$ cd /scratch/p_marie/mysoftware_build
+marie@login$ cd /scratch/p_number_crunch/mysoftware_build
 
 # create Makefiles:
-marie@login$ cmake -DCMAKE_INSTALL_PREFIX=/projects/p_marie/mysoftware $SRCDIR
+marie@login$ cmake -DCMAKE_INSTALL_PREFIX=/projects/p_number_crunch/mysoftware $SRCDIR
 
 # build in a job:
 marie@login$ srun --mem-per-cpu=1500 --cpus-per-task=12 --pty make -j 12
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics.md b/doc.zih.tu-dresden.de/docs/software/data_analytics.md
index 036a1c7a454faf84b8352f5fb79dbfc09343cb89..c3cb4afe1be3d613a915e42f1db1020919ecfa3c 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics.md
@@ -29,7 +29,7 @@ can be installed individually by each user. If possible, the use of recommended
 (e.g. for Python). Likewise, software can be used within [containers](containers.md).
 
 For the transfer of larger amounts of data into and within the system, the
-[export nodes and datamover](../data_transfer/overview.md) should be used.
+[export nodes and Datamover](../data_transfer/overview.md) should be used.
 Data is stored in the [workspaces](../data_lifecycle/workspaces.md).
 Software modules or virtual environments can also be installed in workspaces to enable
 collaborative work even within larger groups.
diff --git a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
index a7d2781669fc909e0628c6518825542cf8f7ced8..cf8c1b559f4f496a729388a1e1f4353cdcd14733 100644
--- a/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
+++ b/doc.zih.tu-dresden.de/docs/software/data_analytics_with_python.md
@@ -219,7 +219,7 @@ from dask_jobqueue import SLURMCluster
 
 cluster = SLURMCluster(queue='alpha',
                        cores=8,
                        processes=2,
-                       project='p_marie',
+                       project='p_number_crunch',
                        memory="8GB",
                        walltime="00:30:00")
@@ -242,7 +242,7 @@ from dask import delayed
 
 cluster = SLURMCluster(queue='alpha',
                        cores=8,
                        processes=2,
-                       project='p_marie',
+                       project='p_number_crunch',
                        memory="80GB",
                        walltime="00:30:00",
                        extra=['--resources gpu=1'])
@@ -294,7 +294,7 @@ for the Monte-Carlo estimation of Pi.
 #create a Slurm cluster, please specify your project
 
-cluster = SLURMCluster(queue='alpha', cores=2, project='p_marie', memory="8GB", walltime="00:30:00", extra=['--resources gpu=1'], scheduler_options={"dashboard_address": f":{portdash}"})
+cluster = SLURMCluster(queue='alpha', cores=2, project='p_number_crunch', memory="8GB", walltime="00:30:00", extra=['--resources gpu=1'], scheduler_options={"dashboard_address": f":{portdash}"})
 
 #submit the job to the scheduler with the number of nodes (here 2) requested:
diff --git a/doc.zih.tu-dresden.de/docs/software/fem_software.md b/doc.zih.tu-dresden.de/docs/software/fem_software.md
index 3f9bf79d54d36711560054101536c82dfbbfe000..8b8eb4cfe10c4476e48c4b30ac7f16b83589a38d 100644
--- a/doc.zih.tu-dresden.de/docs/software/fem_software.md
+++ b/doc.zih.tu-dresden.de/docs/software/fem_software.md
@@ -59,7 +59,7 @@ Slurm or [writing job files](../jobs_and_resources/slurm.md#job-files).
     #SBATCH --job-name=yyyy              # give a name, whatever you want
     #SBATCH --mail-type=END,FAIL         # send email when the job finished or failed
     #SBATCH --mail-user=<name>@mailbox.tu-dresden.de  # set your email
-    #SBATCH --account=p_marie            # charge compute time to project p_marie
+    #SBATCH --account=p_number_crunch    # charge compute time to project p_number_crunch
 
     # Abaqus has its own MPI
diff --git a/doc.zih.tu-dresden.de/docs/software/machine_learning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
index 1f40e6199e88f6aa4fd68037a0f4b32113001913..e293b007a9c07fbaf41ba3ec7ce25f29024f44d7 100644
--- a/doc.zih.tu-dresden.de/docs/software/machine_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -155,7 +155,7 @@ The following HPC related software is installed on all nodes:
 There are many different datasets designed for research purposes. If you would like to download some
 of them, keep in mind that many machine learning libraries have direct access to public datasets
 without downloading it, e.g. [TensorFlow Datasets](https://www.tensorflow.org/datasets). If you
-still need to download some datasets use [datamover](../data_transfer/datamover.md) machine.
+still need to download some datasets, use the [Datamover](../data_transfer/datamover.md) machine.
 
 ### The ImageNet Dataset
diff --git a/doc.zih.tu-dresden.de/docs/software/papi.md b/doc.zih.tu-dresden.de/docs/software/papi.md
index 7460e3deef48bdf991e1b6fda36332cf0fc149b0..d8108bba3048da33661e0dd320a2807a0dd001aa 100644
--- a/doc.zih.tu-dresden.de/docs/software/papi.md
+++ b/doc.zih.tu-dresden.de/docs/software/papi.md
@@ -105,11 +105,11 @@ multiple events, please check which events can be measured concurrently using th
 The PAPI tools must be run on the compute node, using an interactive shell or job.
 
 !!! example "Example: Determine the events on the partition `romeo` from a login node"
-    Let us assume, that you are in project `p_marie`. Then, use the following commands:
+    Let us assume that you are in project `p_number_crunch`. Then, use the following commands:
 
     ```console
     marie@login$ module load PAPI
-    marie@login$ salloc --account=p_marie --partition=romeo
+    marie@login$ salloc --account=p_number_crunch --partition=romeo
     [...]
     marie@compute$ srun papi_avail
     marie@compute$ srun papi_native_avail
@@ -121,12 +121,12 @@ Instrument your application with either the high-level or low-level API. Load th
 compile your application against the PAPI library.
 
 !!! example
-    Assuming that you are in project `p_marie`, use the following commands:
+    Assuming that you are in project `p_number_crunch`, use the following commands:
 
     ```console
     marie@login$ module load PAPI
     marie@login$ gcc app.c -o app -lpapi
-    marie@login$ salloc --account=p_marie --partition=romeo
+    marie@login$ salloc --account=p_number_crunch --partition=romeo
     marie@compute$ srun ./app
     [...]
     # Exit with Ctrl+D
diff --git a/doc.zih.tu-dresden.de/docs/software/private_modules.md b/doc.zih.tu-dresden.de/docs/software/private_modules.md
index 6dd2d3d0498d78ca188c9af1af272fa3e6e6537d..00982700ec5bc35fe757660897cc1631453a820f 100644
--- a/doc.zih.tu-dresden.de/docs/software/private_modules.md
+++ b/doc.zih.tu-dresden.de/docs/software/private_modules.md
@@ -27,12 +27,12 @@ marie@compute$ cd privatemodules/<sw_name>
 ```
 
 Project private module files for software that can be used by all members of your group should be
-located in your global projects directory, e.g., `/projects/p_marie/privatemodules`. Thus, create
+located in your global projects directory, e.g., `/projects/p_number_crunch/privatemodules`. Thus, create
 this directory:
 
 ```console
-marie@compute$ mkdir --verbose --parents /projects/p_marie/privatemodules/<sw_name>
-marie@compute$ cd /projects/p_marie/privatemodules/<sw_name>
+marie@compute$ mkdir --verbose --parents /projects/p_number_crunch/privatemodules/<sw_name>
+marie@compute$ cd /projects/p_number_crunch/privatemodules/<sw_name>
 ```
 
 !!! note
@@ -110,7 +110,7 @@ marie@login$ module use $HOME/privatemodules
 
 for your private module files and
 
 ```console
-marie@login$ module use /projects/p_marie/privatemodules
+marie@login$ module use /projects/p_number_crunch/privatemodules
 ```
 
 for group private module files, respectively.
diff --git a/doc.zih.tu-dresden.de/util/check-spelling.sh b/doc.zih.tu-dresden.de/util/check-spelling.sh
index f6b3fca83d71283a6430f260f5a75bdbca3a7e2a..d97f93e20df73b9ea47e501e7196f605f0cacd48 100755
--- a/doc.zih.tu-dresden.de/util/check-spelling.sh
+++ b/doc.zih.tu-dresden.de/util/check-spelling.sh
@@ -7,7 +7,7 @@ basedir=`dirname "$scriptpath"`
 basedir=`dirname "$basedir"`
 wordlistfile=$(realpath $basedir/wordlist.aspell)
 branch="origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME:-preview}"
-files_to_skip=(doc.zih.tu-dresden.de/docs/accessibility.md doc.zih.tu-dresden.de/docs/data_protection_declaration.md doc.zih.tu-dresden.de/docs/legal_notice.md)
+files_to_skip=(doc.zih.tu-dresden.de/docs/accessibility.md doc.zih.tu-dresden.de/docs/data_protection_declaration.md doc.zih.tu-dresden.de/docs/legal_notice.md doc.zih.tu-dresden.de/docs/access/key_fingerprints.md)
 aspellmode=
 if aspell dump modes | grep -q markdown; then
   aspellmode="--mode=markdown"
diff --git a/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh b/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh
index b2f8b3478d7d8aaa2247b392c97dc09d09348743..cacde0d9ee84f903a55d3109dcd330d3e43184ad 100755
--- a/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh
+++ b/doc.zih.tu-dresden.de/util/grep-forbidden-patterns.sh
@@ -46,9 +46,9 @@ i ^[ |]*|$
 Avoid spaces at end of lines.
 doc.zih.tu-dresden.de/docs/accessibility.md
 i [[:space:]]$
 
-When referencing projects, please use p_marie for consistency.
+When referencing projects, please use p_number_crunch for consistency.
 
-i \<p_ p_marie
+i \<p_ p_number_crunch
 Avoid \`home\`. Use home without backticks instead.
 i \`home\`
diff --git a/doc.zih.tu-dresden.de/wordlist.aspell b/doc.zih.tu-dresden.de/wordlist.aspell
index 8a5013fe00988ccdc5b1500520d4151b71af6527..a808318d64a38981956ed1ac5fa5a7d1c05e703d 100644
--- a/doc.zih.tu-dresden.de/wordlist.aspell
+++ b/doc.zih.tu-dresden.de/wordlist.aspell
@@ -51,7 +51,7 @@ Dask
 dataframes
 DataFrames
 Dataheap
-datamover
+Datamover
 DataParallel
 dataset
 Dataset