Commit 88affd44 authored by Jan Frenzel
Merge branch 'preview' into cil

parents 3864f319 62e5acab
Related merge requests: !575 Automated merge from preview to main, !535 Use plugin to validate internal URLs
Showing 110 additions and 53 deletions.

@@ -148,7 +148,7 @@ c.NotebookApp.allow_remote_access = True
 #SBATCH --time=02:30:00
 #SBATCH --mem=4000M
 #SBATCH -J "jupyter-notebook" # job-name
-#SBATCH -A p_marie
+#SBATCH -A p_number_crunch
 unset XDG_RUNTIME_DIR # might be required when interactive instead of sbatch to avoid 'Permission denied error'
 srun jupyter notebook

@@ -134,7 +134,7 @@ We follow this rules regarding prompts:
 an example invocation, perhaps with output, should be given with the normal `console` code block.
 See also [Code Block description below](#code-blocks-and-syntax-highlighting).
 * Using some magic, the prompt as well as the output is identified and will not be copied!
-* Stick to the [generic user name](#data-privacy-and-generic-user-name) `marie`.
+* Stick to the [generic user name](#data-privacy-and-generic-names) `marie`.
 ### Code Blocks and Syntax Highlighting

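To illustrate the prompt convention discussed in the hunk above, a minimal `console` block of this kind could look as follows (command and output are purely illustrative):

```console
marie@login$ echo "Hello from the login node"
Hello from the login node
```

Here the `marie@login$` prompt and the output line are recognized and excluded from copying, while the command itself remains copyable.
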
@@ -245,16 +245,17 @@ _Result_:
 ![lines](misc/highlight_lines.png)
-### Data Privacy and Generic User Name
+### Data Privacy and Generic Names
-Where possible, replace login, project name and other private data with clearly arbitrary placeholders.
-E.g., use the generic login `marie` and the corresponding project name `p_marie`.
+Where possible, replace login, project name and other private data with clearly arbitrary
+placeholders. In particular, use the generic login `marie` and the project title `p_number_crunch`
+as placeholders.
 ```console
 marie@login$ ls -l
-drwxr-xr-x 3 marie p_marie 4096 Jan 24 2020 code
-drwxr-xr-x 3 marie p_marie 4096 Feb 12 2020 data
--rw-rw---- 1 marie p_marie 4096 Jan 24 2020 readme.md
+drwxr-xr-x 3 marie p_number_crunch 4096 Jan 24 2020 code
+drwxr-xr-x 3 marie p_number_crunch 4096 Feb 12 2020 data
+-rw-rw---- 1 marie p_number_crunch 4096 Jan 24 2020 readme.md
 ```
 ### Placeholders

@@ -20,7 +20,7 @@ Some more information:
 ## Access the Intermediate Archive
 For storing and restoring your data in/from the "Intermediate Archive" you can use the tool
-[Datamover](../data_transfer/datamover.md). To use the DataMover you have to login to ZIH systems.
+[Datamover](../data_transfer/datamover.md). To use the Datamover you have to login to ZIH systems.
 ### Store Data

@@ -74,13 +74,65 @@ Below are some examples:
 ## Where can I get more information about management of research data?
-Go to [http://www.forschungsdaten.org/en/](http://www.forschungsdaten.org/en/) to find more
-information about managing research data.
-## I want to store my research data at ZIH. How can I do that?
-You can use the following services for long-term preservation of research data:
-- [Long-term archive](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih)
-- [Long-term Archiving and Publication with OpARA (Open Access Repository and Archive)](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih#section-2-2)
-- [intermediate archive](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih#section-2-1)
+Please visit the wiki [forschungsdaten.org](https://www.forschungsdaten.org/en/) to learn more about
+all of the different aspects of research data management.
+For questions or individual consultations regarding research data management in general or any of
+its certain aspects, you can contact the
+[Service Center Research Data](https://tu-dresden.de/forschung-transfer/services-fuer-forschende/kontaktstelle-forschungsdaten?set_language=en)
+(Kontaktstelle Forschungsdaten) of TU Dresden.
+## I want to archive my research data at ZIH safely. How can I do that?
+For TU Dresden there exist two different services at ZIH for archiving research data. Both of
+them ensure high data safety by duplicating data internally at two separate locations and
+require some data preparation (e.g. packaging), but serve different use cases:
+### Storing very infrequently used data during the course of the project
+The intermediate archive is a tape storage easily accessible as a directory
+(`/archive/<HRSK-project>/` or `/archive/<login>/`) using the
+[export nodes](../data_transfer/export_nodes.md)
+and
+[Datamover tools](https://doc.zih.tu-dresden.de/data_transfer/datamover/) to move your data to.
+For detailed information please visit the
+[ZIH intermediate archive documentation](https://tu-dresden.de/zih/dienste/service-katalog/arbeitsumgebung/backup_archiv/archivierung_am_zih#section-2-1).
+!!! note
+    The usage of the HRSK-project-related archive is preferable to the login-related archive, as
+    this enables assigning access rights and responsibility across multiple researchers, due to the
+    common staff turnover in research.
+
+    The use of the intermediate archive usually is limited by the end of the corresponding
+    research project. Afterwards data is required to be removed, tidied up and submitted to a
+    long-term repository (see next section).
+The intermediate archive is the preferred service when you keep large, mostly unused data volumes
+during the course of your research project; if you want or need to free storage capacities, but
+you are still not able to define certain or relevant datasets for long-term archival.
+If you are able to identify complete and final datasets, which you probably won't use actively
+anymore, then repositories as described in the next section may be the more appropriate selection.
+### Archiving data beyond the project lifetime, for 10 years and above
+According to good scientific practice (cf.
+[DFG guidelines, #17](https://www.dfg.de/download/pdf/foerderung/rechtliche_rahmenbedingungen/gute_wissenschaftliche_praxis/kodex_gwp.pdf))
+and
+[TU Dresden research data guidelines](https://tu-dresden.de/tu-dresden/qualitaetsmanagement/ressourcen/dateien/wisprax/Leitlinien-fuer-den-Umgang-mit-Forschungsdaten-an-der-TU-Dresden.pdf),
+relevant research data needs to be archived at least for 10 years. The
+[OpARA service](https://opara.zih.tu-dresden.de/xmlui/) (Open Access Repository and Archive) is the
+joint research data repository service for Saxon universities to address this requirement.
+Data can be uploaded and, to comply to the demands of long-term understanding of data, additional
+metadata and description must be added. Large datasets may be optionally imported beforehand. In
+this case, please contact the
+[TU Dresden Service Desk](mailto:servicedesk@tu-dresden.de?subject=OpARA:%20Data%20Import).
+Optionally, data can also be **published** by OpARA. To ensure data quality, data submissions
+undergo a review process.
+Beyond OpARA, it is also recommended to use discipline-specific data repositories for data
+publications. Usually those are well known in a scientific community, and offer better fitting
+options of data description and classification. Please visit [re3data.org](https://re3data.org)
+to look up a suitable one for your discipline.

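As a sketch of the intermediate-archive workflow added above, data could be moved to the project-related archive directory with the documented Datamover tools; the source workspace path is a placeholder following the generic-names convention:

```console
marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /archive/p_number_crunch/.
```
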
 # Datamover - Data Transfer Inside ZIH Systems
-With the **datamover**, we provide a special data transfer machine for transferring data with best
-transfer speed between the filesystems of ZIH systems. The datamover machine is not accessible
+With the **Datamover**, we provide a special data transfer machine for transferring data with best
+transfer speed between the filesystems of ZIH systems. The Datamover machine is not accessible
 through SSH as it is dedicated to data transfers. To move or copy files from one filesystem to
 another filesystem, you have to use the following commands:

@@ -45,7 +45,7 @@ To identify the mount points of the different filesystems on the data transfer m
 !!! example "Copying data from `/beegfs/global0` to `/projects` filesystem."
     ``` console
-    marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /projects/p_marie/.
+    marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /projects/p_number_crunch/.
     ```
 !!! example "Moving data from `/beegfs/global0` to `/warm_archive` filesystem."

@@ -57,7 +57,7 @@ To identify the mount points of the different filesystems on the data transfer m
 !!! example "Archive data from `/beegfs/global0` to `/archiv` filesystem."
     ``` console
-    marie@login$ dttar -czf /archiv/p_marie/results.tgz /beegfs/global0/ws/marie-workdata/results
+    marie@login$ dttar -czf /archiv/p_number_crunch/results.tgz /beegfs/global0/ws/marie-workdata/results
     ```
 !!! warning

@@ -66,7 +66,7 @@ To identify the mount points of the different filesystems on the data transfer m
 !!! note
     The [warm archive](../data_lifecycle/warm_archive.md) and the `projects` filesystem are not
     writable from within batch jobs.
-    However, you can store the data in the `warm_archive` using the datamover.
+    However, you can store the data in the `warm_archive` using the Datamover.
 ## Transferring Files Between ZIH Systems and Group Drive

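For instance, a sketch of storing data in the warm archive via the Datamover from a login node; the target workspace path is an assumed placeholder:

```console
marie@login$ dtmv /beegfs/global0/ws/marie-workdata/results /warm_archive/ws/marie-archive/.
```
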
@@ -14,9 +14,9 @@ copy data to/from ZIH systems. Please follow the link to the documentation on
 ## Data Transfer Inside ZIH Systems: Datamover
-The recommended way for data transfer inside ZIH Systems is the **datamover**. It is a special
+The recommended way for data transfer inside ZIH Systems is the **Datamover**. It is a special
 data transfer machine that provides the best transfer speed. To load, move, copy etc. files from one
 filesystem to another filesystem, you have to use commands prefixed with `dt`: `dtcp`, `dtwget`,
 `dtmv`, `dtrm`, `dtrsync`, `dttar`, `dtls`. These commands submit a job to the data transfer
 machines that execute the selected command. Please refer to the detailed documentation regarding the
-[datamover](datamover.md).
+[Datamover](datamover.md).

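The `dt` wrappers take the same arguments as their plain counterparts; a brief sketch with placeholder paths:

```console
marie@login$ dtls /projects/p_number_crunch
marie@login$ dtcp -r /beegfs/global0/ws/marie-workdata/results /projects/p_number_crunch/.
```
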
@@ -63,9 +63,13 @@ To use it, first add a `dmtcp_launch` before your application call in your batch
 of MPI applications, you have to add the parameters `--ib --rm` and put it between `srun` and your
 application call, e.g.:
-```bash
-srun dmtcp_launch --ib --rm ./my-mpi-application
-```
+???+ my_script.sbatch
+
+    ```bash
+    [...]
+    srun dmtcp_launch --ib --rm ./my-mpi-application
+    ```
 !!! note

@@ -79,7 +83,7 @@ Then just substitute your usual `sbatch` call with `dmtcp_sbatch` and be sure to
 and `-i` parameters (don't forget you need to have loaded the `dmtcp` module).
 ```console
-marie@login$ dmtcp_sbatch --time 2-00:00:00 --interval 28000,800 my_batchfile.sh
+marie@login$ dmtcp_sbatch --time 2-00:00:00 --interval 28000,800 my_script.sbatch
 ```
 With `-t, --time` you set the total runtime of your calculations. This will be replaced in the batch

@@ -109,7 +109,7 @@ for `sbatch/srun` in this case is `--gres=gpu:[NUM_PER_NODE]` (where `NUM_PER_NO
 #SBATCH --cpus-per-task=6 # use 6 threads per task
 #SBATCH --gres=gpu:1 # use 1 GPU per node (i.e. use one GPU per task)
 #SBATCH --time=01:00:00 # run for 1 hour
-#SBATCH --account=p_marie # account CPU time to project p_marie
+#SBATCH --account=p_number_crunch # account CPU time to project p_number_crunch
 srun ./your/cuda/application # start you application (probably requires MPI to use both nodes)
 ```

@@ -17,16 +17,16 @@ For instance, when using CMake and keeping your source in `/projects`, you could
 ```console
 # save path to your source directory:
-marie@login$ export SRCDIR=/projects/p_marie/mysource
+marie@login$ export SRCDIR=/projects/p_number_crunch/mysource
 # create a build directory in /scratch:
-marie@login$ mkdir /scratch/p_marie/mysoftware_build
+marie@login$ mkdir /scratch/p_number_crunch/mysoftware_build
 # change to build directory within /scratch:
-marie@login$ cd /scratch/p_marie/mysoftware_build
+marie@login$ cd /scratch/p_number_crunch/mysoftware_build
 # create Makefiles:
-marie@login$ cmake -DCMAKE_INSTALL_PREFIX=/projects/p_marie/mysoftware $SRCDIR
+marie@login$ cmake -DCMAKE_INSTALL_PREFIX=/projects/p_number_crunch/mysoftware $SRCDIR
 # build in a job:
 marie@login$ srun --mem-per-cpu=1500 --cpus-per-task=12 --pty make -j 12

@@ -29,7 +29,7 @@ can be installed individually by each user. If possible, the use of
 recommended (e.g. for Python). Likewise, software can be used within [containers](containers.md).
 For the transfer of larger amounts of data into and within the system, the
-[export nodes and datamover](../data_transfer/overview.md) should be used.
+[export nodes and Datamover](../data_transfer/overview.md) should be used.
 Data is stored in the [workspaces](../data_lifecycle/workspaces.md).
 Software modules or virtual environments can also be installed in workspaces to enable
 collaborative work even within larger groups.

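As a sketch of such a shared setup, a Python virtual environment could be placed in an existing workspace; the workspace path and module name here are assumptions:

```console
marie@login$ module load Python
marie@login$ python -m venv /beegfs/global0/ws/marie-workdata/shared_env
marie@login$ source /beegfs/global0/ws/marie-workdata/shared_env/bin/activate
```
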
@@ -219,7 +219,7 @@ from dask_jobqueue import SLURMCluster
 cluster = SLURMCluster(queue='alpha',
                        cores=8,
                        processes=2,
-                       project='p_marie',
+                       project='p_number_crunch',
                        memory="8GB",
                        walltime="00:30:00")

@@ -242,7 +242,7 @@ from dask import delayed
 cluster = SLURMCluster(queue='alpha',
                        cores=8,
                        processes=2,
-                       project='p_marie',
+                       project='p_number_crunch',
                        memory="80GB",
                        walltime="00:30:00",
                        extra=['--resources gpu=1'])

@@ -294,7 +294,7 @@ for the Monte-Carlo estimation of Pi.
 #create a Slurm cluster, please specify your project
-cluster = SLURMCluster(queue='alpha', cores=2, project='p_marie', memory="8GB", walltime="00:30:00", extra=['--resources gpu=1'], scheduler_options={"dashboard_address": f":{portdash}"})
+cluster = SLURMCluster(queue='alpha', cores=2, project='p_number_crunch', memory="8GB", walltime="00:30:00", extra=['--resources gpu=1'], scheduler_options={"dashboard_address": f":{portdash}"})
 #submit the job to the scheduler with the number of nodes (here 2) requested:

@@ -59,7 +59,7 @@ Slurm or [writing job files](../jobs_and_resources/slurm.md#job-files).
 #SBATCH --job-name=yyyy # give a name, what ever you want
 #SBATCH --mail-type=END,FAIL # send email when the job finished or failed
 #SBATCH --mail-user=<name>@mailbox.tu-dresden.de # set your email
-#SBATCH --account=p_marie # charge compute time to project p_marie
+#SBATCH --account=p_number_crunch # charge compute time to project p_number_crunch
 # Abaqus has its own MPI

@@ -155,7 +155,7 @@ The following HPC related software is installed on all nodes:
 There are many different datasets designed for research purposes. If you would like to download some
 of them, keep in mind that many machine learning libraries have direct access to public datasets
 without downloading it, e.g. [TensorFlow Datasets](https://www.tensorflow.org/datasets). If you
-still need to download some datasets use [datamover](../data_transfer/datamover.md) machine.
+still need to download some datasets use [Datamover](../data_transfer/datamover.md) machine.
 ### The ImageNet Dataset

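A sketch of fetching a dataset through the Datamover, as mentioned in the paragraph above; the URL and workspace path are placeholders, and it is assumed the `dtwget` wrapper forwards the usual wget options:

```console
marie@login$ dtwget --directory-prefix=/beegfs/global0/ws/marie-workdata https://example.org/some-dataset.tar.gz
```
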
@@ -105,11 +105,11 @@ multiple events, please check which events can be measured concurrently using th
 The PAPI tools must be run on the compute node, using an interactive shell or job.
 !!! example "Example: Determine the events on the partition `romeo` from a login node"
-    Let us assume, that you are in project `p_marie`. Then, use the following commands:
+    Let us assume, that you are in project `p_number_crunch`. Then, use the following commands:
     ```console
     marie@login$ module load PAPI
-    marie@login$ salloc --account=p_marie --partition=romeo
+    marie@login$ salloc --account=p_number_crunch --partition=romeo
     [...]
     marie@compute$ srun papi_avail
     marie@compute$ srun papi_native_avail

@@ -121,12 +121,12 @@ Instrument your application with either the high-level or low-level API. Load th
 compile your application against the PAPI library.
 !!! example
-    Assuming that you are in project `p_marie`, use the following commands:
+    Assuming that you are in project `p_number_crunch`, use the following commands:
     ```console
     marie@login$ module load PAPI
     marie@login$ gcc app.c -o app -lpapi
-    marie@login$ salloc --account=p_marie --partition=romeo
+    marie@login$ salloc --account=p_number_crunch --partition=romeo
     marie@compute$ srun ./app
     [...]
     # Exit with Ctrl+D

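To give an idea of the high-level API mentioned in the hunk above, a minimal sketch of an instrumented `app.c` (the region name is arbitrary; with the high-level API, events are selected at run time via the `PAPI_EVENTS` environment variable):

```c
#include <stdio.h>
#include <papi.h>

int main(void) {
    double sum = 0.0;

    /* Start counting hardware events for the named region. */
    if (PAPI_hl_region_begin("computation") != PAPI_OK)
        fprintf(stderr, "PAPI_hl_region_begin failed\n");

    /* Workload to be measured. */
    for (int i = 0; i < 1000000; i++)
        sum += i * 0.5;

    /* Stop counting; results are written to the papi_hl_output directory. */
    if (PAPI_hl_region_end("computation") != PAPI_OK)
        fprintf(stderr, "PAPI_hl_region_end failed\n");

    printf("sum = %f\n", sum);
    return 0;
}
```

Compiled and run exactly as in the example above (`gcc app.c -o app -lpapi`, then `srun ./app` in an allocation).
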
@@ -27,12 +27,12 @@ marie@compute$ cd privatemodules/<sw_name>
 ```
 Project private module files for software that can be used by all members of your group should be
-located in your global projects directory, e.g., `/projects/p_marie/privatemodules`. Thus, create
+located in your global projects directory, e.g., `/projects/p_number_crunch/privatemodules`. Thus, create
 this directory:
 ```console
-marie@compute$ mkdir --verbose --parents /projects/p_marie/privatemodules/<sw_name>
-marie@compute$ cd /projects/p_marie/privatemodules/<sw_name>
+marie@compute$ mkdir --verbose --parents /projects/p_number_crunch/privatemodules/<sw_name>
+marie@compute$ cd /projects/p_number_crunch/privatemodules/<sw_name>
 ```
 !!! note

@@ -110,7 +110,7 @@ marie@login$ module use $HOME/privatemodules
 for your private module files and
 ```console
-marie@login$ module use /projects/p_marie/privatemodules
+marie@login$ module use /projects/p_number_crunch/privatemodules
 ```
 for group private module files, respectively.

@@ -7,7 +7,7 @@ basedir=`dirname "$scriptpath"`
 basedir=`dirname "$basedir"`
 wordlistfile=$(realpath $basedir/wordlist.aspell)
 branch="origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME:-preview}"
-files_to_skip=(doc.zih.tu-dresden.de/docs/accessibility.md doc.zih.tu-dresden.de/docs/data_protection_declaration.md doc.zih.tu-dresden.de/docs/legal_notice.md)
+files_to_skip=(doc.zih.tu-dresden.de/docs/accessibility.md doc.zih.tu-dresden.de/docs/data_protection_declaration.md doc.zih.tu-dresden.de/docs/legal_notice.md doc.zih.tu-dresden.de/docs/access/key_fingerprints.md)
 aspellmode=
 if aspell dump modes | grep -q markdown; then
   aspellmode="--mode=markdown"

@@ -46,9 +46,9 @@ i ^[ |]*|$
 Avoid spaces at end of lines.
 doc.zih.tu-dresden.de/docs/accessibility.md
 i [[:space:]]$
-When referencing projects, please use p_marie for consistency.
-i \<p_ p_marie
+When referencing projects, please use p_number_crunch for consistency.
+i \<p_ p_number_crunch
 Avoid `home`. Use home without backticks instead.
 i `home`

@@ -51,7 +51,7 @@ Dask
 dataframes
 DataFrames
 Dataheap
-datamover
+Datamover
 DataParallel
 dataset
 Dataset