Commit d98d3720 authored by Natalie Breidenbach's avatar Natalie Breidenbach

Update data_analytics_with_python.md

parent 29d58273
Merge requests: !938 "Automated merge from preview to main", !936 "Update to Five-Cluster-Operation"
@@ -18,9 +18,9 @@ a research group and/or teaching class. For this purpose,
 The interactive Python interpreter can also be used on ZIH systems via an interactive job:
 ```console
-marie@login$ srun --partition=haswell --gres=gpu:1 --ntasks=1 --cpus-per-task=7 --pty --mem-per-cpu=8000 bash
-marie@haswell$ module load Python
-marie@haswell$ python
+marie@login$ srun --gres=gpu:1 --ntasks=1 --cpus-per-task=7 --pty --mem-per-cpu=8000 bash
+marie@compute$ module load Python
+marie@compute$ python
 Python 3.8.6 (default, Feb 17 2021, 11:48:51)
 [GCC 10.2.0] on linux
 Type "help", "copyright", "credits" or "license" for more information.
@@ -50,7 +50,7 @@ threads that can be used in parallel depends on the number of cores (parameter `
 within the Slurm request, e.g.
 ```console
-marie@login$ srun --partition=haswell --cpus-per-task=4 --mem=2G --hint=nomultithread --pty --time=8:00:00 bash
+marie@login$ srun --cpus-per-task=4 --mem=2G --hint=nomultithread --pty --time=8:00:00 bash
 ```
 The above request allows to use 4 parallel threads.
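The thread count granted by `--cpus-per-task` can be picked up inside the job rather than hard-coded. A minimal stdlib-only sketch: `SLURM_CPUS_PER_TASK` is the standard environment variable Slurm exports inside a job, while the fallback value of 1 and the `square` workload are illustrative assumptions:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Slurm exports SLURM_CPUS_PER_TASK inside a job; fall back to 1 elsewhere
n_threads = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

def square(x):
    # illustrative stand-in for real per-item work
    return x * x

# use exactly the number of threads the Slurm request granted
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    results = list(pool.map(square, range(8)))

print(n_threads, results)
```

Reading the value from the environment keeps the script correct if the `--cpus-per-task` value in the `srun` call changes later.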
@@ -239,7 +239,7 @@ from distributed import Client
 from dask_jobqueue import SLURMCluster
 from dask import delayed
-cluster = SLURMCluster(queue='alpha',
+cluster = SLURMCluster(
     cores=8,
     processes=2,
     project='p_number_crunch',
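After this change the cluster definition no longer pins a queue, letting `dask_jobqueue` target the default cluster. A hedged configuration sketch of how the full constructor might look; the argument values are illustrative, drawn from the surrounding diff, and must be adapted to your own project and allocation:

```python
from distributed import Client
from dask_jobqueue import SLURMCluster

# configuration sketch only -- values below are illustrative, not prescriptive
cluster = SLURMCluster(
    cores=8,                   # CPU cores per Slurm job
    processes=2,               # Dask worker processes per job
    project='p_number_crunch', # replace with your own project
    memory="8GB",
    walltime="00:30:00",
)
cluster.scale(jobs=2)          # submit two Slurm jobs backing the workers
client = Client(cluster)
```

This fragment requires a running Slurm installation, so it is a sketch rather than a self-contained runnable example.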
@@ -294,7 +294,7 @@ for the Monte-Carlo estimation of Pi.
 #create a Slurm cluster, please specify your project
-cluster = SLURMCluster(queue='alpha', cores=2, project='p_number_crunch', memory="8GB", walltime="00:30:00", extra=['--resources gpu=1'], scheduler_options={"dashboard_address": f":{portdash}"})
+cluster = SLURMCluster(cores=2, project='p_number_crunch', memory="8GB", walltime="00:30:00", extra=['--resources gpu=1'], scheduler_options={"dashboard_address": f":{portdash}"})
 #submit the job to the scheduler with the number of nodes (here 2) requested:
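The Monte-Carlo estimation of Pi mentioned above samples random points in the unit square and counts how many fall inside the quarter circle. A serial stdlib-only sketch of that idea (the function name and sample count are ours, not from the documentation; the documented version distributes this work via Dask):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate Pi by sampling points uniformly in the unit square."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        # point lies inside the quarter circle of radius 1
        if x * x + y * y <= 1.0:
            inside += 1
    # area ratio quarter-circle / square is pi/4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

With 100,000 samples the estimate lands near 3.14; the accuracy improves with the square root of the sample count, which is why distributing the sampling across workers pays off.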
@@ -439,7 +439,6 @@ For the multi-node case, use a script similar to this:
 ```bash
 #!/bin/bash
 #SBATCH --nodes=2
-#SBATCH --partition=ml
 #SBATCH --tasks-per-node=2
 #SBATCH --cpus-per-task=1