From 44e9a4844f8408d8eb551f1e78dade0913ec8ef3 Mon Sep 17 00:00:00 2001
From: Apurv Kulkarni <apurv.kulkarni@tu-dresden.de>
Date: Mon, 15 Nov 2021 17:23:43 +0100
Subject: [PATCH] Corrected the config details. Config with 60GB doesn't work.

---
 .../docs/software/big_data_frameworks_spark.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 84f5935a1..5636a870a 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -36,11 +36,11 @@ Thus, Spark can be executed using different CPU architectures, e.g., Haswell and
 
 Let us assume that two nodes should be used for the computation. Use a `srun` command similar to the
 following to start an interactive session using the partition haswell. The following code
-snippet shows a job submission to haswell nodes with an allocation of two nodes with 60 GB main
+snippet shows a job submission to haswell nodes with an allocation of two nodes with 50 GB main
 memory exclusively for one hour:
 
 ```console
-marie@login$ srun --partition=haswell --nodes=2 --mem=60g --exclusive --time=01:00:00 --pty bash -l
+marie@login$ srun --partition=haswell --nodes=2 --mem=50g --exclusive --time=01:00:00 --pty bash -l
 ```
 
 Once you have the shell, load Spark using the command
@@ -129,7 +129,7 @@ example below:
 #SBATCH --partition=haswell
 #SBATCH --nodes=2
 #SBATCH --exclusive
-#SBATCH --mem=60G
+#SBATCH --mem=50G
 #SBATCH --job-name="example-spark"
 
 ml Spark/3.0.1-Hadoop-2.7-Java-1.8-Python-3.7.4-GCCcore-8.3.0
-- 
GitLab
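
Note (not part of the patch): for reference, the corrected values in the second hunk assemble into a batch script along the lines of the sketch below. The partition, node count, `--mem=50G`, exclusivity, job name, and module line come from the diff; the one-hour limit mirrors the `srun` example. The shebang and the `--output` path are assumptions not shown in the diff context.

```shell
#!/bin/bash
# Sketch of a job script using the corrected allocation from the patch.
#SBATCH --partition=haswell
#SBATCH --nodes=2
#SBATCH --exclusive
#SBATCH --mem=50G                        # corrected from 60G; 60 GB does not fit the haswell nodes
#SBATCH --time=01:00:00                  # taken from the interactive srun example
#SBATCH --job-name="example-spark"
#SBATCH --output=example-spark-%j.out    # assumed output path, not in the diff

ml Spark/3.0.1-Hadoop-2.7-Java-1.8-Python-3.7.4-GCCcore-8.3.0
```

Submitted with `sbatch`, this requests the same resources as the interactive `srun` line in the first hunk.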