From c362af8d6f141bb8523bb94a70b8b08b3397a283 Mon Sep 17 00:00:00 2001
From: Jan Frenzel <jan.frenzel@tu-dresden.de>
Date: Thu, 29 Jul 2021 15:25:54 +0200
Subject: [PATCH] Replaced command ml by module (load) in
 big_data_frameworks.md.

---
 .../docs/software/big_data_frameworks.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
index 83c6740c6..4447a1423 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks.md
@@ -10,7 +10,7 @@ Big Data. These frameworks are also offered as software [modules](modules.md) on
 for both ml and scs5 partitions. You could check module availability with the command:
 
 ```bash
-ml av Spark
+module av Spark
 ```
 
 **Aim** of this page is to introduce users on how to start working with
@@ -59,7 +59,7 @@ srun -p ml -N 1 -n 1 -c 2 --gres=gpu:1 --time=01:00:00 --pty --mem-per-cpu=10000
 Once you have the shell, load Spark using the following command:
 
 ```bash
-ml Spark
+module load Spark
 ```
 
 Before the application can be started, the Spark cluster needs to be set
@@ -196,8 +196,8 @@ If you want to use Spark and HDFS together (or in general more than one
 framework), a scheme similar to the following can be used:
 
 ```bash
-ml Hadoop
-ml Spark
+module load Hadoop
+module load Spark
 source framework-configure.sh hadoop $HADOOP_ROOT_DIR/etc/hadoop
 source framework-configure.sh spark $SPARK_HOME/conf
 start-dfs.sh
-- 
GitLab
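For context on why this substitution is safe: on Lmod-based systems, `ml` is a convenience shorthand for `module` (`ml av X` behaves like `module avail X`, and a bare `ml X` like `module load X`), so the long form is the more portable spelling the patch standardizes on. The toy wrapper below is an illustrative sketch only, not Lmod's actual implementation; it stubs out `module` so the dispatch can be demonstrated anywhere:

```shell
# Sketch: mimic the `ml` -> `module` dispatch that makes this patch a
# pure spelling change. On a real HPC system, `module` is provided by
# Lmod or Environment Modules; the echo stub below is just for illustration.
module() {
  echo "module $*"
}

ml() {
  case "$1" in
    av|avail) shift; module avail "$@" ;;   # `ml av X`  -> `module avail X`
    "")       module list ;;                # bare `ml`  -> `module list`
    *)        module load "$@" ;;           # `ml X`     -> `module load X`
  esac
}

ml av Spark    # -> module avail Spark
ml Spark       # -> module load Spark
```

With the stub in place, both invocations print the long-form command they stand for, matching the `ml av Spark`/`module av Spark` and `ml Spark`/`module load Spark` pairs changed in the hunks above.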