[Romeo](RomeNodes) partitions to work with R. Please use the ml partition only if you need GPUs!
## R console
This is a quickstart example. The `srun` command submits a real-time
execution job designed for interactive use with output monitoring.
Please check [the Slurm page](Slurm) for details. R is available for
both types of Taurus nodes/architectures: x86 (scs5 software
environment) and Power9 (ml software environment).
Haswell partition:
srun --partition=haswell --ntasks=1 --nodes=1 --cpus-per-task=4 --mem-per-cpu=2583 --time=01:00:00 --pty bash #job submission on the haswell partition: 1 node, 1 task, 4 CPUs per task, 2583 MB per CPU (core), for 1 hour
module load modenv/scs5 #Ensure that you are using the scs5 software environment. Example output: The following have been reloaded with a version change: 1) modenv/ml => modenv/scs5
module avail R/3.6 #Check all available modules with R version 3.6. You could also use "ml av R", but it gives a huge output.
module load R #Load the default R module. Example output: Module R/3.6.0-foss-2019a and 56 dependencies loaded.
which R #Check the path of the currently loaded R
R #Start the R console
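Once the console is running, you can verify the environment from within R itself. The following is a minimal sketch using only standard base R commands (nothing Taurus-specific is assumed):
R.version.string #print the version string of the loaded R module
.libPaths() #show the library paths where packages are searched for and installed
q(save = "no") #leave the console without saving the workspace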
These job parameters are given in full detail to show you the correct
and optimal way to request resources. Please allocate the job with
respect to the [hardware specification](HardwareTaurus)! Note also that
the appropriate value of the `--mem-per-cpu` parameter differs between
partitions.
Please keep in mind that it is currently not recommended to use an
interactive X11 job with the desktop version of RStudio, as described,
for example, [here](Slurm#Interactive_X11_47GUI_Jobs) or in the
introductory HPC-DA slides. This method is unstable.
## Install packages in R
By default, user-installed packages are stored in the `$HOME/R/` folder,
inside a subfolder depending on the architecture (on Taurus: x86 or
PowerPC). Install packages using the shell (an example installation from
within R is sketched after the commands below):
srun -p haswell -N 1 -n 1 -c 4 --mem-per-cpu=2583 --time=01:00:00 --pty bash #job submission on the haswell partition: 1 node, 1 task, 4 CPUs per task, 2583 MB per CPU (core), for 1 hour
module purge
module load modenv/scs5 #Switch to the scs5 software environment. Example output: The following have been reloaded with a version change: 1) modenv/ml => modenv/scs5
module load R #Load the default R module. Example output: Module R/3.6.0-foss-2019a and 56 dependencies loaded.
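After starting R inside this interactive job (with the `R` command as shown above), packages can be installed from the R console. The package name `stringr` and the CRAN mirror URL below are only placeholders for this sketch:
install.packages("stringr", repos = "https://cloud.r-project.org") #installs into your user library under $HOME/R when the module's library is not writable
library(stringr) #verify that the freshly installed package can be loaded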
and [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface)
(Message Passing Interface) as a "backend" for its parallel operations.
Submitting a multi-node MPI R job to SLURM is very similar to
[submitting an MPI job](Slurm#Binding_and_Distribution_of_Tasks), since
both run multi-core jobs on multiple nodes. Below is an example of
running an R script with Rmpi on Taurus (a sketch of the R script itself
follows the batch file):
#!/bin/bash
#SBATCH --partition=haswell #specify the partition
#SBATCH --ntasks=16 #This parameter determines how many processes will be spawned. Please use >= 8.
#SBATCH --cpus-per-task=1
#SBATCH --time=00:10:00
#SBATCH -o test_Rmpi.out
#SBATCH -e test_Rmpi.err
module purge
module load modenv/scs5
module load R
mpirun -n 1 R CMD BATCH Rmpi.R #specify the absolute path to the R script, like: /scratch/ws/max1234-Work/R/Rmpi.R
# when finished writing, submit with sbatch <script_name>
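The batch file above expects an R script (here called `Rmpi.R`). The following is only a minimal sketch of what such a script could look like, using standard Rmpi functions; it is not the attached example, and the worker count derived from `mpi.universe.size()` is an assumption:
library(Rmpi) #load the MPI bindings for R
ns <- mpi.universe.size() - 1 #one slot is occupied by the master process started via "mpirun -n 1"
mpi.spawn.Rslaves(nslaves = ns) #spawn the worker R processes on the remaining slots
print(mpi.remote.exec(paste("I am rank", mpi.comm.rank(), "of", mpi.comm.size()))) #each worker reports its rank
mpi.close.Rslaves() #shut down the workers
mpi.quit() #finalize MPI and exit R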
\<span class="WYSIWYG_TT"> **-ntasks**\</span> SLURM option is the best
and simplest way to run your application with MPI. The number of nodes
required to complete this number of tasks will then be selected. Each
MPI rank is assigned 1 core(CPU).
However, in some specific cases, you can specify the number of nodes and
the number of necessary tasks per node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --tasks-per-node=16
#SBATCH --cpus-per-task=1
module purge
module load modenv/scs5
module load R
time mpirun -quiet -np 1 R CMD BATCH --no-save --no-restore Rmpi_c.R #the time command reports how long your script took to complete
The example above shows the binding of an MPI job: 32 global ranks are
distributed over 2 nodes with 16 cores (CPUs) each, and each MPI rank
has 1 core assigned to it. Use the [example script](%ATTACHURL%/Rmpi_c.R)
from the attachment.
To use Rmpi and MPI, please use one of these partitions: **Haswell**,
**Broadwell** or **Rome**. **Important:** Please allocate the required
number of nodes and cores according to the hardware specification:
1 Haswell node: 2 x [Intel Xeon (12 cores)]; 1 Broadwell node:
2 x [Intel Xeon (14 cores)]; 1 Rome node: 2 x [AMD EPYC (64 cores)].
Please also check the [hardware specification](HardwareTaurus) (number
of nodes etc.). The `sinfo` command gives you a quick overview of the
status of the partitions.
\<span style="font-size: 1em;">Please use \</span>\<span>mpirun\</span>
command \<span style="font-size: 1em;">to run the Rmpi script. It is a
wrapper that enables the communication between processes running on
different machines. \</span>\<span style="font-size: 1em;">We recommend
always use \</span>\<span style="font-size: 1em;">"\</span>\<span>-np