Commit e413ef32 authored by Jan Frenzel, committed by Taras Lazariv

Apply 5 suggestion(s) to 4 file(s)

parent 9b6e9103
Related merge requests: !392 (Merge preview into contrib guide for browser users), !356 (Merge preview in main), !355 (Merge preview in main), !342 (Fix checks), !333 (Draft: update NGC containers)
@@ -62,7 +62,7 @@ Check the status of the job with `squeue -u \<username>`.
 ## Mount BeeGFS Filesystem
 You can mount BeeGFS filesystem on the partition ml (PowerPC architecture) or on the
-partition haswell (x86_64 architecture), more information [here](../jobs_and_resources/partitions_and_limits.md).
+partition haswell (x86_64 architecture), more information about [partitions](../jobs_and_resources/partitions_and_limits.md).
 ### Mount BeeGFS Filesystem on the Partition `ml`
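The surrounding documentation checks the mount job with `squeue -u \<username>`. As a small illustration, with `marie` used as a placeholder username and `login` as a placeholder host prompt:

```console
marie@login$ squeue -u marie
```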
@@ -62,9 +62,9 @@ Normal compute nodes are perfect for this task.
 **OpenMP jobs:** SMP-parallel applications can only run **within a node**, so it is necessary to
 include the [batch system](slurm.md) options `-N 1` and `-n 1`. Using `--cpus-per-task N` Slurm will
 start one task and you will have `N` CPUs. The maximum number of processors for an SMP-parallel
-program is 896 on partition `julia`, see [here](partitions_and_limits.md).
+program is 896 on partition `julia`, see [partitions](partitions_and_limits.md).
-**GPUs** partitions are best suited for **repetitive** and **highly-parallel** computing tasks. If
+Partitions with GPUs are best suited for **repetitive** and **highly-parallel** computing tasks. If
 you have a task with potential [data parallelism](../software/gpu_programming.md) most likely that
 you need the GPUs. Beyond video rendering, GPUs excel in tasks such as machine learning, financial
 simulations and risk modeling. Use the partitions `gpu2` and `ml` only if you need GPUs! Otherwise
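The `-N 1`, `-n 1`, and `--cpus-per-task N` options described in this hunk fit into a job script roughly as follows. This is only a sketch: the CPU count, time limit, and program name are illustrative placeholders, not values from the documentation.

```bash
#!/bin/bash
#SBATCH -N 1                    # SMP-parallel code must stay within a single node
#SBATCH -n 1                    # exactly one task ...
#SBATCH --cpus-per-task=8       # ... with N=8 CPUs for the OpenMP threads (placeholder value)
#SBATCH --time=01:00:00         # placeholder time limit

# Let the program spawn as many OpenMP threads as CPUs were allocated
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun ./my_openmp_program        # hypothetical binary name
```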
@@ -6,7 +6,7 @@
 [Apache Spark](https://spark.apache.org/), [Apache Flink](https://flink.apache.org/)
 and [Apache Hadoop](https://hadoop.apache.org/) are frameworks for processing and integrating
-Big Data. These frameworks are also offered as software [modules](modules.md) on both `ml` and
+Big Data. These frameworks are also offered as software [modules](modules.md) in both `ml` and
 `scs5` software environments. You can check module versions and availability with the command
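The exact command sits in a part of the diff that is collapsed here. As an assumption-labeled illustration, a standard Modules/Lmod availability query for one of the named frameworks could look like this, with `marie@compute` as a placeholder prompt:

```console
marie@compute$ module avail Spark
```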
@@ -35,7 +35,7 @@ The following have been reloaded with a version change: 1) modenv/scs5 => moden
 There are tools provided by IBM, that work on partition ML and are related to AI tasks.
 For more information see our [Power AI documentation](power_ai.md).
-## Partition `alpha`
+## Partition: Alpha
 Another partition for machine learning tasks is Alpha. It is mainly dedicated to
 [ScaDS.AI](https://scads.ai/) topics. Each node on Alpha has 2x AMD EPYC CPUs, 8x NVIDIA A100-SXM4
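As a hedged sketch of how such a GPU node might be requested interactively with standard Slurm options: the partition name `alpha` comes from the text above, while the GPU count, CPU count, and time limit are illustrative, and the exact `--gres` spelling on the target system is an assumption.

```console
marie@login$ srun -p alpha -N 1 -n 1 --cpus-per-task=6 --gres=gpu:1 --time=01:00:00 --pty bash
```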