diff --git a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
index 610670c6961b0d73a68e92b6487ad47a6804776f..8c2235f933fb41f5e590e880fdeb92ce6e950dfc 100644
--- a/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
+++ b/doc.zih.tu-dresden.de/docs/archive/beegfs_on_demand.md
@@ -62,7 +62,7 @@ Check the status of the job with `squeue -u \<username>`.
 ## Mount BeeGFS Filesystem
 
 You can mount BeeGFS filesystem on the partition ml (PowerPC architecture) or on the
-partition haswell (x86_64 architecture), more information [here](../jobs_and_resources/partitions_and_limits.md).
+partition haswell (x86_64 architecture), more information about [partitions](../jobs_and_resources/partitions_and_limits.md).
 
 ### Mount BeeGFS Filesystem on the Partition `ml`
 
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
index edb48c108690f54884d8c7837d1b4de2e56a62c8..9c207348f51ab54005a4a423ac2e6e8429ce9e6a 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/overview.md
@@ -62,9 +62,9 @@ Normal compute nodes are perfect for this task.
 **OpenMP jobs:** SMP-parallel applications can only run **within a node**, so it is necessary to
 include the [batch system](slurm.md) options `-N 1` and `-n 1`. Using `--cpus-per-task N` Slurm will
 start one task and you will have `N` CPUs. The maximum number of processors for an SMP-parallel
-program is 896 on partition `julia`, see [here](partitions_and_limits.md).
+program is 896 on partition `julia`, see [partitions](partitions_and_limits.md).
 
-**GPUs** partitions are best suited for **repetitive** and **highly-parallel** computing tasks. If
+Partitions with GPUs are best suited for **repetitive** and **highly-parallel** computing tasks. If
 you have a task with potential [data parallelism](../software/gpu_programming.md) most likely that
 you need the GPUs. Beyond video rendering, GPUs excel in tasks such as machine learning, financial
 simulations and risk modeling. Use the partitions `gpu2` and `ml` only if you need GPUs! Otherwise
diff --git a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
index 2cb43c19cb9ad53ed20884451bfbd2a2370e1fe1..9bc564d05a310005edc1d5564549db8da08ee415 100644
--- a/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
+++ b/doc.zih.tu-dresden.de/docs/software/big_data_frameworks_spark.md
@@ -6,7 +6,7 @@
 
 [Apache Spark](https://spark.apache.org/), [Apache Flink](https://flink.apache.org/) and
 [Apache Hadoop](https://hadoop.apache.org/) are frameworks for processing and integrating
-Big Data. These frameworks are also offered as software [modules](modules.md) on both `ml` and
+Big Data. These frameworks are also offered as software [modules](modules.md) in both `ml` and
 `scs5` software environments. You can check module versions and availability with the command
 
 ```console
diff --git a/doc.zih.tu-dresden.de/docs/software/machine_learning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
index 7245ac128d81624b6dd9be39de8098cc6145010a..f2e5f24aa9f4f8e5f8fb516310b842584d30a614 100644
--- a/doc.zih.tu-dresden.de/docs/software/machine_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -35,7 +35,7 @@ The following have been reloaded with a version change: 1) modenv/scs5 => moden
 There are tools provided by IBM, that work on partition ML and are related to AI tasks. For more
 information see our [Power AI documentation](power_ai.md).
 
-## Partition `alpha`
+## Partition: Alpha
 
 Another partition for machine learning tasks is Alpha. It is mainly dedicated to
 [ScaDS.AI](https://scads.ai/) topics. Each node on Alpha has 2x AMD EPYC CPUs, 8x NVIDIA A100-SXM4