From fc8c2da6f84c634168b2f20c150263ffdcc2c42b Mon Sep 17 00:00:00 2001
From: Alexander Grund <alexander.grund@tu-dresden.de>
Date: Wed, 6 Mar 2024 09:59:30 +0100
Subject: [PATCH] Revert changes to spec.md

Heavily outdated and not user-facing anyway
---
 doc.zih.tu-dresden.de/docs/software/spec.md | 22 ++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/spec.md b/doc.zih.tu-dresden.de/docs/software/spec.md
index 34af0bac2..f567f8de6 100644
--- a/doc.zih.tu-dresden.de/docs/software/spec.md
+++ b/doc.zih.tu-dresden.de/docs/software/spec.md
@@ -32,9 +32,9 @@ Once the target partition is determined, follow SPEC's
 [Installation Guide](https://www.spec.org/hpg/hpc2021/Docs/install-guide-linux.html).
 It is straight-forward and easy to use.
 
-???+ tip "Building for partition `power9`"
+???+ tip "Building for partition `ml`"
 
-    The partition `power9` is a Power9 architecture. Thus, you need to provide the `-e ppc64le` switch
+    The partition `ml` is a Power9 architecture. Thus, you need to provide the `-e ppc64le` switch
     when installing.
 
 ???+ tip "Building with NVHPC for partition `alpha`"
@@ -52,8 +52,8 @@ listed there.
 The behavior in terms of how to build, run and report the benchmark in a particular environment is
 controlled by a configuration file. There are a few examples included in the source code.
 Here you can apply compiler tuning and porting, specify the runtime environment and describe the
-system under test.
-Configurations are available, respectively:
+system under test. SPEChpc 2021 has been deployed on the partitions `haswell`, `ml` and
+`alpha`. Configurations are available, respectively:
 
 - [gnu-taurus.cfg](misc/spec_gnu-taurus.cfg)
 - [nvhpc-ppc.cfg](misc/spec_nvhpc-ppc.cfg)
@@ -89,7 +89,7 @@ configuration and controls it's runtime behavior. For all options, see SPEC's do
 First, execute `source shrc` in your SPEC installation directory. Then use a job script to submit a
 job with the benchmark or parts of it.
 
-In the following there are job scripts shown for partitions `haswell`, `power9` and `alpha`,
+In the following there are job scripts shown for partitions `haswell`, `ml` and `alpha`,
 respectively. You can use them as a template in order to reproduce results or to transfer the
 execution to a different partition.
 
@@ -128,7 +128,7 @@ execution to a different partition.
     ```bash linenums="1"
     #!/bin/bash
     #SBATCH --account=<p_number_crunch>
-    #SBATCH --partition=power9
+    #SBATCH --partition=ml
     #SBATCH --exclusive
     #SBATCH --nodes=1
     #SBATCH --ntasks=6
@@ -141,7 +141,7 @@ execution to a different partition.
     #SBATCH --hint=nomultithread
 
     module --force purge
-    module load NVHPC OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1
+    module load modenv/ml NVHPC OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1
 
     ws=</scratch/ws/spec/installation>
     cd ${ws}
@@ -178,7 +178,7 @@ execution to a different partition.
     #SBATCH --hint=nomultithread
 
     module --force purge
-    module load NVHPC OpenMPI
+    module load modenv/hiera NVHPC OpenMPI
 
     ws=</scratch/ws/spec/installation>
     cd ${ws}
@@ -314,12 +314,12 @@ execution to a different partition.
 
     For OpenACC, NVHPC was in the process of adding OpenMP array reduction support which is needed
     for the `pot3d` benchmark. An Nvidia driver version of 450.80.00 or higher is required. Since
-    the driver version on partiton `power9` is 440.64.00, it is not supported and not possible to run
+    the driver version on partition `ml` is 440.64.00, it is not supported and not possible to run
     the `pot3d` benchmark in OpenACC mode here.
 
 !!! note "Workaround"
 
-    As for the partition `power9`, you can only wait until the OS update to CentOS 8 is carried out,
+    As for the partition `ml`, you can only wait until the OS update to CentOS 8 is carried out,
     as no driver update will be done beforehand. As a workaround, you can do one of the following:
 
     - Exclude the `pot3d` benchmark.
@@ -329,7 +329,7 @@ execution to a different partition.
 
 !!! warning "Wrong resource distribution"
 
-    When working with multiple nodes on partition `power9` or `alpha`, the Slurm parameter
+    When working with multiple nodes on partition `ml` or `alpha`, the Slurm parameter
     `$SLURM_NTASKS_PER_NODE` does not work as intended when used in conjunction with `mpirun`.
 
 !!! note "Explanation"
-- 
GitLab