Commit fc8c2da6 authored by Alexander Grund

Revert changes to spec.md

Heavily outdated and not user-facing anyway.
parent 2306fbe0
2 merge requests: !1008 Automated merge from preview to main, !998 Update references to old GPU clusters
@@ -32,9 +32,9 @@ Once the target partition is determined, follow SPEC's
 [Installation Guide](https://www.spec.org/hpg/hpc2021/Docs/install-guide-linux.html).
 It is straight-forward and easy to use.
 
-???+ tip "Building for partition `power9`"
+???+ tip "Building for partition `ml`"
 
-    The partition `power9` is a Power9 architecture. Thus, you need to provide the `-e ppc64le` switch
+    The partition `ml` is a Power9 architecture. Thus, you need to provide the `-e ppc64le` switch
     when installing.
 
 ???+ tip "Building with NVHPC for partition `alpha`"
@@ -52,8 +52,8 @@ listed there.
 The behavior in terms of how to build, run and report the benchmark in a particular environment is
 controlled by a configuration file. There are a few examples included in the source code.
 Here you can apply compiler tuning and porting, specify the runtime environment and describe the
-system under test.
-Configurations are available, respectively:
+system under test. SPEChpc 2021 has been deployed on the partitions `haswell`, `ml` and
+`alpha`. Configurations are available, respectively:
 
 - [gnu-taurus.cfg](misc/spec_gnu-taurus.cfg)
 - [nvhpc-ppc.cfg](misc/spec_nvhpc-ppc.cfg)
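As a hedged illustration of how one of these configs is consumed, a `runhpc` invocation following the pattern in SPEC's documentation looks roughly like this; rank count and workload size are example values:

```bash
# Sketch: run the tiny workload with the GNU config listed above.
source shrc                                  # set up SPEC environment variables
runhpc --config=gnu-taurus.cfg --define model=mpi --ranks=24 tiny
```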
@@ -89,7 +89,7 @@ configuration and controls its runtime behavior. For all options, see SPEC's do
 First, execute `source shrc` in your SPEC installation directory. Then use a job script to submit a
 job with the benchmark or parts of it.
 
-In the following there are job scripts shown for partitions `haswell`, `power9` and `alpha`,
+In the following there are job scripts shown for partitions `haswell`, `ml` and `alpha`,
 respectively. You can use them as a template in order to reproduce results or to transfer the
 execution to a different partition.
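A hedged submission sketch, where `spec_job.sh` stands in for one of the job scripts shown below:

```bash
source shrc         # run once in the SPEC installation directory
sbatch spec_job.sh  # placeholder name for one of the scripts below
```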
......@@ -128,7 +128,7 @@ execution to a different partition.
```bash linenums="1"
#!/bin/bash
#SBATCH --account=<p_number_crunch>
#SBATCH --partition=power9
#SBATCH --partition=ml
#SBATCH --exclusive
#SBATCH --nodes=1
#SBATCH --ntasks=6
@@ -141,7 +141,7 @@ execution to a different partition.
 #SBATCH --hint=nomultithread
 
 module --force purge
-module load NVHPC OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1
+module load modenv/ml NVHPC OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.1
 
 ws=</scratch/ws/spec/installation>
 cd ${ws}
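The launch step that follows `cd ${ws}` lies outside this hunk. A minimal sketch of what that elided step typically looks like, with example flag values rather than the file's verbatim content:

```bash
# Hedged sketch of the elided launch step; flag values are examples only.
runhpc --config=nvhpc-ppc.cfg --define model=acc --ranks=${SLURM_NTASKS} tiny
```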
@@ -178,7 +178,7 @@ execution to a different partition.
 #SBATCH --hint=nomultithread
 
 module --force purge
-module load NVHPC OpenMPI
+module load modenv/hiera NVHPC OpenMPI
 
 ws=</scratch/ws/spec/installation>
 cd ${ws}
@@ -314,12 +314,12 @@ execution to a different partition.
     For OpenACC, NVHPC was in the process of adding OpenMP array reduction support which is needed
     for the `pot3d` benchmark. An Nvidia driver version of 450.80.00 or higher is required. Since
-    the driver version on partition `power9` is 440.64.00, it is not supported and not possible to run
+    the driver version on partition `ml` is 440.64.00, it is not supported and not possible to run
     the `pot3d` benchmark in OpenACC mode here.
 
 !!! note "Workaround"
 
-    As for the partition `power9`, you can only wait until the OS update to CentOS 8 is carried out,
+    As for the partition `ml`, you can only wait until the OS update to CentOS 8 is carried out,
     as no driver update will be done beforehand. As a workaround, you can do one of the following:
 
     - Exclude the `pot3d` benchmark.
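To check the driver constraint described above on a given node, a standard `nvidia-smi` query suffices (added here as a convenience, not part of the original file):

```bash
# Print the installed Nvidia driver version; pot3d's OpenACC mode needs >= 450.80.00.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```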
@@ -329,7 +329,7 @@ execution to a different partition.
 !!! warning "Wrong resource distribution"
 
-    When working with multiple nodes on partition `power9` or `alpha`, the Slurm parameter
+    When working with multiple nodes on partition `ml` or `alpha`, the Slurm parameter
     `$SLURM_NTASKS_PER_NODE` does not work as intended when used in conjunction with `mpirun`.
 
 !!! note "Explanation"