ZIH / hpcsupport / hpc-compendium — Commits

Commit 47cc5c2b
Authored 2 years ago by Noah Löwer

    - applied suggestions

Parent: 453150b1
Part of 2 merge requests: !808 "Automated merge from preview to main" and !767 "Add new file spec.md to software - menu item Performance Engineering Tools"

Changes: 1 changed file — doc.zih.tu-dresden.de/docs/software/spec.md (+15 additions, −12 deletions)
```diff
@@ -7,14 +7,15 @@ system 'Taurus' (partition `haswell`) is the benchmark's reference system denoti
 The tool includes nine real-world scientific applications (see
 [benchmark table](https://www.spec.org/hpc2021/docs/result-fields.html#benchmarks))
-with different workload sizes ranging from tiny, small, medium to large, and different parallelization models
-including MPI only, MPI+OpenACC, MPI+OpenMP and MPI+OpenMP with target offloading. With this
-benchmark suite you can compare the performance of different HPC systems. Furthermore, it can also be used to
-evaluate parallel strategies for applications on a target HPC system.
-When you e.g. want to implement an algorithm, port an application to another platform or integrate acceleration into
-your code, you can determine from which target system and parallelization model your application
-performance could benefit most, or if deployment and acceleration schemes are even possible on a
-given system.
+with different workload sizes ranging from tiny, small, medium to large, and different parallelization
+models including MPI only, MPI+OpenACC, MPI+OpenMP and MPI+OpenMP with target offloading. With this
+benchmark suite you can compare the performance of different HPC systems and furthermore, evaluate
+parallel strategies for applications on a target HPC system. When you e.g. want to implement an
+algorithm, port an application to another platform or integrate acceleration into your code,
+you can determine from which target system and parallelization model your application
+performance could benefit most. Or this way you can check whether an acceleration scheme can be
+deployed and run on a given system, since there could be software issues restricting a capable
+hardware (see this [cuda issue](#cuda-reduction-operation-error)).
 Since TU Dresden is a member of the SPEC consortium, the HPC benchmarks can be requested by anyone
 interested. Please contact
```
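For readers unfamiliar with the suite described in the hunk above, a typical run selects a config file and a workload size. The following is a minimal sketch only: the install path, config file name, and rank count are assumptions, and the exact `runhpc` flags should be verified against the SPEC hpc2021 documentation for your installation.

```bash
# Hypothetical SPEC hpc2021 invocation (path, config name, and rank count
# are assumptions; verify the flags against your installation's docs).
cd /path/to/hpc2021        # assumed installation directory
source shrc                # set up the SPEC environment variables
# 'gnu_mpi.cfg' is a hypothetical config selecting compiler and MPI flavor;
# 'tiny' picks the smallest of the workload sizes mentioned above.
runhpc --config=gnu_mpi.cfg --tune=base --ranks=24 tiny
```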
@@ -203,13 +204,15 @@ medium or large, test or reference).
...
@@ -203,13 +204,15 @@ medium or large, test or reference).
!!! success "Solution"
!!! success "Solution"
-
Use the correct MPI module
1.
Use the correct MPI module
- The MPI module in use must be compiled with the same compiler that was used to build the
- The MPI module in use must be compiled with the same compiler that was used to build the
benchmark binaries. Check
wi
th `module avail` and choose a
suitable
module.
benchmark binaries. Check th
e results of
`module avail` and choose a
corresponding
module.
-
Rebuild the binaries
1.
Rebuild the binaries
- Rebuild the binaries using the same compiler as for the compilation of the MPI module of
- Rebuild the binaries using the same compiler as for the compilation of the MPI module of
choice.
choice.
- Build your own MPI module
1. Request a new module
- Ask the HPC support to install a compatible MPI module.
1. Build your own MPI module (as a last step)
- Download and build a private MPI module using the same compiler as for building the
- Download and build a private MPI module using the same compiler as for building the
benchmark binaries.
benchmark binaries.
...
...
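As a quick illustration of the first solution step above, one might check compiler/MPI compatibility along these lines; the module names and versions below are placeholders for whatever `module avail` actually reports on the target system.

```bash
# Sketch of checking that an MPI module matches the compiler used to build
# the benchmark binaries (module names/versions below are placeholders).
module avail                          # list modules available on the system
module load GCC/11.3.0 OpenMPI/4.1.4  # hypothetical compiler + MPI pairing
mpicc --version                       # the MPI wrapper reports the underlying
                                      # compiler, which must match the binaries
```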