ZIH / hpcsupport / hpc-compendium · Commits

Commit a1e7811c
Authored 3 years ago by Martin Schroschk

    Merge branch 'issue-88' into 'preview'

    Brief review

    Closes #88

    See merge request !473

Parents: d63acbb2, e393aed3
Part of 2 merge requests: !483 "Automated merge from preview to main" and !473 "Brief review".

Showing 2 changed files:
doc.zih.tu-dresden.de/docs/archive/system_altix.md (+1, −1)
doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md (+14, −18)

15 additions and 19 deletions in total.
doc.zih.tu-dresden.de/docs/archive/system_altix.md (+1, −1) @ a1e7811c:
```diff
@@ -72,7 +72,7 @@ The current SGI Altix is based on the dual core Intel Itanium 2
 processor (code name "Montecito"). One core has the following basic
 properties:
 
 | Component     | Count   |
 |---------------|---------|
 | clock rate    | 1.6 GHz |
 | integer units | 6       |
 ...
```
doc.zih.tu-dresden.de/docs/jobs_and_resources/sd_flex.md (+14, −18) @ a1e7811c:
```diff
-# Large Shared-Memory Node - HPE Superdome Flex
+# HPE Superdome Flex
 
 The HPE Superdome Flex is a large shared memory node. It is especially well suited for data
 intensive application scenarios, for example to process extremely large data sets completely in main
 memory or in very fast NVMe memory.
 
 ## Configuration Details
 
 - Hostname: `taurussmp8`
 - Access to all shared filesystems
 ...
@@ -10,29 +16,19 @@
 ## Local Temporary NVMe Storage
 
 There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
-of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
-100 GB per project on this NVMe storage.
+of fast NVMe storage is available at `/nvme/1/<projectname>`. A quota of 100 GB per project on this
+NVMe storage is set.
 
-With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
-speed up their computations, a project's quota can be increased or dedicated volumes of up to the
-full capacity can be set up.
+With a more detailed proposal to [hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de)
+on how this unique system (large shared memory + NVMe storage) can speed up their computations, a
+project's quota can be increased or dedicated volumes of up to the full capacity can be set up.
 
 ## Hints for Usage
 
-- granularity should be a socket (28 cores)
-- can be used for OpenMP applications with large memory demands
+- Granularity should be a socket (28 cores)
+- Can be used for OpenMP applications with large memory demands
 - To use OpenMPI it is necessary to export the following environment variables, so that OpenMPI
   uses shared memory instead of Infiniband for message transport:
   `export OMPI_MCA_pml=ob1; export OMPI_MCA_mtl=^mxm`
 - Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider using Infiniband devices itself,
   but only shared memory instead.
-
-## Open for Testing
-
-- At the moment we have set a quota of 100 GB per project on this NVMe storage. As soon as the first
-  projects come up with proposals how this unique system (large shared memory + NVMe storage) can
-  speed up their computations, we will gladly increase this limit, for selected projects.
-- Test users might have to clean-up their `/nvme` storage within 4 weeks to make room for large
-  projects.
```
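Taken together, the NVMe section above implies a stage-in/compute/clean-up pattern. The sketch below is illustrative only: the `/nvme/1/<projectname>` path and the 100 GB quota come from the diff, while the project name `p_example` and the source directory are hypothetical placeholders.

```bash
#!/bin/bash
# Stage-in/compute/clean-up sketch for the per-project NVMe volume.
PROJECT=p_example                      # hypothetical project name
NVME_DIR=/nvme/1/${PROJECT}            # path documented above
SRC_DIR=/scratch/${PROJECT}/input      # hypothetical source location

mkdir -p "${NVME_DIR}/input"
cp -r "${SRC_DIR}/." "${NVME_DIR}/input/"   # working set must fit the 100 GB quota

# ... run the data-intensive step against ${NVME_DIR}/input here ...

rm -rf "${NVME_DIR}/input"                  # free the volume afterwards
```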
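The usage hints likewise condense into a few lines of a batch script. A minimal sketch, assuming the node is used through Slurm; the time, memory, and application binary are placeholders, and only the one-socket (28 core) granularity and the three MPI environment variables are taken from the diff above:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=28     # one socket, the recommended granularity
#SBATCH --time=04:00:00        # placeholder
#SBATCH --mem=500G             # placeholder; size to your working set

# Let the OpenMP application use exactly the allocated socket.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# From the hints above: keep MPI message transport in shared memory.
# Each variable only affects its own MPI implementation, so exporting
# all three is harmless when only one implementation is in use.
export OMPI_MCA_pml=ob1        # Open MPI
export OMPI_MCA_mtl=^mxm       # Open MPI
export I_MPI_FABRICS=shm       # Intel MPI

srun ./my_openmp_app           # placeholder binary
```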