Commit a1e7811c authored by Martin Schroschk

Merge branch 'issue-88' into 'preview'

Brief review

Closes #88

See merge request !473
Parents: d63acbb2, e393aed3
The current SGI Altix is based on the dual core Intel Itanium 2
processor (code name "Montecito"). One core has the following basic
properties:
| Component     | Count   |
|---------------|---------|
| clock rate    | 1.6 GHz |
| integer units | 6       |
...
# HPE Superdome Flex
The HPE Superdome Flex is a large shared-memory node. It is especially well suited for
data-intensive application scenarios, for example to process extremely large data sets completely
in main memory or in very fast NVMe memory.
## Configuration Details
- Hostname: `taurussmp8`
- Access to all shared filesystems
...
## Local Temporary NVMe Storage
There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
of fast NVMe storage is available at `/nvme/1/<projectname>`. A quota of 100 GB per project is set
on this NVMe storage.
With a more detailed proposal sent to [hpcsupport@zih.tu-dresden.de](mailto:hpcsupport@zih.tu-dresden.de)
on how this unique system (large shared memory + NVMe storage) can speed up a project's
computations, the quota can be increased or dedicated volumes of up to the full capacity can be set up.
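As a brief sketch of how the volume might be used (the project name `p_example` and the source
path are placeholders, not names from this documentation):

```bash
# Stage input data into the fast NVMe volume
# (p_example is a placeholder for your project name).
cp -r ~/large_input_data /nvme/1/p_example/

# Check the footprint against the 100 GB project quota.
du -sh /nvme/1/p_example/
```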
## Hints for Usage
- Granularity should be a socket (28 cores); a job script sketch follows this list
- Can be used for OpenMP applications with large memory demands
- To use OpenMPI, it is necessary to export the following environment
  variables so that OpenMPI uses shared memory instead of InfiniBand
  for message transport (see the second sketch after this list):
  `export OMPI_MCA_pml=ob1; export OMPI_MCA_mtl=^mxm`
- Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider
  using InfiniBand devices itself, but only shared memory instead
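For the socket granularity and OpenMP hints above, a minimal Slurm batch sketch could look as
follows; the partition name, runtime, and binary are placeholders, and the script assumes one
socket has 28 cores as stated above:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=28       # one full socket, matching the recommended granularity
#SBATCH --time=01:00:00          # placeholder runtime
#SBATCH --partition=<partition>  # placeholder; use the partition of taurussmp8

# Run the OpenMP application with one thread per core of the socket.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_openmp_app             # placeholder binary
```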
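The MPI-related exports from the list might be collected in a job environment like this (a
sketch; the launch line and binary name are illustrative only):

```bash
# OpenMPI: use shared memory instead of InfiniBand for message transport.
export OMPI_MCA_pml=ob1
export OMPI_MCA_mtl=^mxm

# Intel MPI: restrict fabric selection to shared memory.
export I_MPI_FABRICS=shm

mpirun -np 28 ./my_mpi_app       # illustrative launch on one socket
```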
## Open for Testing

- The quota of 100 GB per project (see above) will gladly be increased for selected projects as
  soon as the first projects come up with proposals on how this unique system (large shared
  memory + NVMe storage) can speed up their computations.
- Test users might have to clean up their `/nvme` storage within 4 weeks to make room for large
  projects.