Commit 4ebc062c authored by Michael Müller

Merge branch 'sdflex-fc' into 'preview'

Fix checks

See merge request zih/hpc-compendium/hpc-compendium!146
parents 22b4e3f2 810621cc
@@ -8,35 +8,32 @@
protocols)
- 370 TB of fast NVMe storage available at `/nvme/<projectname>`

## Local temporary NVMe storage

There are 370 TB of NVMe devices installed. For immediate access for all projects, a volume of 87 TB
of fast NVMe storage is available at `/nvme/1/<projectname>`. For testing, we have set a quota of
100 GB per project on this NVMe storage.
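
A minimal usage sketch (the project name `p_example` and the source path are placeholder assumptions, not names from this documentation): check the available space on the NVMe volume and stage input data onto it before a job starts.

```bash
# Show the filesystem usage of the fast NVMe volume
# (replace p_example with your actual project name)
df -h /nvme/1/p_example

# Stage input data onto the NVMe storage before running a job;
# mind the 100 GB per-project test quota
cp -r /scratch/p_example/input /nvme/1/p_example/
```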

With a more detailed proposal on how this unique system (large shared memory + NVMe storage) can
speed up a project's computations, the quota can be increased or dedicated volumes of up to the
full capacity can be set up.

## Hints for usage

- The granularity of an allocation should be a socket (28 cores).
- The machine can be used for OpenMP applications with large memory demands.
- To use OpenMPI, it is necessary to export the following environment
  variables so that OpenMPI uses shared memory instead of InfiniBand
  for message transport: `export OMPI_MCA_pml=ob1; export OMPI_MCA_mtl=^mxm`
  (see the batch-script sketch after this list).
- Use `I_MPI_FABRICS=shm` so that Intel MPI doesn't even consider
  using InfiniBand devices itself, but only shared memory instead.
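
As a sketch of how these hints combine in practice (assuming Slurm as the batch system; the partition name, time limit, and application name are placeholder assumptions, not taken from this documentation), a batch script could allocate in socket granularity and force OpenMPI onto shared memory:

```bash
#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=28        # granularity: one full socket (28 cores) per task
#SBATCH --partition=<partition>   # placeholder: use the partition of this machine
#SBATCH --time=01:00:00

# Make OpenMPI use shared memory instead of InfiniBand for message transport
export OMPI_MCA_pml=ob1
export OMPI_MCA_mtl=^mxm

# For Intel MPI, restrict the fabric to shared memory instead:
# export I_MPI_FABRICS=shm

srun ./my_application   # placeholder application name
```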

## Open for Testing

- At the moment we have set a quota of 100 GB per project on this NVMe
  storage. As soon as the first projects come up with proposals for how
  this unique system (large shared memory + NVMe storage) can speed up
  their computations, we will gladly increase this limit for selected
  projects.
- Test users might have to clean up their `/nvme` storage within 4 weeks
  to make room for large projects.