This is the documentation of the HPC systems and services provided at the Center for Information Services and High Performance Computing (ZIH) of TU Dresden. It is updated continuously
to incorporate more information with increasing experience and with every question you ask us.
If the provided HPC systems and services helped to advance your research, please cite us. Why this
is important, together with acknowledgment examples, can be found in the section
[Acknowledgement](https://doc.zih.tu-dresden.de/application/acknowledgement/).
The HPC team invites you to take part in the improvement of these pages by correcting or adding
useful information. Your contributions are highly welcome!
The easiest way for you to contribute is to report issues via
the GitLab
[issue tracking system](https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/issues).
Please check for existing issues before submitting a new one in order to avoid duplicates.
Other ways to contribute are described in our
[guidelines on how to contribute](contrib/howto_contribute.md).
Non-documentation issues and requests should be sent to
[hpc-support@tu-dresden.de](mailto:hpc-support@tu-dresden.de).

## News
* **2023-12-07** [Maintenance finished: CPU cluster `Romeo` is now available](jobs_and_resources/romeo.md)
* **2023-12-01** [Maintenance finished: GPU cluster `Alpha Centauri` is now available](jobs_and_resources/alpha_centauri.md)
* **2023-11-25** [Data transfer available for Barnard via Dataport Nodes](data_transfer/dataport_nodes.md)
* **2023-11-14** [End of life of `scratch` and `ssd` filesystems is January 3, 2024](data_lifecycle/file_systems.md)
* **2023-11-14** [End of life of Taurus system is December 11, 2023](jobs_and_resources/hardware_overview.md)
* **2023-11-14** [Update on maintenance dates and work w.r.t. redesign of HPC systems](jobs_and_resources/hardware_overview.md)
* **2023-11-06** [Substantial update on "How-To: Migration to Barnard"](jobs_and_resources/barnard.md)
* **2023-10-16** [Open MPI 4.1.x - Workaround for MPI-IO Performance Loss](jobs_and_resources/mpi_issues.md#performance-loss-with-mpi-io-module-ompio)
* **2023-06-01** [New hardware and complete re-design](jobs_and_resources/hardware_overview.md#architectural-re-design-2023)
* **2023-01-04** [New hardware: NVIDIA Arm HPC Developer Kit](jobs_and_resources/arm_hpc_devkit.md)
## Training and Courses
We offer a rich and colorful bouquet of courses from classical *HPC introduction* to various
*Performance Analysis* and *Machine Learning* trainings. Please refer to the page
[Training Offers](https://tu-dresden.de/zih/hochleistungsrechnen/nhr-training)
for a detailed overview of the courses and the respective dates at ZIH.
* [HPC introduction slides](misc/HPC-Introduction.pdf) (Sep. 2023)
Furthermore, the Center for Scalable Data Analytics and Artificial Intelligence
[ScaDS.AI](https://scads.ai) Dresden/Leipzig offers various trainings with an HPC focus.
The current schedule and registration are available on the
[ScaDS.AI trainings page](https://scads.ai/transfer-2/teaching-and-training/).