Commit 84690f3b authored by Ulf Markwardt

Update doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md

parent 0d9f1bf9
2 merge requests: !850 "Automated merge from preview to main", !845 "Barnard"
```diff
@@ -29,20 +29,11 @@ All clusters have access to these shared parallel file systems:
 
 ## Barnard - Intel Sapphire Rapids CPUs
 
-- 630 nodes, each with
-    - 2 x Intel(R) Xeon(R) CPU E5-2680 v3 (12 cores) @ 2.50 GHz, Multithreading disabled
-    - 128 GB local memory on SSD
-- Varying amounts of main memory (selected automatically by the batch system for you according to
-  your job requirements)
-    * 594 nodes with 2.67 GB RAM per core (64 GB in total): `taurusi[6001-6540,6559-6612]`
-    - 18 nodes with 10.67 GB RAM per core (256 GB in total): `taurusi[6541-6558]`
-- Hostnames: `taurusi[6001-6612]`
-- Slurm Partition: `haswell`
-
-??? hint "Node topology"
-    ![Node topology](misc/i4000.png)
-    {: align=center}
+- 630 diskless nodes, each with
+    - 2 x Intel(R) Xeon(R) CPU E5-2680 v3 (52 cores) @ 2.50 GHz, Multithreading enabled
+    - 512 GB RAM
+- Hostnames: `n1[001-630].barnard.hpc.tu-dresden.de`
+- Login nodes: `login[1-4].barnard.hpc.tu-dresden.de`
 
 ## AMD Rome CPUs + NVIDIA A100
```
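In the new scheme, each cluster gets its own login nodes instead of a Slurm partition on the shared system. A minimal sketch of reaching Barnard, assuming standard SSH access with a ZIH account (the login procedure itself is not part of this commit; `<zih-login>` is a placeholder):

```bash
# Connect to one of the four Barnard login nodes named in the diff above.
ssh <zih-login>@login1.barnard.hpc.tu-dresden.de

# The compute nodes n1[001-630].barnard.hpc.tu-dresden.de are diskless
# and are typically reached through the batch system rather than SSH.
```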
```diff
@@ -52,8 +43,8 @@ All clusters have access to these shared parallel file systems:
     - 2 x AMD EPYC CPU 7352 (24 cores) @ 2.3 GHz, Multithreading available
     - 1 TB RAM
     - 3.5 TB local memory on NVMe device at `/tmp`
-- Hostnames: `taurusi[8001-8034]`
-- Slurm partition: `alpha`
+- Hostnames: `taurusi[8001-8034]` -> `n[1-37].alpha.hpc.tu-dresden.de`
+- Login nodes: `login[1-2].alpha.hpc.tu-dresden.de`
 - Further information on the usage is documented on the site [Alpha Centauri Nodes](alpha_centauri.md)
 
 ## Island 7 - AMD Rome CPUs
```
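The arrow notation records the renaming from `taurusi` hosts to cluster-local names, and both sides use the compact hostlist syntax common in Slurm environments. Where Slurm's tools are available, such an expression can be expanded explicitly; `scontrol show hostnames` is a standard Slurm subcommand, though applying it to these exact names is only an illustration:

```bash
# Expand the bracketed hostlist expression from the diff into
# individual node names (one per line).
scontrol show hostnames "n[1-37].alpha.hpc.tu-dresden.de"
# n1.alpha.hpc.tu-dresden.de
# n2.alpha.hpc.tu-dresden.de
# ...
# n37.alpha.hpc.tu-dresden.de
```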
```diff
@@ -62,8 +53,8 @@ All clusters have access to these shared parallel file systems:
     - 2 x AMD EPYC CPU 7702 (64 cores) @ 2.0 GHz, Multithreading available
     - 512 GB RAM
     - 200 GB local memory on SSD at `/tmp`
-- Hostnames: `taurusi[7001-7192]`
-- Slurm partition: `romeo`
+- Hostnames: `taurusi[7001-7192]` -> `n[1-190].romeo.hpc.tu-dresden.de`
+- Login nodes: `login[1-2].romeo.hpc.tu-dresden.de`
 - Further information on the usage is documented on the site [AMD Rome Nodes](rome_nodes.md)
 
 ## Large SMP System HPE Superdome Flex
```
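The per-node figures in the Romeo section (2 x 64-core EPYC 7702, 512 GB RAM, 200 GB local SSD at `/tmp`) can be checked with standard Linux tools; a sketch, assuming an interactive shell on one of the nodes:

```bash
# Sockets, cores per socket, and threads per core
# (2 x 64 cores, multithreading available).
lscpu | grep -E 'Socket|Core|Thread'

# Total main memory: should report roughly 512 GB.
free -h

# Local SSD scratch mounted at /tmp: roughly 200 GB.
df -h /tmp
```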