Document /tmp on Barnard
With Barnard we have a change that may surprise many users: diskless nodes. Contrary to https://gitlab.hrz.tu-chemnitz.de/zih/hpcsupport/hpc-compendium/-/blob/preview/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview_2023.md, not all nodes seem to be diskless: some have a ~2TB local disk, as indicated by the SLURM feature "local_disk".
It should be documented how to request a node with a local disk through SLURM, and that /tmp
may not be suitable storage in general and may even be full when a job starts.
Examples:
s3248973@n1604 ~ $ df -h /tmp
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p1  1.8T  1.3G  1.7T   1% /tmp
s3248973@n1180 ~ $ df -h /tmp/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/live-rw  5.0G  5.0G   45M 100% /
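Since /tmp may already be nearly full (as on n1180 above), a job should verify the available space before writing scratch data there. A minimal sketch; the 1 GiB requirement and the fallback directory `$HOME/scratch` are hypothetical placeholders, not from the source:

```shell
# Check free space on /tmp before using it as scratch.
REQUIRED_KB=$((1 * 1024 * 1024))   # hypothetical requirement: 1 GiB in KiB

# GNU df: print only the available column in KiB, strip header and spaces
AVAIL_KB=$(df --output=avail -k /tmp | tail -n 1 | tr -d ' ')

if [ "$AVAIL_KB" -ge "$REQUIRED_KB" ]; then
    SCRATCH=/tmp
else
    SCRATCH="$HOME/scratch"        # placeholder fallback; pick a suitable filesystem
    mkdir -p "$SCRATCH"
fi
echo "Using scratch directory: $SCRATCH"
```

The same check could run at the top of a job script so the job fails early with a clear message instead of dying mid-run on a full /tmp.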
See the HPC introduction which mentions this:
local disks only on a few nodes:
– Rome, AlphaCentauri, smp8, ML
– on Barnard, use the feature --constraint=local_disk
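A batch script requesting such a node could look like the following sketch. Only the `--constraint=local_disk` flag is from the source; the job name, node count, and the scratch-directory convention under /tmp are illustrative assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=local-disk-example   # hypothetical job name
#SBATCH --nodes=1
#SBATCH --constraint=local_disk         # request a node that has a local disk

# Use a job-specific directory on the node-local /tmp as scratch space.
# Fallback value only so the script also runs outside SLURM for testing.
SLURM_JOB_ID=${SLURM_JOB_ID:-manual-test}
SCRATCH=/tmp/$SLURM_JOB_ID
mkdir -p "$SCRATCH"

# ... run the actual computation here, writing temporary files to $SCRATCH ...

rm -rf "$SCRATCH"   # clean up the local disk before the job ends
```

Cleaning up at the end matters on shared nodes: leftover scratch data is exactly what fills /tmp for the next job.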