Commit fcc0b2f8 authored by Martin Schroschk

Add way to gather max. mem from the job

parent a5b551f8
2 merge requests: !478 Automated merge from preview to main, !464 Add way to gather max. mem from the job
@@ -60,3 +60,37 @@ More information about profiling with Slurm:
- [Slurm Profiling](http://slurm.schedmd.com/hdf5_profile_user_guide.html)
- [`sh5util`](http://slurm.schedmd.com/sh5util.html)
## Memory Consumption of a Job
If you are only interested in the maximum memory consumption of your job, you
do not need profiling at all. This information can be retrieved from within
[batch files](slurm.md#batch_jobs) as follows:
```bash
#!/bin/bash
#SBATCH [...]

module purge
module load [...]

# Run your application
srun a.exe

# Retrieve max. memory for this job for all nodes
srun max_mem.sh
```
The script `max_mem.sh` is:
```bash
#!/bin/bash

# Print the node name and the peak memory usage of this job on this node, as
# recorded by the cgroup memory controller that Slurm confines the job to
echo -n "$(hostname): "
cat /sys/fs/cgroup/memory/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}/memory.max_usage_in_bytes
```
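
Invoked via `srun`, the script runs on each allocated node. For a two-node job,
the output might look like the following (hypothetical hostnames and values):

```
node001: 2147483648
node002: 1398101760
```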
**Remarks:**

* Make sure that `max_mem.sh` is executable (e.g., `chmod +x max_mem.sh`), and provide the path to the script if it is not within the same directory.
* The `srun` command is necessary to gather the max. memory from all nodes within this job; otherwise, you would only get the data from one node.
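
The cgroup counter is reported in bytes. A human-readable variant of
`max_mem.sh` could look like the following sketch, assuming `numfmt` from GNU
coreutils is available on the compute nodes:

```bash
#!/bin/bash

# Sketch: same as max_mem.sh, but formats the byte count human-readably
# (e.g., 2.0G) using numfmt from GNU coreutils (assumed to be installed)
echo -n "$(hostname): "
numfmt --to=iec < /sys/fs/cgroup/memory/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}/memory.max_usage_in_bytes
```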