Add way to gather max. mem from the job

Merged Martin Schroschk requested to merge max-mem into preview
@@ -60,3 +60,38 @@ More information about profiling with Slurm:
- [Slurm Profiling](http://slurm.schedmd.com/hdf5_profile_user_guide.html)
- [`sh5util`](http://slurm.schedmd.com/sh5util.html)

## Memory Consumption of a Job

If you are only interested in the maximum memory consumption of your job, you don't need profiling
at all. This information can be retrieved from within [job files](slurm.md#batch-jobs) as follows:

```bash
#!/bin/bash

#SBATCH [...]

module purge
module load [...]

srun a.exe

# Retrieve the max. memory consumption for all nodes of this job
srun max_mem.sh
```

The script `max_mem.sh` is:

```bash
#!/bin/bash

# Print this node's hostname together with the job's peak memory usage in bytes,
# as recorded by the memory cgroup that Slurm created for this job on this node
echo -n "$(hostname): "
cat /sys/fs/cgroup/memory/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}/memory.max_usage_in_bytes
```
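
If a human-readable value is preferred over the raw byte count, the counter can be piped through
`numfmt`. The following is a sketch only; it assumes that `numfmt` (part of GNU coreutils) is
available on the compute nodes:

```bash
#!/bin/bash

echo -n "$(hostname): "
# numfmt converts the raw byte count into IEC units (KiB, MiB, GiB, ...)
numfmt --to=iec < /sys/fs/cgroup/memory/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}/memory.max_usage_in_bytes
```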
 
 
!!! note

    * Make sure that the script `max_mem.sh` is executable (e.g., `chmod +x max_mem.sh`) and add
      the path to this script if it is not within the same directory (see the example below).
    * The `srun` command is necessary to gather the maximum memory from all nodes within this
      job. Otherwise, you would only get the data from one node.
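
If the script is not located in the submit directory, the corresponding lines of the job file
could look as follows (the path is purely illustrative):

```bash
chmod +x /path/to/max_mem.sh   # make the script executable (needed only once)
srun /path/to/max_mem.sh       # every task prints one line "<hostname>: <bytes>"
```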