diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
index 273a87710602b62feb97c342335b4c44f30ad09e..821dfe7ad2bf1126afb10122cc3504159dd27899 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/slurm_profiling.md
@@ -60,3 +60,37 @@ More information about profiling with Slurm:
 
 - [Slurm Profiling](http://slurm.schedmd.com/hdf5_profile_user_guide.html)
 - [`sh5util`](http://slurm.schedmd.com/sh5util.html)
+
+## Memory Consumption of a Job
+
+If you are only interested in the maximum memory consumption of your job, you
+don't need profiling at all. This information can be retrieved from within
+[batch files](slurm.md#batch_jobs) as follows:
+
+```bash
+#!/bin/bash
+
+#SBATCH [...]
+
+module purge
+module load [...]
+
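+# Run the actual application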
+srun a.exe
+
+# Retrieve the maximum memory consumption on all nodes of this job
+srun --ntasks-per-node=1 max_mem.sh
+```
+
+The script `max_mem.sh` is:
+
+```bash
+#!/bin/bash
+
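+# Print this node's hostname and the peak memory usage of the job's cgroup
+# on this node (cgroup v1 interface, value in bytes)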
+echo -n "$(hostname): "
+cat /sys/fs/cgroup/memory/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}/memory.max_usage_in_bytes
+```
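+
+Since `memory.max_usage_in_bytes` reports a plain byte count, a slightly
+extended variant of `max_mem.sh` (a sketch, assuming the same cgroup v1 path
+as above) converts the value to MiB for easier reading:
+
+```bash
+#!/bin/bash
+
+# Read the peak memory usage of this job's cgroup on this node (in bytes)
+max_bytes=$(cat /sys/fs/cgroup/memory/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}/memory.max_usage_in_bytes)
+
+# Report the value per node, converted to MiB
+echo "$(hostname): $(( max_bytes / 1024 / 1024 )) MiB"
+```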
+
+**Remarks**:
+
+- Make sure that `max_mem.sh` is executable (e.g., `chmod +x max_mem.sh`) and
+  provide the path to this script if it does not reside in the job's working
+  directory.
+- Launching `max_mem.sh` via `srun --ntasks-per-node=1` is necessary to gather
+  the maximum memory consumption once from every node of this job. Without
+  `srun`, you would only get the data of the node the batch script runs on.
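+
+If the job has already finished, a similar figure can also be obtained from
+the accounting database with `sacct` (provided that job accounting is
+configured), which reports the peak resident set size (`MaxRSS`) per job
+step. Note that this value is sampled by Slurm and may therefore differ
+slightly from the cgroup value above. Replace `<jobid>` with the actual job
+ID:
+
+```bash
+# Query the accounting database for the peak memory usage of a finished job
+sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed
+```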