diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
index 218bd3d4b186efcd583c3fb6c092b4e0dbad3180..5080156389d87e9588ceeaf0cac2ec8ede6c42ce 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -21,6 +21,10 @@ users and the ZIH.
 - Direct access to these nodes is granted via IP whitelisting (contact
   hpcsupport@zih.tu-dresden.de) - otherwise use TU Dresden VPN.
 
+!!! warning "Runtime limit"
+
+    Any process on the login nodes is stopped after 5 minutes of CPU time.
+
 ## AMD Rome CPUs + NVIDIA A100
 
 - 32 nodes, each with
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
index 1b0b7e4343c271fca4782e1de6b9038c9e771895..1b1910b7131176b387c3b3f398241e30cffdb8ea 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/partitions_and_limits.md
@@ -8,10 +8,21 @@ smaller jobs. Thus, restrictions w.r.t. [memory](#memory-limits) and
 
 ## Runtime Limits
 
+!!! warning "Runtime limits on login nodes"
+
+    There is a CPU time limit for processes on the login nodes. If you run an application outside
+    of a compute job, it is stopped automatically after 5 minutes with the message
+
+    ```
+    CPU time limit exceeded
+    ```
+
+    Please start a job using the [batch system](slurm.md).
+
 !!! note "Runtime limits are enforced."
 
-    This means, a job will be canceled as soon as it exceeds its requested limit. Currently, the
-    maximum run time is 7 days.
+    A job is canceled as soon as it exceeds its requested limit. Currently, the maximum run time is
+    7 days.
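+
+    The requested limit is set via the `--time` parameter, for example in a batch script (a
+    minimal sketch; `my_application` stands for your own program, and the resource values are
+    placeholders to adjust to your needs):
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --time=01:00:00     # requested wall time: 1 hour (maximum: 7 days)
+    #SBATCH --ntasks=1          # number of tasks
+
+    ./my_application
+    ```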
 
 Shorter jobs come with multiple advantages:
 
@@ -43,8 +54,7 @@ not capable of checkpoint/restart can be adapted. Please refer to the section
 
 !!! note "Memory limits are enforced."
 
-    This means that jobs which exceed their per-node memory limit will be killed automatically by
-    the batch system.
+    Jobs which exceed their per-node memory limit are killed automatically by the batch system.
 
 Memory requirements for your job can be specified via the `sbatch/srun` parameters: