From c7a98d424fba9acfdd54f030484a18137dfaf145 Mon Sep 17 00:00:00 2001
From: Martin Schroschk <martin.schroschk@tu-dresden.de>
Date: Mon, 7 Oct 2024 07:25:29 +0200
Subject: [PATCH] Remove outdated note on Taurus split-up

---
 doc.zih.tu-dresden.de/docs/jobs_and_resources/julia.md | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/julia.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/julia.md
index fee65e563..e193e54aa 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/julia.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/julia.md
@@ -4,15 +4,6 @@ The HPE Superdome Flex is a large shared memory node. It is especially well suit
 intensive application scenarios, for example to process extremely large data sets completely in main
 memory or in very fast NVMe memory.
 
-## Becoming a Stand-Alone Cluster
-
-The former HPC system Taurus is partly switched-off and partly split up into separate clusters
-until the end of 2023. One such upcoming separate cluster is what you have known as partition
-`julia` so far. Since February 2024, `Julia` is now a stand-alone cluster with
-
-* homogenous hardware resources available at `julia.hpc.tu-dresden.de`,
-* and own Slurm batch system.
-
 ## Hardware Resources
 
 The hardware specification is documented on the page
-- 
GitLab