diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
index 3a9c2100078df20071aabde8fea3bf453fa0bc18..239b4f25b2586eb6a229d896b0f1eab3b569056f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
@@ -11,37 +11,63 @@ The hardware specification is documented on the page
 
     Please read the following information carefully and follow the provided instructions.
 
-There is a last step to go in order to finalize the
-[process of becoming `Alpha Centauri` a Stand-Alone Cluster](#becoming-a-stand-alone-cluster). This
-last step of the migration process is to include `Alpha Centauri` into the Infiniband infrastructure
+We are in the
+[process of making `Alpha Centauri` a Stand-Alone Cluster](#becoming-a-stand-alone-cluster).
+The next planned step is to integrate the cluster into the Infiniband infrastructure
 of the new cluster [`Barnard`](barnard.md).
 
 !!! hint "Maintenance Work"
 
-    On **June 10+11**, we will shut down and migrate `Alpha Centauri` to the Barnard Infiniband
-    infrastructure. As consequences,
-        * BeeGFS will no longer be available,
-        * all `Barnard` filesystems (`/home`, `/software`, `/data/horse`, `/data/walrus`) can be
-          reached faster,
-        * the new Lustre filesystem `/data/octopus` will be dedicated to `Alpha` users.
+    On **June 4+5**, we will shut down and migrate `Alpha Centauri` to the Barnard Infiniband
+    infrastructure.
 
-    We already have started migrating your data from `/beegfs` to `/data/octopus`.
+As a consequence,
+
+* BeeGFS will no longer be available,
+* all `Barnard` filesystems (`/home`, `/software`, `/data/horse`, `/data/walrus`) can be
+  used normally.
+
+For your convenience, we have already started migrating your data from `/beegfs` to
+`/data/horse/beegfs`. Once the downtime starts, we will synchronize this data again.
 
 !!! hint "User Action Required"
+
+    The less data we have to synchronize, the faster the overall process will be.
+    So please clean up as much as possible, as soon as possible.
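+
+    One way to get an overview with standard tools (a sketch; `<your_data>` is a
+    placeholder for your own directory under `/beegfs`):
+
+    ```console
+    marie@login$ du -sh /beegfs/<your_data>          # total size of this directory
+    marie@login$ find /beegfs/<your_data> -size +1G  # list files larger than 1 GiB
+    ```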
+
+Important for your work:
+
+* Do not add terabytes of data to `/beegfs` if you cannot "consume" it before June 4.
+* After the final data transfer to `/data/horse/beegfs` has completed, you have to
+  move your data into normal workspaces on `/data/horse` (see the sketch after this list).
+* Be prepared to adapt your workflows to the new paths.
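+
+A minimal sketch of the move into a workspace, assuming the workspace tools work here
+as on the other ZIH systems; the workspace name `my_data` and the source path are
+placeholders:
+
+```console
+marie@login$ ws_allocate -F horse my_data 90   # allocate a workspace on /data/horse
+marie@login$ ws_find my_data                   # print the path of the new workspace
+marie@login$ rsync -a /data/horse/beegfs/<your_data>/ "$(ws_find my_data)/"
+```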
 
-    Important for your work now is:
+What happens afterward:
 
-        * On **May 3** we will mount BeeGFS read-only on Alpha Centauri and start the final sync
-          from `/beegfs` to `/data/octopus` for your convenience.
-        * Afterwards, the BeeGFS storage will be migrated:
-          * complete deletion of all user data
-          * complete recabling of their Infiniband connection
-          * Software+Firmware updates
-          * set-up of a new WEKA filesystem for high I/O demands (on the BeeGFS hardware)
+* complete deletion of all user data in `/beegfs`
+* complete recabling of the storage nodes (BeeGFS hardware)
+* software and firmware updates
+* set-up of a new WEKA filesystem for high I/O demands on the same hardware
 
-        * Do not add terabytes of data to /beegfs if you cannot "consume" it before May 5.
-        * Instead, you can already create workspaces for /data/octopus on login1.barnard and bring your data there for computations after May 6.
-        * If you have millions of files you might consider the removal of data.
 
 In case of any question regarding this maintenance or required action, please do not hesitate to
 contact the [HPC support team](../support/support.md).