diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
index 98d4eab612f7699fb84de7de1af4bdd0bcb4d2ee..0a0004d7eb70090662d53705ac02c4ef116a1f0f 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
@@ -6,140 +6,158 @@ and workflows for production there. For general hints please refer to these site
 * [Details on architecture](/jobs_and_resources/architecture_2023),
 * [Description of the migration](migration_2023.md).
 
-!!! "Feedback welcome"
-     Please provide your feedback directly via our ticket system. For better processing,
-     please add "Barnard:" as a prefix to the subject of the [support ticket](../support/support).
-
+We value your feedback. Please provide it directly via our ticket system. For better processing,
+please add "Barnard:" as a prefix to the subject of the [support ticket](../support/support).
 
-Here, you can find few hints which might help you with the first steps.
+Here you can find a few hints which might help you with the first steps.
 
 ## Login to Barnard
 
-* All users and projects from Taurus now can work on Barnard.
-* They can use `login[2-4].barnard.hpc.tu-dresden.de` to access the system
+All users and projects from Taurus can now work on Barnard.
+
+You can use `login[2-4].barnard.hpc.tu-dresden.de` to access the system
 from campus (or VPN). [Fingerprints](/access/key_fingerprints/#barnard)
 
-* All users have **new HOME** file systems, this means you have to do two things:
-    
-    1. ??? Install your public ssh key on the system
-        
-        - Please create a new SSH keypair with ed25519 encryption, secured with
-            a passphrase. Please refer to this
-            [page for instructions](../../access/ssh_login#before-your-first-connection).
-        - After login, add the public key to your `.ssh/authorized_keys` file
+All users have **new, empty HOME** file systems; this means you first have to...
+
+??? "... install your public SSH key on the system"
+
+    - Please create a new SSH keypair of type ed25519, secured with
+        a passphrase. Please refer to this
+        [page for instructions](../../access/ssh_login#before-your-first-connection).
+    - After login, add the public key to your `.ssh/authorized_keys` file
             on Barnard.
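+
+    A minimal sketch of these two steps, run on your local machine (the key file name is only an
+    example, and `ssh-copy-id` appends the public key to `~/.ssh/authorized_keys` on Barnard,
+    provided you can already log in there):
+
+    ```console
+    marie@local$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_barnard   # choose a passphrase when prompted
+    marie@local$ ssh-copy-id -i ~/.ssh/id_ed25519_barnard.pub marie@login2.barnard.hpc.tu-dresden.de
+    ```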
-    
-   1. ??? "Transfer Data to New Home Directory"
 
-        Your personal (old) home directory at Taurus will not be automatically transferred to the new Barnard
-        system. **You are responsible for this task.** Please do not copy your entire home, but consider
-        this opportunity for cleaning up you data. E.g., it might make sense to delete outdated scripts, old
-        log files, etc., and move other files to an archive filesystem. Thus, please transfer only selected
-        directories and files that you need on the new system.
+## Data Management
+
+* The `/project` filesystem is the same on Taurus and Barnard
+(mounted read-only on the compute nodes).
+* The new work filesystem is `/data/horse`.
+* The slower `/data/walrus` can be considered a substitute for the old
+  `/warm_archive`. It is mounted **read-only** on the compute nodes
+  and can be used to store e.g. results.
+
+Both `/data/horse` and `/data/walrus` can be accessed via workspaces. Please refer to the
+[workspace page](../../data_lifecycle/workspaces/), if you are not familiar with workspaces.
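+
+For example, a workspace on the new work filesystem could be created with `ws_allocate`
+(a sketch, assuming the same workspace tools as on Taurus; name and duration are placeholders,
+and the exact filesystem label can be checked with `ws_list -l`):
+
+```console
+marie@barnard$ ws_allocate -F horse my-workspace 90   # filesystem label, name, and duration are examples
+```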
+
+??? "Tips on workspaces"
+    * To list all available workspace filesystems, invoke the command `ws_list -l`.
+    * Please use the command `dtinfo` to get the current mount points:
+
+      ```console
+      marie@login1> dtinfo
+      [...]
+      directory on datamover      mounting clusters   directory on cluster
+
+      /data/old/home              Taurus              /home
+      /data/old/lustre/scratch2   Taurus              /scratch
+      /data/old/lustre/ssd        Taurus              /lustre/ssd
+      [...]
+      ```
+
+!!! warning
+
+    All old filesystems will be shut down by the end of 2023.
+
+    To work with your data from Taurus, you might have to move or copy it to the new storages.
 
-        The well-known [datamover tools](../../data_transfer/datamover/) are available to run such transfer
-        jobs under Slurm. The steps are as follows:
+For this, we have four new [datamover nodes](/data_transfer/datamover) which mount all filesystems
+of both the old and the new system. (Do not use the datamovers from Taurus!)
 
-        1. Login to Barnard: `ssh login[1-4].barnard.tu-dresden.de`
-        1. The command `dtinfo` will provide you the mountpoints
+??? "Migration from Home Directory"
 
-            ```console
-            marie@barnard$ dtinfo
-            [...]
-            directory on datamover      mounting clusters   directory on cluster
+    Your personal (old) home directory at Taurus will not be automatically transferred to the new Barnard
+    system. **You are responsible for this task.** Please do not copy your entire home, but consider
+    this opportunity for cleaning up your data. E.g., it might make sense to delete outdated scripts, old
+    log files, etc., and move other files to an archive filesystem. Thus, please transfer only selected
+    directories and files that you need on the new system.
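+
+    For instance, to see what occupies most space in your old home directory before deciding what
+    to transfer, standard tools like `du` can help (run on a Taurus login node; the path is an
+    example):
+
+    ```console
+    marie@taurus$ du -sh /home/marie/* | sort -h
+    ```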
 
-            /data/old/home              Taurus              /home
-            /data/old/lustre/scratch2   Taurus              /scratch
-            /data/old/lustre/ssd        Taurus              /lustre/ssd
-            [...]
-            ```
+    The well-known [datamover tools](../../data_transfer/datamover/) are available to run such transfer
+    jobs under Slurm. The steps are as follows:
 
-        1. Use the `dtls` command to list your files on the old home directory: `marie@barnard$ dtls
-        /data/old/home/marie`
-        1. Use `dtcp` command to invoke a transfer job, e.g.,
+    1. Log in to Barnard: `ssh login[2-4].barnard.hpc.tu-dresden.de`
+    1. The command `dtinfo` will provide you with the mount points:
 
         ```console
-        marie@barnard$ dtcp --recursive /data/old/home/marie/<useful data> /home/marie/
+        marie@barnard$ dtinfo
+        [...]
+        directory on datamover      mounting clusters   directory on cluster
+
+        /data/old/home              Taurus              /home
+        /data/old/lustre/scratch2   Taurus              /scratch
+        /data/old/lustre/ssd        Taurus              /lustre/ssd
+        [...]
         ```
 
-        **Note**, please adopt the source and target paths to your needs. All available options can be
-        queried via `dtinfo --help`.
+    1. Use the `dtls` command to list your files in the old home directory, e.g.,
+       `dtls /data/old/home/marie`
+    1. Use the `dtcp` command to invoke a transfer job, e.g.,
 
-        !!! warning
+        ```console
+        marie@barnard$ dtcp --recursive /data/old/home/marie/<useful data> /home/marie/
+        ```
 
-            Please be aware that there is **no synchronisation process** between your home directories at
-            Taurus and Barnard. Thus, with the very first transfer, they will become divergent.
+    **Note**, please adapt the source and target paths to your needs. All available options can be
+    queried via `dtinfo --help`.
 
-            We recommand to **take some minutes for planing the transfer process**. Do not act with
-            precipitation.
+    !!! warning
 
-## Data Management
+        Please be aware that there is **no synchronisation process** between your home directories at
+        Taurus and Barnard. Thus, after the very first transfer, they will become divergent.
 
-* The `/project` filesystem is the same on Taurus and Barnard
-(mounted read-only on the compute nodes).
-* The **new work filesystem** is `/data/horse`. 
-* The slower `/data/walrus` shall substitute the old `/warm_archive` - mounted **read-only** on
-  the compute nodes. It can be used to store e.g. results.
+        We recommend **taking some minutes to plan the transfer process**. Do not rush.
 
-These two (horse + walrus) can be accesed via workspaces. Please refer to the
-[workspace page](../../data_lifecycle/workspaces/), if you are not familiar with workspaces. To list
-all available workspace filessystem, invoke the command  `ws_list -l`.
+??? "Migration from `/lustre/ssd` or `/beegfs`"
 
-!!! Note
+    **You** are entirely responsible for the transfer of these data to the new location.
+    Start the `dtrsync` process as soon as possible (and maybe repeat it at a later time); see the sketch below.
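+
+    A minimal sketch of such a transfer (both paths are placeholders only; look up the actual source
+    directory with `dtls`, and create the target workspace on the new system first):
+
+    ```console
+    marie@barnard$ dtrsync -a /data/old/lustre/ssd/<my-old-workspace>/ /data/horse/<my-new-workspace>/   # paths are placeholders
+    ```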
 
-    **To work with your data from Taurus you might have to move/copy them to the new storages.**
+??? "Migration from `/lustre/scratch2` aka `/scratch`"
 
-For this, we have four new [datamover nodes](/data_transfer/datamover) that have mounted all storages
-of the old and new system. (Do not use the datamovers from Taurus!)
+    We are synchronizing this filesystem (**last sync: October 18**) to `/data/horse/lustre/scratch2/`.
 
-Please use the command `dtinfo` to get the current mount points:
+    Please do **NOT** copy those data yourself. Instead, check if they have already been synchronized
+    to `/data/horse/lustre/scratch2/ws`.
 
-```
-marie@login1> dtinfo
-PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST
-datamover*    up   infinite      1  down* service5
-datamover*    up   infinite      3   idle service[1-4]
---------------------------------------------------------------------------------
-directory on datamover      mounting clusters   directory on cluster
+    In case you need to update this (Gigabytes, not Terabytes!), please run `dtrsync` as in
+    `dtrsync -a /data/old/lustre/scratch2/ws/0/my-workspace/newest/ /data/horse/lustre/scratch2/ws/0/my-workspace/newest/`
 
-/data/old/home              Taurus              /home
-/data/old/lustre/scratch2   Taurus              /scratch
-/data/old/lustre/ssd        Taurus              /lustre/ssd
-/data/old/beegfs            Taurus              /beegfs
-/data/old/warm_archive      Taurus              /warm_archive
-/data/horse                 Barnard             /data/horse
-/data/walrus                Barnard             /data/walrus
---------------------------------------------------------------------------------
-```
+??? "Migration from `/warm_archive`"
 
-* In May we have copied all workspaces from `/scratch/ws` data to
-` /data/horse/lustre/scratch2/ws`. This replication took a **few weeks**. Ideally you
-can now just **move** the content to a newly created workspace. 
-A second synchronization has started on **October, 18** and is nearly done.
+    The process of syncing data from `/warm_archive` to `/data/walrus/warm_archive` is still
+    ongoing, and we are preparing another synchronization run.
 
-* Please manually copy your needed data from your `beegfs` or `ssd` workspaces. These
-old storages will be purged, probably by the end of November.
+    Please do **NOT** copy those data yourself. Instead, check if they have already been synchronized
+    to `/data/walrus/warm_archive/ws`.
 
-The process of syncing data from `/warm_archive` to `/data/walrus` is still ongoing.
+    In case you need to update this (Gigabytes, not Terabytes!), please run `dtrsync` as in
+    `dtrsync -a /data/old/warm_archive/ws/my-workspace/newest/ /data/walrus/warm_archive/ws/my-workspace/newest/`
+
+When the last compute system has been migrated, the old filesystems will be
+set write-protected, and we will start a final synchronization (scratch+walrus).
+The target directories of this synchronization, `/data/horse/lustre/scratch2/ws` and
+`/data/walrus/warm_archive/ws/`, will not be deleted automatically in the meantime.
 
 ## Software
 
-Please use `module spider` to identify the software modules you need to load. Like
+Please use `module spider` to identify the software modules you need to load, just like
 on Taurus.
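+
+For example, to search for a specific package (the package name is only an illustration):
+
+```console
+marie@barnard$ module spider Python
+```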
 
+The default release version is 23.10.
+
 ## Slurm
 
 * We are running the most recent Slurm version.
 * You must not use the old partition names.
 * Not all things are tested.
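+
+To check which partitions are actually available on Barnard, the standard Slurm command `sinfo`
+can be used, e.g. for a summarized overview:
+
+```console
+marie@barnard$ sinfo -s
+```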
 
-## Updates after your feedback
+## Updates after your feedback (state: October 19)
 
-* A **second synchronization** from `/scratch` has started on **October, 18** and is nearly done.
-* The **data tranfer tools** now work fine. 
+* A **second synchronization** from `/scratch` has started on **October 18** and is
+  now nearly done.
+* A first, incomplete synchronization from `/warm_archive` has been done (see above).
+  With support from NEC, we will transfer the rest over the next weeks.
+* The **data transfer tools** now work fine.
-* After fixing too tight security restrictions, **all users can login** now.
+* After fixing overly tight security restrictions, **all users can log in** now.
 * **ANSYS** now starts: please check if your specific use case works.
 * **login1** is under construction, do not use it at the moment. Workspace creation does
   not work there.
-
-