diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
index 46ff6483a19cb37e73f6403fc5d300bb6fb9fc95..924d98077b2489ba5f2516f3e21fe49004747ad2 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/workspaces.md
@@ -44,7 +44,7 @@ beegfs
 
     The default filesystem is `scratch`. If you prefer another filesystem (cf. section
     [List Available Filesystems](#list-available-filesystems)), you have to explictly
-    provide the option `--filesystem=<fs>` to the workspace commands.
+    provide the option `--filesystem=<filesystem>` to the workspace commands.
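+
+    For example (a sketch; the same option applies to the other workspace commands):
+
+    ```console
+    marie@login$ ws_list --filesystem=ssd
+    ```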
 
 ### List Current Workspaces
 
@@ -171,10 +171,10 @@ Options:
 !!! example "Workspace allocation on specific filesystem"
 
     In order to allocate a workspace on a non-default filesystem, the option
-    `--filesystem <filesystem>` is required.
+    `--filesystem=<filesystem>` is required.
 
     ```console
-    marie@login$ ws_allocate --filesystem scratch_fast test-workspace 3
+    marie@login$ ws_allocate --filesystem=scratch_fast test-workspace 3
     Info: creating workspace.
     /lustre/ssd/ws/marie-test-workspace
     remaining extensions  : 2
@@ -188,7 +188,7 @@ Options:
     day starting 7 days prior to expiration. We strongly recommend setting this e-mail reminder.
 
     ```console
-    marie@login$ ws_allocate --reminder 7 --mailaddress marie.testuser@tu-dresden.de test-workspace 90
+    marie@login$ ws_allocate --reminder=7 --mailaddress=marie.testuser@tu-dresden.de test-workspace 90
     Info: creating workspace.
     /scratch/ws/marie-test-workspace
     remaining extensions  : 10
@@ -209,7 +209,7 @@ group workspaces.
 The lifetime of a workspace is finite and different filesystems (storage systems) have different
 maximum durations. A workspace can be extended multiple times, depending on the filesystem.
 
-| Filesystem (use with parameter `--filesystem=<fs>`) | Duration, days | Extensions | [Filesystem Feature](../jobs_and_resources/slurm.md#filesystem-features) | Remarks |
+| Filesystem (use with parameter `--filesystem=<filesystem>`) | Duration, days | Extensions | [Filesystem Feature](../jobs_and_resources/slurm.md#filesystem-features) | Remarks |
 |:-------------------------------------|---------------:|-----------:|:-------------------------------------------------------------------------|:--------|
 | `scratch` (default)                  | 100            | 10         | `fs_lustre_scratch2`                                                     | Scratch filesystem (`/lustre/scratch2`, symbolic link: `/scratch`) with high streaming bandwidth, based on spinning disks |
 | `ssd`                                | 30             | 2          | `fs_lustre_ssd`                                                          | High-IOPS filesystem (`/lustre/ssd`, symbolic link: `/ssd`) on SSDs. |
@@ -433,7 +433,7 @@ the following example (which works [for the program g16](../software/nanoscale_s
 
     # Allocate workspace for this job. Adjust time span to time limit of the job (-d <N>).
     WSNAME=computation_$SLURM_JOB_ID
-    export WSDDIR=$(ws_allocate -F ssd -n ${WSNAME} -d 2)
+    export WSDIR=$(ws_allocate --filesystem=ssd --name=${WSNAME} --duration=2)
     echo ${WSDIR}
 
     # Check allocation
@@ -476,7 +476,7 @@ For a series of jobs or calculations that work on the same data, you should allo
 once, e.g., in `scratch` for 100 days:
 
 ```console
-marie@login$ ws_allocate -F scratch my_scratchdata 100
+marie@login$ ws_allocate --filesystem=scratch my_scratchdata 100
 Info: creating workspace.
 /scratch/ws/marie-my_scratchdata
 remaining extensions  : 2
@@ -505,7 +505,7 @@ this is mounted read-only on the compute nodes, so you cannot use it as a work d
 jobs!
 
 ```console
-marie@login$ ws_allocate -F warm_archive my_inputdata 365
+marie@login$ ws_allocate --filesystem=warm_archive my_inputdata 365
 /warm_archive/ws/marie-my_inputdata
 remaining extensions  : 2
 remaining time in days: 365
@@ -551,7 +551,7 @@ to others (if in the same group) via `ws_list -g`.
     in the project `p_number_crunch`, she can allocate a so-called group workspace.
 
     ```console
-    marie@login$ ws_allocate --group --name numbercrunch --duration 30
+    marie@login$ ws_allocate --group --name=numbercrunch --duration=30
     Info: creating workspace.
     /scratch/ws/0/marie-numbercrunch
     remaining extensions  : 10
@@ -610,6 +610,11 @@ wrong name. Use only the short name that is listed after `id:` when using `ws_li
 **Q**: I forgot to specify an e-mail alert when allocating my workspace. How can I add the
 e-mail alert functionality to an existing workspace?
 
-**A**: You can add the e-mail alert by "overwriting" the workspace settings via `ws_allocate -x -m
-<mail address> -r <days> -n <ws-name> -d <duration> -F <fs>`. (This will lower the remaining
-extensions by one.)
+**A**: You can add the e-mail alert by "overwriting" the workspace settings via
+
+```console
+marie@login$ ws_allocate --extension --mailaddress=<mail address> --reminder=<days> \
+             --name=<workspace-name> --duration=<duration> --filesystem=<filesystem>
+```
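+
+For instance, for the workspace `test-workspace` from the examples above (the values are
+illustrative only):
+
+```console
+marie@login$ ws_allocate --extension --mailaddress=marie.testuser@tu-dresden.de --reminder=7 \
+             --name=test-workspace --duration=90 --filesystem=scratch
+```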
+
+This will lower the remaining extensions by one.
diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
index 59a33d2049e94f10a792a1155e30505c8a2442d6..1529565f8555712da22f15e16141d8be3ad7d301 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/barnard_test.md
@@ -6,85 +6,45 @@ and workflows for production there. For general hints please refer to these site
 * [Details on architecture](/jobs_and_resources/architecture_2023),
 * [Description of the migration](migration_2023.md).
 
-Please provide your feedback directly via our ticket system. For better processing,
+We value your feedback. Please provide it directly via our ticket system. For better processing,
 please add "Barnard:" as a prefix to the subject of the [support ticket](../support/support).
 
 Here, you can find few hints which might help you with the first steps.
 
 ## Login to Barnard
 
-* All users and projects from Taurus now can work on Barnard.
-* They can use `login[1-4].barnard.hpc.tu-dresden.de` to access the system
+All users and projects from Taurus can now work on Barnard.
+
+You can use `login[2-4].barnard.hpc.tu-dresden.de` to access the system
 from campus (or VPN). [Fingerprints](/access/key_fingerprints/#barnard)
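+
+Once your SSH key is set up (see below), a connection might look like this (assuming your
+username is `marie`):
+
+```console
+marie@local$ ssh marie@login2.barnard.hpc.tu-dresden.de
+```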
 
-* All users have *new* home file systems, this means:
+All users have **new empty HOME** filesystems. This means you first have to...
+
+??? "... install your public ssh key on the system"
+
     - Please create a new SSH keypair with ed25519 encryption, secured with
-    a passphrase. Please refer to this
-    [page for instructions](../../access/ssh_login#before-your-first-connection).
+        a passphrase. Please refer to this
+        [page for instructions](../../access/ssh_login#before-your-first-connection).
     - After login, add the public key to your `.ssh/authorized_keys` file
-    on Barnard.
+        on Barnard.
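+
+    A minimal sketch of the key creation step on your local machine (the key file name is
+    only an example):
+
+    ```console
+    marie@local$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_barnard
+    ```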
 
 ## Data Management
 
 * The `/project` filesystem is the same on Taurus and Barnard
 (mounted read-only on the compute nodes).
-* The **new work filesystem** is `/data/horse`. The slower `/data/walrus` can be used
-to store e.g. results. Both can be accesed via workspaces. Please refer to the
-[workspace page](../../data_lifecycle/workspaces/), if you are not familiar with workspaces. To list
-all available workspace filessystem, invoke the command  `ws_list -l`.
-
-!!! Note
-
-    **To work with your data from Taurus you might have to move/copy them to the new storages.**
+* The new work filesystem is `/data/horse`.
+* The slower `/data/walrus` can be considered a substitute for the old
+  `/warm_archive`. It is mounted **read-only** on the compute nodes and can be
+  used to store e.g. results.
 
-For this, we have four new [datamover nodes](/data_transfer/datamover) that have mounted all storages
-of the old and new system. (Do not use the datamovers from Taurus!)
-
-Please use the command `dtinfo` to get the current mount points:
-
-```
-marie@login1> dtinfo
-PARTITION  AVAIL  TIMELIMIT  NODES  STATE NODELIST
-datamover*    up   infinite      1  down* service5
-datamover*    up   infinite      3   idle service[1-4]
---------------------------------------------------------------------------------
-directory on datamover      mounting clusters   directory on cluster
-
-/data/old/home              Taurus              /home
-/data/old/lustre/scratch2   Taurus              /scratch
-/data/old/lustre/ssd        Taurus              /lustre/ssd
-/data/old/beegfs            Taurus              /beegfs
-/data/old/warm_archive      Taurus              /warm_archive
-/data/horse                 Barnard             /data/horse
-/data/walrus                Barnard             /data/walrus
---------------------------------------------------------------------------------
-```
-
-* In May (!) we have copied all workspaces from `/scratch/ws` data to
-` /data/horse/lustre/scratch2/ws`. This replication took a **few weeks**. Ideally you
-can now just *move* their *content* to a newly created workspace. - Of course,
-everything newer than May is not there.
-* Please manually copy your needed data from your `beegfs` or `ssd` workspaces. These
-old storages will be purged, probably by the end of November.
-
-The process of syncing data from `/warm_archive` to `/data/walrus` is still ongoing.
-
-### Transfer Data to New Home Directory
-
-Your personal (old) home directory at Taurus will not be automatically transferred to the new Barnard
-system. **You are responsible for this task.** Please do not copy your entire home, but consider
-this opportunity for cleaning up you data. E.g., it might make sense to delete outdated scripts, old
-log files, etc., and move other files to an archive filesystem. Thus, please transfer only selected
-directories and files that you need on the new system.
-
-The well-known [datamover tools](../../data_transfer/datamover/) are available to run such transfer
-jobs under Slurm. The steps are as follows:
-
-1. Login to Barnard: `ssh login[1-4].barnard.tu-dresden.de`
-1. The command `dtinfo` will provide you the mountpoints
+Both `/data/horse` and `/data/walrus` can be accessed via workspaces. Please refer to the
+[workspace page](../../data_lifecycle/workspaces/), if you are not familiar with workspaces.
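+
+A minimal sketch of allocating a workspace on the new work filesystem (the filesystem name
+`horse` is an assumption, please check `ws_list -l` for the names that are actually offered):
+
+```console
+marie@barnard$ ws_allocate --filesystem=horse --name=my-workspace --duration=30
+```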
 
-    ```console
-    marie@barnard$ dtinfo
+??? "Tips on workspaces"
+    * To list all available workspace filesystems, invoke the command `ws_list -l`.
+    * Please use the command `dtinfo` to get the current mount points:
+
+    ```console
+    marie@login1$ dtinfo
     [...]
     directory on datamover      mounting clusters   directory on cluster
 
@@ -94,32 +54,110 @@ jobs under Slurm. The steps are as follows:
     [...]
     ```
 
-1. Use the `dtls` command to list your files on the old home directory: `marie@barnard$ dtls
-   /data/old/home/marie`
-1. Use `dtcp` command to invoke a transfer job, e.g.,
+!!! warning
 
-   ```console
-   marie@barnard$ dtcp --recursive /data/old/home/marie/<useful data> /home/marie/
-   ```
+    All old filesystems will be shut down by the end of 2023.
 
-   **Note**, please adopt the source and target paths to your needs. All available options can be
-   queried via `dtinfo --help`.
+    To work with your data from Taurus, you might have to move or copy it to the new storages.
 
-!!! warning
+For this, we have four new [datamover nodes](/data_transfer/datamover) that have all storages
+of the old and the new system mounted. (Do not use the datamovers from Taurus!)
+
+??? "Migration from Home Directory"
+
+    Your personal (old) home directory at Taurus will not be automatically transferred to the new Barnard
+    system. **You are responsible for this task.** Please do not copy your entire home, but consider
+    this opportunity for cleaning up your data. E.g., it might make sense to delete outdated scripts, old
+    log files, etc., and move other files to an archive filesystem. Thus, please transfer only selected
+    directories and files that you need on the new system.
+
+    The well-known [datamover tools](../../data_transfer/datamover/) are available to run such transfer
+    jobs under Slurm. The steps are as follows:
 
-    Please be aware that there is **no synchronisation process** between your home directories at
-    Taurus and Barnard. Thus, with the very first transfer, they will become divergent.
+    1. Login to Barnard: `ssh login[2-4].barnard.hpc.tu-dresden.de`
+    1. The command `dtinfo` will provide you with the mount points:
 
-    We recommand to **take some minutes for planing the transfer process**. Do not act with
-    precipitation.
+        ```console
+        marie@barnard$ dtinfo
+        [...]
+        directory on datamover      mounting clusters   directory on cluster
+
+        /data/old/home              Taurus              /home
+        /data/old/lustre/scratch2   Taurus              /scratch
+        /data/old/lustre/ssd        Taurus              /lustre/ssd
+        [...]
+        ```
+
+    1. Use the `dtls` command to list your files on the old home directory: `marie@barnard$ dtls
+    /data/old/home/marie`
+    1. Use the `dtcp` command to invoke a transfer job, e.g.,
+
+        ```console
+        marie@barnard$ dtcp --recursive /data/old/home/marie/<useful data> /home/marie/
+        ```
+
+    **Note:** Please adapt the source and target paths to your needs. All available options can be
+    queried via `dtinfo --help`.
+
+    !!! warning
+
+        Please be aware that there is **no synchronisation process** between your home directories at
+        Taurus and Barnard. Thus, after the very first transfer, they will become divergent.
+
+        We recommend **taking a few minutes to plan the transfer process**. Do not
+        rush.
+
+??? "Migration from `/lustre/ssd` or `/beegfs`"
+
+    **You** are entirely responsible for the transfer of these data to the new location.
+    Start the `dtrsync` process as soon as possible. (And maybe repeat it at a later time.)
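+
+    A sketch of such a transfer for an `ssd` workspace (the target workspace path is a
+    hypothetical example, please allocate your own target workspace first):
+
+    ```console
+    marie@barnard$ dtrsync -a /data/old/lustre/ssd/ws/marie-my-workspace/ /data/horse/ws/marie-my-workspace/
+    ```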
+
+??? "Migration from `/lustre/scratch2` aka `/scratch`"
+
+    We are synchronizing this filesystem (**last sync: October 18**) to `/data/horse/lustre/scratch2/`.
+
+    Please do **NOT** copy these data yourself. Instead, check if they are already synchronized
+    to `/data/horse/lustre/scratch2/ws`.
+
+    In case you need to update this (Gigabytes, not Terabytes!), please run `dtrsync`, for example:
+    `dtrsync -a /data/old/lustre/scratch2/ws/0/my-workspace/newest/  /data/horse/lustre/scratch2/ws/0/my-workspace/newest/`
+
+??? "Migration from `/warm_archive`"
+
+    The process of syncing data from `/warm_archive` to `/data/walrus/warm_archive` is still
+    ongoing, and we are preparing another sync.
+
+    Please do **NOT** copy these data yourself. Instead, check if they are already synchronized
+    to `/data/walrus/warm_archive/ws`.
+
+    In case you need to update this (Gigabytes, not Terabytes!), please run `dtrsync`, for example:
+    `dtrsync -a /data/old/warm_archive/ws/my-workspace/newest/  /data/walrus/warm_archive/ws/my-workspace/newest/`
+
+When the last compute system has been migrated, the old filesystems will be
+set to write-protected and we will start a final synchronization (scratch and walrus).
+The synchronization target directories `/data/horse/lustre/scratch2/ws` and
+`/data/walrus/warm_archive/ws/` will not be deleted automatically in the meantime.
 
 ## Software
 
-Please use `module spider` to identify the software modules you need to load. Like
+Please use `module spider` to identify the software modules you need to load. Like
 on Taurus.
 
+The default release version is 23.10.
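+
+For example, to look up the available versions of a package (a sketch; the module name only
+serves as an illustration):
+
+```console
+marie@barnard$ module spider Python
+```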
+
 ## Slurm
 
 * We are running the most recent Slurm version.
 * You must not use the old partition names.
 * Not all things are tested.
+
+## Updates after your feedback (as of October 19)
+
+* A **second synchronization** from `/scratch` started on **October 18** and is
+  now nearly done.
+* A first, incomplete synchronization from `/warm_archive` has been done (see above).
+  With support from NEC, we will transfer the rest in the coming weeks.
+* The **data transfer tools** now work fine.
+* After fixing overly tight security restrictions, **all users can log in** now.
+* **ANSYS** now starts: please check if your specific use case works.
+* **login1** is under construction; do not use it at the moment. Workspace creation does
+  not work there.