diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
index 9f949354a1a1a1e7b24d2bcf5aa50e15496a9348..0eeefc7bcc3267de8fa03227cd279fd8700f7ce6 100644
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
+++ b/doc.zih.tu-dresden.de/docs/data_lifecycle/file_systems.md
@@ -102,8 +102,29 @@ directory will be distributed over 20 OSTs.
 
 ## Warm Archive
 
+!!! warning
+    This page is under construction. The functionality is not available yet.
+
+The warm archive is intended as a storage space for the duration of a running HPC-DA project. It
+cannot substitute a long-term archive. It consists of 20 storage nodes with a net capacity of 10 PB.
+Within Taurus (including the HPC-DA nodes), the management software "Quobyte" enables access via
+
+- the native Quobyte client - read-only from compute nodes, read-write
+  from login and NVMe nodes,
+- S3 - read-write from all nodes,
+- Cinder (from the OpenStack cluster).
+
+For external access, you can use:
+
+- S3 to `<bucket>.s3.taurusexport.hrsk.tu-dresden.de`,
+- or normal file transfer via our taurusexport nodes (see [DataManagement](overview.md)).
+
+An HPC-DA project can apply for storage space in the warm archive. The granted space is limited in
+both capacity and duration. TODO
+
+
 ## Recommendations for File System Usage
 
 To work as efficient as possible, consider the following points
 
diff --git a/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md b/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
deleted file mode 100644
index c98087cc4a6ec262abc2409f5c1f83b5b4d973fb..0000000000000000000000000000000000000000
--- a/doc.zih.tu-dresden.de/docs/data_lifecycle/warm_archive.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Warm Archive
-
-**This page is under construction. The functionality is not there, yet.**
-
-The warm archive is intended a storage space for the duration of a running HPC-DA project. It can
-NOT substitute a long-term archive. It consists of 20 storage nodes with a net capacity of 10 PB.
-Within Taurus (including the HPC-DA nodes), the management software "Quobyte" enables access via
-
-- native quobyte client - read-only from compute nodes, read-write
-  from login and nvme nodes
-- S3 - read-write from all nodes,
-- Cinder (from OpenStack cluster).
-
-For external access, you can use:
-
-- S3 to `<bucket>.s3.taurusexport.hrsk.tu-dresden.de`
-- or normal file transfer via our taurusexport nodes (see [DataManagement](overview.md)).
-
-An HPC-DA project can apply for storage space in the warm archive. This is limited in capacity and
-duration.
diff --git a/doc.zih.tu-dresden.de/mkdocs.yml b/doc.zih.tu-dresden.de/mkdocs.yml
index bc9ba2245dd373e0ca0b0e28ef0e515448a767d4..3a35cda61737cfe95558f5497b4b12d0e9ebd3b1 100644
--- a/doc.zih.tu-dresden.de/mkdocs.yml
+++ b/doc.zih.tu-dresden.de/mkdocs.yml
@@ -78,7 +78,6 @@ nav:
     - BeeGFS: data_lifecycle/bee_gfs.md
     - Intermediate Archive: data_lifecycle/intermediate_archive.md
     - Filesystems: data_lifecycle/file_systems.md
-    - Warm Archive: data_lifecycle/warm_archive.md
    - HPC Storage Concept 2019: data_lifecycle/hpc_storage_concept2019.md
    - Preservation of Research Data: data_lifecycle/preservation_research_data.md
    - Structuring Experiments: data_lifecycle/experiments.md
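For the S3 access path described in the added section, a minimal client-side sketch with `boto3` could look as follows. The endpoint `https://s3.taurusexport.hrsk.tu-dresden.de` is inferred from the bucket naming scheme `<bucket>.s3.taurusexport.hrsk.tu-dresden.de` in the text; the bucket name, object keys, and access keys are placeholders and assume credentials issued together with the warm archive quota.

```python
# Sketch of S3 access to the warm archive via boto3 (assumptions: endpoint
# s3.taurusexport.hrsk.tu-dresden.de, virtual-hosted bucket addressing as in
# <bucket>.s3.taurusexport.hrsk.tu-dresden.de, and project-issued access keys;
# bucket and object names below are placeholders).
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.taurusexport.hrsk.tu-dresden.de",
    aws_access_key_id="<project-access-key>",
    aws_secret_access_key="<project-secret-key>",
    config=Config(s3={"addressing_style": "virtual"}),
)

# Upload a result file into the project's warm archive bucket ...
s3.upload_file("results.tar.gz", "<bucket>", "experiments/results.tar.gz")

# ... and list what is currently stored there.
for obj in s3.list_objects_v2(Bucket="<bucket>").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Virtual-hosted addressing is chosen here only because it matches the `<bucket>.s3.taurusexport.hrsk.tu-dresden.de` naming scheme; path-style addressing would work the same way if the service is configured for it.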