| `Lustre` | `/scratch/` | 4 PB | global | No | Only accessible via **todo link: workspaces - WorkSpaces**. Not made for billions of files! |
| `Lustre` | `/lustre/ssd` | 40 TB | global | No | Only accessible via **todo link: workspaces - WorkSpaces**. For small I/O operations |
| `BeeGFS` | `/beegfs/global0` | 232 TB | global | No | Only accessible via **todo link: workspaces - WorkSpaces**. Fastest available file system, only for large parallel applications running with millions of small I/O operations |
| `ext4` | `/tmp` | 95.0 GB | local | No | Is cleaned up automatically after the job |
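The global file systems above are only usable via workspaces. As a hedged sketch (the workspace
name and duration are placeholders, see the workspace documentation for the exact options),
allocating a workspace on the scratch file system could look like this:
```Bash
# Sketch: allocate a workspace named "my_data" on the scratch file system for 30 days.
# "my_data" and the duration are placeholders; consult the workspace documentation for details.
ws_allocate -F scratch my_data 30
```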
## Warm Archive
!!! warning
    This is under construction. The functionality is not there yet.
The warm archive is intended as a storage space for the duration of a running HPC-DA project. It
can NOT substitute a long-term archive. It consists of 20 storage nodes with a net capacity of 10 PB.
Within Taurus (including the HPC-DA nodes), the management software Quobyte enables access via:
- the native Quobyte client - read-only from compute nodes, read-write from login and NVMe nodes,
- S3 - read-write from all nodes,
- Cinder (from the OpenStack cluster).
For external access, you can use:
- S3 to `<bucket>.s3.taurusexport.hrsk.tu-dresden.de`
- or normal file transfer via our taurusexport nodes (see [DataManagement](overview.md)).
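As a hedged sketch of the S3 route, assuming `s3cmd` is installed and your project has received
credentials (the bucket name and keys below are placeholders):
```Bash
# Sketch: list the contents of a warm archive bucket from outside Taurus via s3cmd.
# <bucket>, <access-key> and <secret-key> are placeholders for your project's values.
s3cmd --host=s3.taurusexport.hrsk.tu-dresden.de \
      --host-bucket='%(bucket)s.s3.taurusexport.hrsk.tu-dresden.de' \
      --access_key=<access-key> --secret_key=<secret-key> \
      ls s3://<bucket>
```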
An HPC-DA project can apply for storage space in the warm archive. This is limited in capacity and
duration.
TODO
## Recommendations for File System Usage
To work as efficiently as possible, consider the following points:
- Save source code etc. in `/home` or `/projects/...`
- Store checkpoints and other temporary data in `/scratch/ws/...`
- Compile in `/dev/shm` or `/tmp` (see the sketch below)
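A minimal sketch of the last point, assuming a hypothetical CMake project in `~/my_project` (paths,
build system, and binary name are placeholders):
```Bash
# Sketch: build in the in-memory file system /dev/shm, then copy the result back home.
# /dev/shm is volatile and node-local, so nothing should stay there after the job.
mkdir -p /dev/shm/$USER/build
cd /dev/shm/$USER/build
cmake ~/my_project && make -j
cp my_binary ~/my_project/bin/   # placeholder binary name
rm -rf /dev/shm/$USER/build      # free the memory-backed space again
```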
To get high I/O bandwidth:
- Use many clients
- Use many processes (writing to the same file at the same time is possible)
- Use large I/O transfer blocks (see the example below)
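To illustrate the effect of the transfer block size, the following sketch writes the same 1 GiB of
data once with many small and once with few large requests (the workspace path is a placeholder):
```Bash
# Sketch: same amount of data, different request sizes. Replace <your-workspace> accordingly.
dd if=/dev/zero of=/scratch/ws/<your-workspace>/blocktest bs=4K count=262144   # many small requests
dd if=/dev/zero of=/scratch/ws/<your-workspace>/blocktest bs=16M count=64      # few large requests
rm /scratch/ws/<your-workspace>/blocktest
```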
## Cheat Sheet for Debugging File System Issues
Every Taurus user should normally be able to run the following commands to get some information
about their data.
### General
For a first overview, you can simply use the `df` command.
```Bash
df
```
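To narrow the output down to a single mount point with human-readable sizes, e.g. for the scratch
file system:
```Bash
# Human-readable usage summary for a single mount point
df -h /scratch
```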
Alternatively, you can use the `findmnt` command, which can also produce `df`-like output when you
add the `-D` parameter.
```Bash
findmnt -D
```
Optionally, you can use the `-t` parameter to restrict the output to a file system type, or the
`-o` parameter to alter the output columns.
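For instance, a sketch that restricts the output to Lustre mounts and selects a custom set of
columns (the column list is only an example):
```Bash
# df-style output for Lustre mounts only, with explicitly chosen columns
findmnt -D -t lustre -o SOURCE,FSTYPE,SIZE,USED,AVAIL,TARGET
```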
We do **not recommend** using the `du` command for this purpose: it reads large amounts of data
from the file system and can thereby cause issues for other users.
### BeeGFS
Commands to work with the BeeGFS file system.
#### Capacity and File System Health
View storage and inode capacity and utilization for metadata and storage targets.
```Bash
beegfs-df -p /beegfs/global0
```
The `-p` parameter needs to be the mountpoint of the file system and is mandatory.
List storage and inode capacity, reachability and consistency information of each storage target.