Commit ec6d46fa authored by Michael Müller's avatar Michael Müller

Merge branch 'FileSystems.md' into 'preview'

Fixed links and markdown syntax

See merge request zih/hpc-compendium/hpc-compendium!86
parents be848510 d28fdb63
@@ -15,20 +15,18 @@ directory:
`.bashrc_mars` to `$HOME/.bash_history_<machine_name>`. Setting
HISTSIZE and HISTFILESIZE to 10000 helps as well; a minimal sketch follows
after this list.
- Further, you may use private module files to simplify the process of
loading the right installation directories, see
**todo link: private modules - AnchorPrivateModule**.
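
A minimal sketch of the per-machine history setting mentioned above, assuming
bash and the example machine name `mars` (the exact values are illustrative,
not a fixed site convention):

```bash
# In ~/.bashrc_mars (sourced from ~/.bashrc when logged in on that machine):
export HISTFILE="$HOME/.bash_history_mars"  # separate history file per machine
export HISTSIZE=10000                       # commands kept in memory
export HISTFILESIZE=10000                   # commands kept in the history file
```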

### Global /projects file system

For project data, we have a global project directory that allows better
collaboration among the members of an HPC project. However, on compute
nodes `/projects` is mounted read-only, because it is not a file system
for parallel I/O. See below and also check the
**todo link: HPC introduction - %PUBURL%/Compendium/WebHome/HPC-Introduction.pdf**
for more details.

### Backup and snapshots of the file system

- Backup is **only** available in the `/home` and the `/projects` file
@@ -87,17 +85,17 @@ In case a project is above it's limits please...
- *systematically* handle your important data:
- For later use (weeks...months) at the HPC systems, build tar
archives with meaningful names or IDs and store them, e.g., in an
[archive](IntermediateArchive.md) (see the example after this list).
- refer to the hints for [long term preservation for research
data](PreservationResearchData.md).

## Work directories

| File system | Usable directory | Capacity | Availability | Backup | Remarks |
|:------------|:------------------|:---------|:-------------|:-------|:--------|
| `Lustre` | `/scratch/` | 4 PB | global | No | Only accessible via **todo link: workspaces - WorkSpaces**. Not made for billions of files! |
| `Lustre` | `/lustre/ssd` | 40 TB | global | No | Only accessible via **todo link: workspaces - WorkSpaces**. For small I/O operations |
| `BeeGFS` | `/beegfs/global0` | 232 TB | global | No | Only accessible via **todo link: workspaces - WorkSpaces**. Fastest available file system, only for large parallel applications running with millions of small I/O operations |
| `ext4` | `/tmp` | 95.0 GB | local | No | is cleaned up after the job automatically |
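
The workspace mechanism referenced in the table is typically driven by the
workspace command line tools; a sketch, assuming they are set up as on other
workspace-based HPC systems (workspace name, duration, and file system label
are examples):

```bash
ws_allocate -F scratch data_analysis 30   # allocate a workspace for 30 days on /scratch
ws_list                                   # show your existing workspaces
```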

### Large files in /scratch

@@ -120,6 +118,10 @@ number in this directory with:
affect existing files. But all files that **will be created** in this
directory will be distributed over 20 OSTs.
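
Striping can be inspected and changed with the Lustre `lfs` tool; a sketch for
the case described above (the directory path is a placeholder):

```bash
lfs getstripe /scratch/ws/my-workspace        # show current stripe settings
lfs setstripe -c 20 /scratch/ws/my-workspace  # new files use 20 OSTs; existing files keep their layout
```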
## Warm archive
TODO

## Recommendations for file system usage

To work as efficiently as possible, consider the following points
...