diff --git a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
index 6e8a1bf1cc12e36e4aa15bd46b9eaf84e24171bc..c82a68ba50c9ab75c2fdc31d647d8a8827db694c 100644
--- a/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
+++ b/doc.zih.tu-dresden.de/docs/data_transfer/overview.md
@@ -1,6 +1,6 @@
 # Data Transfer

-## Data Transfer to/from ZIH Systems: Export Nodes
+## Data Transfer to/from ZIH Systems: Dataport Nodes

 There are at least three tools for exchanging data between your local workstation and ZIH
 systems: `scp`, `rsync`, and `sftp`. Please refer to the offline or online man pages of
@@ -8,9 +8,14 @@ There are at least three tools for exchanging data between your local workstatio
 [rsync](https://man7.org/linux/man-pages/man1/rsync.1.html), and
 [sftp](https://man7.org/linux/man-pages/man1/sftp.1.html) for detailed information.

-No matter what tool you prefer, it is crucial that the **export nodes** are used as preferred way to
+No matter what tool you prefer, it is crucial to use the **dataport nodes** as the preferred way to
 copy data to/from ZIH systems. Please follow the link to the documentation on
-[export nodes](export_nodes.md) for further reference and examples.
+[dataport nodes](dataport_nodes.md) for further reference and examples.
+
+!!! warning "Note"
+
+    The former **export nodes** are still available as long as the outdated filesystems (`scratch`,
+    etc.) are accessible. Their end of life is planned for May 2024.

 ## Data Transfer Inside ZIH Systems: Datamover

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
index 3a400076e75bcc1631e740998749ebac0aedc84f..66cd2e834ae0c988dcbfde4193b684a525180490 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/hardware_overview.md
@@ -113,23 +113,27 @@ only from their respective login nodes.

-## Login and Export Nodes
+## Login and Dataport Nodes

 !!! Note " **On December 11 2023 Taurus will be decommissioned for good**."

     Do not use Taurus for production anymore.

-- 4 Login-Nodes `tauruslogin[3-6].hrsk.tu-dresden.de`
-    - Each login node is equipped with 2x Intel(R) Xeon(R) CPU E5-2680 v3 with 24 cores in total @
-      2.50 GHz, Multithreading disabled, 64 GB RAM, 128 GB SSD local disk
-    - IPs: 141.30.73.\[102-105\]
-- 2 Data-Transfer-Nodes `taurusexport[3-4].hrsk.tu-dresden.de`
+- Login-Nodes
+    - Individual login nodes for each cluster. See the sections below.
+- 2 Data-Transfer-Nodes
+    - 2 servers without interactive login, only available via file transfer protocols
+      (`rsync`, `ftp`)
+    - `dataport[3-4].hpc.tu-dresden.de`
+    - IPs: 141.30.73.\[4,5\]
+    - Further information on the usage is documented on the page
+      [Dataport Nodes](../data_transfer/dataport_nodes.md)
+- *outdated*: 2 Data-Transfer-Nodes `taurusexport[3-4].hrsk.tu-dresden.de`<!--TODO: remove after release in May 2024-->
     - DNS Alias `taurusexport.hrsk.tu-dresden.de`
     - 2 Servers without interactive login, only available via file transfer protocols
       (`rsync`, `ftp`)
-    - IPs: 141.30.73.\[82,83\]
-    - Further information on the usage is documented on the site
-      [Export Nodes](../data_transfer/export_nodes.md)
+    - available as long as outdated filesystems (e.g. `scratch`) are accessible
+
 ## Barnard

diff --git a/doc.zih.tu-dresden.de/docs/quickstart/getting_started.md b/doc.zih.tu-dresden.de/docs/quickstart/getting_started.md
index fe871b3d9adf2c22f5a7ac84edb5b2a953522f50..c9495588d57e726c00a31da961508e5096439170 100644
--- a/doc.zih.tu-dresden.de/docs/quickstart/getting_started.md
+++ b/doc.zih.tu-dresden.de/docs/quickstart/getting_started.md
@@ -195,31 +195,31 @@ The approach depends on the data volume: up to 100 MB or above.
     transfer section.

 ### Transferring Data *To/From* ZIH HPC Systems
-
+<!-- [NT] currently not available
 ???+ example "`scp` for transferring data to ZIH HPC systems"

     Copy the file `example.R` from your local machine to a workspace on the ZIH systems:

     ```console
-    marie@local$ scp /home/marie/Documents/example.R marie@export.hpc.tu-dresden.de:/data/horse/ws/your_workspace/
+    marie@local$ scp /home/marie/Documents/example.R marie@dataport1.hpc.tu-dresden.de:/data/horse/ws/your_workspace/
     Password:
     example.R                                          100%  312    32.2KB/s   00:00``
     ```

-    Note, the target path contains `export.hpc.tu-dresden.de`, which is one of the
-    so called [export nodes](../data_transfer/export_nodes.md) that allows for data transfer from/to the outside.
+    Note, the target path contains `dataport1.hpc.tu-dresden.de`, which is one of the
+    so called [dataport nodes](../data_transfer/dataport_nodes.md) that allows for data transfer from/to the outside.

 ???+ example "`scp` to transfer data from ZIH HPC systems to local machine"

     Copy the file `results.csv` from a workspace on the ZIH HPC systems to your local machine:

     ```console
-    marie@local$ scp marie@export.hpc.tu-dresden.de:/data/horse/ws/marie-test-workspace/results.csv /home/marie/Documents/
+    marie@local$ scp marie@dataport1.hpc.tu-dresden.de:/data/horse/ws/marie-test-workspace/results.csv /home/marie/Documents/
     ```

     Feel free to explore further [examples](http://bropages.org/scp) of the `scp` command
-    and possibilities of the [export nodes](../data_transfer/export_nodes.md).
-
+    and possibilities of the [dataport nodes](../data_transfer/dataport_nodes.md).
+-->
 !!! caution "Terabytes of data"

     If you are planning to move terabytes or even more from an outside machine into ZIH systems,

diff --git a/doc.zih.tu-dresden.de/docs/software/gpu_programming.md b/doc.zih.tu-dresden.de/docs/software/gpu_programming.md
index 9c25859d509a0d621a5b1d6413e00391dd29a1ca..84eb94c667e550c55e6f8c73cb3672a95c88b278 100644
--- a/doc.zih.tu-dresden.de/docs/software/gpu_programming.md
+++ b/doc.zih.tu-dresden.de/docs/software/gpu_programming.md
@@ -260,7 +260,7 @@ metrics and `--export-profile` to generate a report file, like this:
 marie@compute$ nvprof --analysis-metrics --export-profile <output>.nvvp ./application [options]
 ```

-[Transfer the report file to your local system](../data_transfer/export_nodes.md) and analyze it in
+[Transfer the report file to your local system](../data_transfer/dataport_nodes.md) and analyze it in
 the Visual Profiler (`nvvp`) locally. This will give the smoothest user experience.
 Alternatively, you can use [X11-forwarding](../access/ssh_login.md). Refer to the documentation for
 details about the individual
@@ -317,7 +317,7 @@ needs, this analysis may be sufficient to identify optimizations targets.
 The graphical user interface version can be used for a thorough analysis of your previously
 generated report file. For an optimal user experience, we recommend a local installation of NVIDIA
 Nsight Systems. In this case, you can
-[transfer the report file to your local system](../data_transfer/export_nodes.md).
+[transfer the report file to your local system](../data_transfer/dataport_nodes.md).
 Alternatively, you can use [X11-forwarding](../access/ssh_login.md).
 The graphical user interface is usually available as `nsys-ui`.

@@ -361,7 +361,7 @@ manually.

 This report file can be analyzed in the graphical user interface profiler. Again, we recommend you
 generate a report file on a compute node and
-[transfer the report file to your local system](../data_transfer/export_nodes.md).
+[transfer the report file to your local system](../data_transfer/dataport_nodes.md).
 Alternatively, you can use [X11-forwarding](../access/ssh_login.md).
 The graphical user interface is usually available as `ncu-ui` or `nv-nsight-cu`.
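The touched pages all list `rsync` alongside `scp` as a supported transfer tool. As a companion to
the `scp` examples above, here is a minimal `rsync` sketch via a dataport node; the local directory,
workspace name, and the `dataport1.hpc.tu-dresden.de` hostname simply mirror the quickstart example
and are illustrative, so adjust them to your own paths:

```console
# Mirror a local directory into a workspace on the ZIH systems via a dataport node.
# -a keeps permissions and timestamps, -v lists transferred files,
# -P shows progress and keeps partial files so an interrupted transfer can resume.
marie@local$ rsync -avP /home/marie/Documents/results/ marie@dataport1.hpc.tu-dresden.de:/data/horse/ws/marie-test-workspace/results/
```

Unlike `scp`, `rsync` only transfers files that changed since the last run, which usually makes it
the better choice for repeated or large transfers.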