Commit cfb09462 authored by Martin Schroschk

Merge branch 'nameing_scheme' into 'preview'

Apply naming scheme

See merge request zih/hpc-compendium/hpc-compendium!168
Parents: a8ce2c3d e7610db8
@@ -17,5 +17,5 @@ during the access procedure. Accept the host verifying and enter your password.
by login nodes in your Taurus home directory. This method requires two conditions: Linux OS,
workstation within the campus network. For other options and details check the Login page.
-Useful links: [Access]**todo link**, [Project Request Form](application/RequestForResources.md),
+Useful links: [Access]**todo link**, [Project Request Form](../application/request_for_resources.md),
[Terms Of Use]**todo link**
# Desktop Cloud Visualization (DCV)
NICE DCV enables remote accessing OpenGL-3D-applications running on the server (taurus) using the
-server's GPUs. If you don't need GL acceleration, you might also want to try our [WebVNC](WebVNC.md)
+server's GPUs. If you don't need GL acceleration, you might also want to try our [WebVNC](web_vnc.md)
solution.
Note that with the 2017 version (and later), while there is still a separate client available, it is
@@ -12,13 +12,13 @@ https://www.nice-software.com/download/nice-dcv-2017
## Access with JupyterHub
**todo**
-**Check out the [new documentation about virtual desktops](../software/VirtualDesktops.md).**
+**Check out the** [new documentation about virtual desktops](../software/virtual_desktops.md).
Click here, to start a session on our JupyterHub:
[https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/\~(partition\~'dcv\~cpuspertask\~'6\~gres\~'gpu\*3a1\~mempercpu\~'2583\~environment\~'production)](https://taurus.hrsk.tu-dresden.de/jupyter/hub/spawn#/~(partition~'dcv~cpuspertask~'6~gres~'gpu*3a1~mempercpu~'2583~environment~'test))\<br
/> This link starts your session on the dcv partition (taurusi210\[7-8\]) with a GPU, 6 CPU cores
and 2583 MB memory per core. Optionally you can modify many different SLURM parameters. For this
-follow the general [JupyterHub](../software/JupyterHub.md) documentation.
+follow the general [JupyterHub](../software/jupyterhub.md) documentation.
Your browser now should load into the JupyterLab application which looks like this:
@@ -76,4 +76,4 @@ A JupyterHub installation offering IPython Notebook is available under:
<https://taurus.hrsk.tu-dresden.de/jupyter>
-See the documentation under [JupyterHub](../software/JupyterHub.md).
+See the documentation under [JupyterHub](../software/jupyterhub.md).
@@ -10,9 +10,9 @@ Also, we have prepared a script that makes launching the VNC server much easier.
## Method with JupyterHub
-**Check out the [new documentation about virtual desktops](../software/VirtualDesktops.md).**
+**Check out the [new documentation about virtual desktops](../software/virtual_desktops.md).**
-The [JupyterHub](../software/JupyterHub.md) service is now able to start a VNC session based on the
+The [JupyterHub](../software/jupyterhub.md) service is now able to start a VNC session based on the
Singularity container mentioned here.
Quickstart: 1 Click here to start a session immediately: \<a
@@ -15,7 +15,7 @@ also trial accounts have to fill in the application form.)\<br />**
It is invariably possible to apply for more/different resources. Whether additional resources are
granted or not depends on the current allocations and on the availablility of the installed systems.
-The terms of use of the HPC systems are only [available in German](TermsOfUse.md) - at the
+The terms of use of the HPC systems are only [available in German](terms_of_use.md) - at the
moment.
## Online Project Application
@@ -45,7 +45,7 @@ general project Details.\<br />Any project have:
<span class="twiki-macro IMAGE" type="frame" align="right"
caption="picture 4: hardware" width="170" zoom="on
">%ATTACHURL%/request_step3_machines.png</span> This step inquire the
-required hardware. You can find the specifications [here](../archive/Hardware.md).
+required hardware. You can find the specifications [here](../archive/hardware.md).
\<br />For your guidance:
- gpu => taurus
@@ -11,10 +11,10 @@ This file system is currently mounted at
We kindly ask our users to remove their large data from the file system.
Files worth keeping can be moved
-- to the new [Intermediate Archive](../data_management/IntermediateArchive.md) (max storage
+- to the new [Intermediate Archive](../data_lifecycle/intermediate_archive.md) (max storage
duration: 3 years) - see
[MigrationHints](#migration-from-cxfs-to-the-intermediate-archive) below,
-- or to the [Log-term Archive](../data_management/PreservationResearchData.md) (tagged with
+- or to the [Log-term Archive](../data_lifecycle/preservation_research_data.md) (tagged with
metadata).
To run the file system without support comes with the risk of losing
@@ -3,15 +3,15 @@
Here, you can find basic information about the hardware installed at ZIH. We try to keep this list
up-to-date.
-- [BULL HPC-Cluster Taurus](TaurusII.md)
-- [SGI Ultraviolet (UV)](HardwareVenus.md)
+- [BULL HPC-Cluster Taurus](taurus_ii.md)
+- [SGI Ultraviolet (UV)](hardware_venus.md)
Hardware hosted by ZIH:
Former systems
-- [PC-Farm Deimos](HardwareDeimos.md)
-- [SGI Altix](HardwareAltix.md)
-- [PC-Farm Atlas](HardwareAtlas.md)
-- [PC-Cluster Triton](HardwareTriton.md)
-- [HPC-Windows-Cluster Titan](HardwareTitan.md)
+- [PC-Farm Deimos](hardware_deimos.md)
+- [SGI Altix](hardware_altix.md)
+- [PC-Farm Atlas](hardware_atlas.md)
+- [PC-Cluster Triton](hardware_triton.md)
+- [HPC-Windows-Cluster Titan](hardware_titan.md)
@@ -13,7 +13,7 @@ installed at ZIH:
|Uranus |512 |506|4 GB|
|Neptun |128 |128 |1 GB|
-The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF](PlatformLSF.md)
+The jobs for these partitions (except Neptun) are scheduled by the [Platform LSF](platform_lsf.md)
batch system running on `mars.hrsk.tu-dresden.de`. The actual placement of a submitted job may
depend on factors like memory size, number of processors, time limit.
@@ -13,17 +13,17 @@ following hardware is installed:
|nodes with 128 GB RAM | 12 |
|nodes with 512 GB RAM | 8 |
-Mars and Deimos users: Please read the [migration hints](MigrateToAtlas.md).
+Mars and Deimos users: Please read the [migration hints](migrate_to_atlas.md).
All nodes share the `/home` and `/fastfs` file system with our other HPC systems. Each
node has 180 GB local disk space for scratch mounted on `/tmp` . The jobs for the compute nodes are
-scheduled by the [Platform LSF](PlatformLSF.md) batch system from the login nodes
+scheduled by the [Platform LSF](platform_lsf.md) batch system from the login nodes
`atlas.hrsk.tu-dresden.de` .
A QDR Infiniband interconnect provides the communication and I/O infrastructure for low latency /
high throughput data traffic.
-Users with a login on the [SGI Altix](HardwareAltix.md) can access their home directory via NFS
+Users with a login on the [SGI Altix](hardware_altix.md) can access their home directory via NFS
below the mount point `/hpc_work`.
## CPU AMD Opteron 6274
@@ -16,14 +16,14 @@ installed:
All nodes share a 68 TB on DDN hardware. Each node has per core 40 GB local disk space for scratch
mounted on `/tmp` . The jobs for the compute nodes are scheduled by the
-[Platform LSF](PlatformLSF.md)
+[Platform LSF](platform_lsf.md)
batch system from the login nodes `deimos.hrsk.tu-dresden.de` .
Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the communication and
I/O infrastructure for low latency / high throughput data traffic. An additional gigabit Ethernet
network is used for control and service purposes.
-Users with a login on the [SGI Altix](HardwareAltix.md) can access their home directory via NFS
+Users with a login on the [SGI Altix](hardware_altix.md) can access their home directory via NFS
below the mount point `/hpc_work`.
## CPU
@@ -13,7 +13,7 @@ the following hardware is installed:
|RAM per node |4 GB |
All nodes share a 4.4 TB SAN. Each node has additional local disk space mounted on `/scratch`. The
-jobs for the compute nodes are scheduled by a [Platform LSF](PlatformLSF.md) batch system running on
+jobs for the compute nodes are scheduled by a [Platform LSF](platform_lsf.md) batch system running on
the login node `phobos.hrsk.tu-dresden.de`.
Two separate Infiniband networks (10 Gb/s) with low cascading switches provide the infrastructure