diff --git a/doc.zih.tu-dresden.de/docs/software/machine_learning.md b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
index 635eda55791493e137c8af8a81b5844e3f4acd66..ecbb9e146276aff67d6079579f2163fa6d7dbf74 100644
--- a/doc.zih.tu-dresden.de/docs/software/machine_learning.md
+++ b/doc.zih.tu-dresden.de/docs/software/machine_learning.md
@@ -1,29 +1,30 @@
 # Machine Learning

 This is an introduction of how to run machine learning applications on ZIH systems.
-For machine learning purposes, we recommend to use the [Alpha](#alpha-partition) and/or
-[ML](#ml-partition) partitions.
+For machine learning purposes, we recommend using the partitions [Alpha](#alpha-partition) and/or
+[ML](#ml-partition).

 ## ML Partition

-The compute nodes of the ML partition are built on the base of [Power9 architecture](https://www.ibm.com/it-infrastructure/power/power9)
-from IBM. The system was created for AI challenges, analytics and working with
-data-intensive workloads and accelerated databases.
+The compute nodes of the partition ML are built on the
+[Power9 architecture](https://www.ibm.com/it-infrastructure/power/power9) from IBM. The system was created
+for AI challenges, analytics and working with data-intensive workloads and accelerated databases.

 The main feature of the nodes is the ability to work with the
 [NVIDIA Tesla V100](https://www.nvidia.com/en-gb/data-center/tesla-v100/) GPU with **NV-Link**
-support that allows a total bandwidth with up to 300 gigabytes per second (GB/sec). Each node on the
-ML partition has 6x Tesla V-100 GPUs. You can find a detailed specification of the partition in our
+support that allows a total bandwidth of up to 300 GB/s. Each node on the
+partition ML has 6x Tesla V100 GPUs. You can find a detailed specification of the partition in our
 [Power9 documentation](../jobs_and_resources/power9.md).

 !!! note
-    The ML partition is based on the Power9 architecture, which means that the software built
+
+    The partition ML is based on the Power9 architecture, which means that the software built
     for x86_64 will not work on this partition. Also, users need to use the modules which are
-    specially made for the ml partition (from `modenv/ml`).
+    specially built for this architecture (from `modenv/ml`).

 ### Modules

-On the ML partition load the module environment:
+On the partition ML, load the module environment:

 ```console
 marie@ml$ module load modenv/ml
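The `module load` above assumes you already have a shell on a node of the partition ML. A minimal sketch of how such an interactive session could be requested with Slurm is given below; the resource values and the time limit are purely illustrative, and your project may additionally require an `--account` option:

```console
marie@login$ srun --partition=ml --nodes=1 --ntasks=1 --gres=gpu:1 --time=01:00:00 --pty bash
marie@ml$ module load modenv/ml
```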
@@ -32,19 +33,19 @@ The following have been reloaded with a version change: 1) modenv/scs5 => moden

 ### Power AI

-There are tools provided by IBM, that work on `ml` partition and are related to AI tasks.
+There are tools provided by IBM that work on the partition ML and are related to AI tasks.
 For more information see our [Power AI documentation](power_ai.md).

-## Alpha partition
+## Alpha Partition

-Another partition for machine learning tasks is Alpha. It is mainly dedicated to [ScaDS.AI](https://scads.ai/)
-topics. Each node on Alpha has 2x AMD EPYC CPUs, 8x NVIDIA A100-SXM4 GPUs, 1TB RAM and 3.5TB local
-space (`/tmp`) on an NVMe device. You can find more details of the partition in our [Alpha Centauri](../jobs_and_resources/alpha_centauri.md)
-documentation.
+Another partition for machine learning tasks is Alpha. It is mainly dedicated to
+[ScaDS.AI](https://scads.ai/) topics. Each node on Alpha has 2x AMD EPYC CPUs, 8x NVIDIA A100-SXM4
+GPUs, 1 TB RAM and 3.5 TB local space (`/tmp`) on an NVMe device. You can find more details of the
+partition in our [Alpha Centauri](../jobs_and_resources/alpha_centauri.md) documentation.

 ### Modules

-On the **Alpha** partition load the module environment:
+On the partition **Alpha**, load the module environment:

 ```console
 marie@alpha$ module load modenv/hiera
@@ -52,8 +53,9 @@ The following have been reloaded with a version change: 1) modenv/ml => modenv/
 ```

 !!! note
-    On Alpha, the most recent modules are build in hiera. Alternative modules might be build in
-    scs5.
+
+    On partition Alpha, the most recent modules are built in `hiera`. Alternative modules might
+    be built in `scs5`.

 ## Machine Learning via Console

@@ -71,30 +73,31 @@ R also supports machine learning via console. It does not require a virtual envi
 different package management.

 For more details on machine learning or data science with R see
-[data analytics with R](../data_analytics_with_r/#r-console).
+[data analytics with R](data_analytics_with_r.md#r-console).

 ## Machine Learning with Jupyter

 The [Jupyter Notebook](https://jupyter.org/) is an open-source web application that allows you to
-create documents containing live code, equations, visualizations, and narrative text. [JupyterHub](../access/jupyterhub.md)
-allows to work with machine learning frameworks (e.g. TensorFlow or PyTorch) on ZIH systems and to
-run your Jupyter notebooks on HPC nodes.
+create documents containing live code, equations, visualizations, and narrative text.
+[JupyterHub](../access/jupyterhub.md) allows you to work with machine learning frameworks (e.g.
+TensorFlow or PyTorch) on ZIH systems and to run your Jupyter notebooks on HPC nodes.

 After accessing JupyterHub, you can start a new session and configure it. For machine learning
-purposes, select either **Alpha** or **ML** partition and the resources, your application requires.
+purposes, select either partition **Alpha** or **ML** and the resources your application requires.

-In your session you can use [Python](data_analytics_with_python.md/#jupyter-notebooks), [R](data_analytics_with_r.md/#r-in-jupyterhub)
-or [RStudio](data_analytics_with_rstudio.md) for your machine learning and data science topics.
+In your session you can use [Python](data_analytics_with_python.md#jupyter-notebooks),
+[R](data_analytics_with_r.md#r-in-jupyterhub) or [RStudio](data_analytics_with_rstudio.md) for your
+machine learning and data science topics.

 ## Machine Learning with Containers

-Some machine learning tasks require using containers. In the HPC domain, the [Singularity](https://singularity.hpcng.org/)
-container system is a widely used tool. Docker containers can also be used by Singularity. You can
-find further information on working with containers on ZIH systems in our
-[containers documentation](containers.md).
+Some machine learning tasks require using containers. In the HPC domain, the
+[Singularity](https://singularity.hpcng.org/) container system is a widely used tool. Docker
+containers can also be used by Singularity. You can find further information on working with
+containers on ZIH systems in our [containers documentation](containers.md).

-There are two sources for containers for Power9 architecture with
-TensorFlow and PyTorch on the board:
+There are two sources for containers for the Power9 architecture with TensorFlow and PyTorch
+on board:

 * [TensorFlow-ppc64le](https://hub.docker.com/r/ibmcom/tensorflow-ppc64le):
   Community-supported `ppc64le` docker container for TensorFlow.
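One possible way to use such a container with Singularity is sketched below. The tag `latest` and the resulting file name `tensorflow-ppc64le_latest.sif` are assumptions; check the "tag" tab of the registry (see the note further below) for the versions that actually exist:

```console
marie@ml$ singularity pull docker://ibmcom/tensorflow-ppc64le:latest
marie@ml$ singularity exec --nv tensorflow-ppc64le_latest.sif python -c "import tensorflow as tf; print(tf.__version__)"
```

The `--nv` flag makes the NVIDIA GPUs of the node visible inside the container.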
@@ -102,6 +105,7 @@ TensorFlow and PyTorch on the board:
   Official Docker container with TensorFlow, PyTorch and many other packages.

 !!! note
+
     You could find other versions of software in the container on the "tag" tab on the docker
     web page of the container.

@@ -125,6 +129,7 @@ The following NVIDIA libraries are available on all nodes:
 | cuDNN | `/usr/local/cuda/targets/ppc64le-linux` |

 !!! note
+
     For optimal NCCL performance it is recommended to set the **NCCL_MIN_NRINGS** environment
     variable during execution. You can try different values but 4 should be a pretty good starting
     point.
@@ -133,7 +138,7 @@ The following NVIDIA libraries are available on all nodes:
 marie@compute$ export NCCL_MIN_NRINGS=4
 ```

-### HPC related Software
+### HPC-Related Software

 The following HPC related software is installed on all nodes:

@@ -151,15 +156,15 @@ The following HPC related software is installed on all nodes:
 There are many different datasets designed for research purposes. If you would like to download some
 of them, keep in mind that many machine learning libraries have direct access to public datasets
 without downloading it, e.g. [TensorFlow Datasets](https://www.tensorflow.org/datasets). If you
-still need to download some datasets use [DataMover](../data_transfer/datamover.md).
+still need to download some datasets, use the [Datamover](../data_transfer/datamover.md) machine.

-### The ImageNet dataset
+### The ImageNet Dataset

 The ImageNet project is a large visual database designed for use in visual object recognition
 software research. In order to save space in the filesystem by avoiding to have multiple duplicates
 of this lying around, we have put a copy of the ImageNet database (ILSVRC2012 and ILSVR2017) under
-`/scratch/imagenet` which you can use without having to download it again. For the future,
-the ImageNet dataset will be available in
+`/scratch/imagenet` which you can use without having to download it again. For the future, the
+ImageNet dataset will be available in
 [Warm Archive](../data_lifecycle/workspaces.md#mid-term-storage). ILSVR2017 also includes a dataset
 for recognition objects from a video. Please respect the corresponding
 [Terms of Use](https://image-net.org/download.php).
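As a usage sketch for the dataset section: the path `/scratch/imagenet` is taken from the text above, while the script name and its `--data-dir` option are hypothetical placeholders for your own training code:

```console
marie@ml$ ls /scratch/imagenet
marie@ml$ python my_training_script.py --data-dir /scratch/imagenet
```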