diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
index 40534bd3bcd13a6bb1e01e6d8dd90b5a9cfdbb4e..5324f550e30e66b6ec6830cf7fddbb921b0dbdbf 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
@@ -21,7 +21,6 @@ It has 34 nodes, each with:
 
 The easiest way is using the [module system](../software/modules.md).
 The software for the `alpha` partition is available in `modenv/hiera` module environment.
-
 To check the available modules for `modenv/hiera`, use the command
 
 ```bash
diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index c43e2890880d3376f88f29c6cb882df4ec37dc91..165dad639c6ab2a90f99724aa43c05c0fea20d17 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -5,10 +5,10 @@
 
 ### Distributed TensorFlow
 
 TODO
-
+
 ### Distributed Pytorch
 
-**hint: just copied some old content as starting point**
+just copied some old content as a starting point
 
 #### Using Multiple GPUs with PyTorch
 
@@ -180,4 +180,4 @@ install command after loading the NCCL module:
 ```Bash
 module load NCCL/2.3.7-fosscuda-2018b
 HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
-```
\ No newline at end of file
+```
diff --git a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
index b75d6031e9774bf6d6c275c83cd8914a750ce50b..07ec3f6cee2cb9c6a45d69cd49e2c6e68affaebc 100644
--- a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
+++ b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
@@ -196,11 +196,13 @@ There are three script preparation steps for OmniOpt:
 
 ### Configure and Run OmniOpt
 
-As a starting point, configuring OmniOpt is done via a GUI at [https://imageseg.scads.ai/omnioptgui/](https://imageseg.scads.ai/omnioptgui/).
-This GUI guides through the configuration process and as result the config file is created automatically according to the GUI input.
-If you are more familiar with using OmniOpt later on, this config file can be modified directly without using the GUI.
+As a starting point, configuring OmniOpt is done via a GUI at
+[https://imageseg.scads.ai/omnioptgui/](https://imageseg.scads.ai/omnioptgui/).
+This GUI guides you through the configuration process and, as a result, the config file is created
+automatically according to the GUI input. Once you are more familiar with OmniOpt, you can
+modify this config file directly without using the GUI.
 
-A screenshot of the GUI, including a properly configuration for the MNIST fashion example is shown below.
+A screenshot of the GUI, including a proper configuration for the MNIST fashion example, is shown below.
 The GUI, in which the displayed values are already entered, can be reached [here](https://imageseg.scads.ai/omnioptgui/?maxevalserror=5&mem_per_worker=1000&projectname=mnist-fashion-optimization-set-1&partition=alpha&searchtype=tpe.suggest&objective_program=bash%20%2Fscratch%2Fws%2Fpath%2Fto%2Fyou%2Fscript%2Frun-mnist-fashion.sh%20(%24x_0)%20(%24x_1)%20(%24x_2)&param_0_type=hp.randint&param_1_type=hp.randint&number_of_parameters=3){:target="_blank"}.
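Review note, not part of the patch: the Horovod install command in the `distributed_training.md` hunk packs five build flags into one line. A hedged sketch of the same command with the flags split into exports (assuming the same module environment as in the diff) may be easier to read and maintain in a job script:

```shell
# Site-specific module providing NCCL, as named in the docs:
# module load NCCL/2.3.7-fosscuda-2018b
export HOROVOD_GPU_ALLREDUCE=NCCL     # use NCCL for GPU allreduce
export HOROVOD_GPU_BROADCAST=NCCL     # use NCCL for GPU broadcast
export HOROVOD_WITHOUT_TENSORFLOW=1   # skip the TensorFlow bindings
export HOROVOD_WITH_PYTORCH=1         # require the PyTorch bindings
export HOROVOD_WITHOUT_MXNET=1        # skip the MXNet bindings
pip install --no-cache-dir horovod
```

The exported `HOROVOD_*` variables are read by Horovod's build at `pip install` time, so this is equivalent to the inline form in the patch.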