From b6483746c064a4ccfaceb27ee307c52c5cc2b7b5 Mon Sep 17 00:00:00 2001
From: lazariv <taras.lazariv@tu-dresden.de>
Date: Wed, 25 Aug 2021 15:09:10 +0000
Subject: [PATCH] Make linter happy again

---
 .../docs/jobs_and_resources/alpha_centauri.md          |  1 -
 .../docs/software/distributed_training.md              |  6 +++---
 .../docs/software/hyperparameter_optimization.md       | 10 ++++++----
 3 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
index 40534bd3b..5324f550e 100644
--- a/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
+++ b/doc.zih.tu-dresden.de/docs/jobs_and_resources/alpha_centauri.md
@@ -21,7 +21,6 @@ It has 34 nodes, each with:
 The easiest way is using the [module system](../software/modules.md).
 The software for the `alpha` partition is available in `modenv/hiera` module environment.
 
-
 To check the available modules for `modenv/hiera`, use the command
 
 ```bash
diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index c43e28908..165dad639 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -5,10 +5,10 @@
 ### Distributed TensorFlow
 
 TODO
- 
+
 ### Distributed Pytorch
 
-**hint: just copied some old content as starting point**
+Just copied some old content as a starting point.
 
 #### Using Multiple GPUs with PyTorch
 
@@ -180,4 +180,4 @@ install command after loading the NCCL module:
 ```Bash
 module load NCCL/2.3.7-fosscuda-2018b
 HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 pip install --no-cache-dir horovod
-```
\ No newline at end of file
+```
diff --git a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
index b75d6031e..07ec3f6ce 100644
--- a/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
+++ b/doc.zih.tu-dresden.de/docs/software/hyperparameter_optimization.md
@@ -196,11 +196,13 @@ There are three script preparation steps for OmniOpt:
 
 ### Configure and Run OmniOpt
 
-As a starting point, configuring OmniOpt is done via a GUI at [https://imageseg.scads.ai/omnioptgui/](https://imageseg.scads.ai/omnioptgui/). 
-This GUI guides through the configuration process and as result the config file is created automatically according to the GUI input.
-If you are more familiar with using OmniOpt later on, this config file can be modified directly without using the GUI. 
+As a starting point, configuring OmniOpt is done via a GUI at
+[https://imageseg.scads.ai/omnioptgui/](https://imageseg.scads.ai/omnioptgui/).
+This GUI guides you through the configuration process and, as a result, the config file is
+created automatically according to your input. Once you are more familiar with OmniOpt,
+you can modify this config file directly without using the GUI.
 
-A screenshot of the GUI, including a properly configuration for the MNIST fashion example is shown below. 
+A screenshot of the GUI, including a proper configuration for the MNIST fashion example, is shown below.
 The GUI, in which the displayed values are already entered, can be reached [here](https://imageseg.scads.ai/omnioptgui/?maxevalserror=5&mem_per_worker=1000&projectname=mnist-fashion-optimization-set-1&partition=alpha&searchtype=tpe.suggest&objective_program=bash%20%2Fscratch%2Fws%2Fpath%2Fto%2Fyou%2Fscript%2Frun-mnist-fashion.sh%20(%24x_0)%20(%24x_1)%20(%24x_2)&param_0_type=hp.randint&param_1_type=hp.randint&number_of_parameters=3){:target="_blank"}.
 
 ![GUI for configuring OmniOpt](misc/hyperparameter_optimization-OmniOpt-GUI.png)
-- 
GitLab