From 8ca343c456d86e2477b508e4bf43b366610f3e41 Mon Sep 17 00:00:00 2001
From: Jan Frenzel <jan.frenzel@tu-dresden.de>
Date: Tue, 19 Oct 2021 07:58:21 +0200
Subject: [PATCH] Apply 1 suggestion(s) to 1 file(s)

---
 doc.zih.tu-dresden.de/docs/software/distributed_training.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index cfb8c6a38..f1879521b 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -141,8 +141,9 @@ wait
 !!! note
     This section is under construction
 
-Pytorch provides mutliple ways to acheieve data parallelism to train the deep learning models effieciently. These models are part of the `torch.distributed` sub-package that ships 
-with the main deep learning package.
+PyTorch provides multiple ways to achieve data parallelism and thus train deep learning models
+efficiently. These methods are part of the `torch.distributed` sub-package that ships with the main
+deep learning package.
 
 The easiest method to quickly prototype whether the model is trainable in a multi-GPU setting is to wrap the existing model with the `torch.nn.DataParallel` class as shown below,
 
-- 
GitLab
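
For reference, the `torch.nn.DataParallel` example that the edited paragraph points to ("as shown
below") lies outside this hunk. The following is a minimal sketch of the wrapping it describes; the
toy model, tensor shapes, and batch size are illustrative assumptions, not taken from the
documentation:

```python
import torch
import torch.nn as nn

# Toy model for illustration; any existing nn.Module can be wrapped the same way.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the visible GPUs,
    # replicates the model on each of them, and gathers the outputs.
    model = nn.DataParallel(model)
model = model.to(device)

# The training loop stays unchanged; only the model wrapping differs.
inputs = torch.randn(32, 128, device=device)
outputs = model(inputs)
print(outputs.shape)  # torch.Size([32, 10])
```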