From 4d47209e8c72f3c8f78a26cec6ed0e7ae6f2f43f Mon Sep 17 00:00:00 2001
From: Jan Frenzel <jan.frenzel@tu-dresden.de>
Date: Tue, 19 Oct 2021 07:58:55 +0200
Subject: [PATCH] Apply 1 suggestion(s) to 1 file(s)

---
 doc.zih.tu-dresden.de/docs/software/distributed_training.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index f1879521b..3d9dc0ce7 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -145,7 +145,8 @@ PyTorch provides multiple ways to achieve data parallelism to train the deep lea
 efficiently. These models are part of the `torch.distributed` sub-package that ships with the main
 deep learning package.
 
-Easiest method to quickly prototype if the model is trainable in a multi-GPU setting is to wrap the exisiting model with the `torch.nn.DataParallel` class as shown below,
+The easiest method to quickly check whether a model is trainable in a multi-GPU setting is to
+wrap the existing model with the `torch.nn.DataParallel` class, as shown below:
 
 ```python
 model = torch.nn.DataParallel(model)
-- 
GitLab
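
For reference, a minimal self-contained sketch of the `torch.nn.DataParallel` wrapping described in the patched paragraph above. The toy model, layer sizes, batch shape, and device handling are illustrative assumptions and are not part of the ZIH documentation page.

```python
import torch
import torch.nn as nn

# Illustrative toy model; the layer sizes are arbitrary assumptions.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

device = "cuda" if torch.cuda.is_available() else "cpu"

if torch.cuda.device_count() > 1:
    # DataParallel replicates the module onto all visible GPUs and splits each
    # input batch along dimension 0, gathering the outputs on the default GPU.
    model = nn.DataParallel(model)

model = model.to(device)

# Forward pass: the batch is scattered across the replicas transparently.
inputs = torch.randn(32, 128, device=device)
outputs = model(inputs)
print(outputs.shape)  # torch.Size([32, 10])
```

Note that `torch.nn.DataParallel` runs in a single process with multiple threads; for multi-node training, the `torch.distributed` approaches covered elsewhere in this documentation page are generally preferred.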