diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index f1879521b52714079b5d5cf044d1c2dfc710ce8c..3d9dc0ce78e44e8d35205d8a18d9a06a9392eaaa 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -145,7 +145,8 @@ PyTorch provides multiple ways to achieve data parallelism to train the deep lea
 efficiently. These models are part of the `torch.distributed` sub-package that ships with the main
 deep learning package.
 
-Easiest method to quickly prototype if the model is trainable in a multi-GPU setting is to wrap the exisiting model with the `torch.nn.DataParallel` class as shown below,
+The easiest method to quickly prototype if the model is trainable in a multi-GPU setting is to wrap
+the existing model with the `torch.nn.DataParallel` class as shown below,
 
 ```python
 model = torch.nn.DataParalell(model)
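
For reference, a minimal sketch of how a model wrapped this way is typically exercised on a multi-GPU node, assuming the intended class is `torch.nn.DataParallel`; the toy model, batch shape, and device handling below are illustrative and not part of the changed file:

```python
import torch
import torch.nn as nn

# Toy model standing in for the user's network (illustrative only).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # DataParallel replicates the module on every visible GPU and
    # scatters the input batch across them during forward().
    model = nn.DataParallel(model)
model = model.to(device)

# The batch dimension (here 64) is what gets split across GPUs.
inputs = torch.randn(64, 128, device=device)
outputs = model(inputs)  # results are gathered back onto the default device
```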