diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index cfb8c6a38f2b3115aa690eb4615e02697f37fa17..f1879521b52714079b5d5cf044d1c2dfc710ce8c 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -141,8 +141,9 @@ wait
 !!! note
     This section is under construction
 
-Pytorch provides mutliple ways to acheieve data parallelism to train the deep learning models effieciently. These models are part of the `torch.distributed` sub-package that ships
-with the main deep learning package.
+PyTorch provides multiple ways to achieve data parallelism to train the deep learning models
+efficiently. These models are part of the `torch.distributed` sub-package that ships with the main
+deep learning package.
 
 Easiest method to quickly prototype if the model is trainable in a multi-GPU setting is to wrap the
 exisiting model with the `torch.nn.DataParallel` class as shown below,
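The patched paragraph refers to a `torch.nn.DataParallel` listing ("as shown below") that is not part of this hunk. A minimal sketch of such a wrapping, assuming a GPU node and using a made-up toy model and input shapes purely for illustration, could look like:

```python
import torch
import torch.nn as nn

# Toy model, used only for illustration.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across all visible GPUs,
    # replicates the model on each of them, and gathers the outputs
    # back on the default device.
    model = nn.DataParallel(model)
model = model.cuda()

# The training loop stays unchanged; the wrapper handles scatter/gather.
inputs = torch.randn(32, 128).cuda()
outputs = model(inputs)  # shape (32, 10)
```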