Update the distributed_training.md Pytorch section
Merged
requested to merge lama722b--tu-dresden.de/hpc-compendium:lama722b--tu-dresden.de-preview-patch-83080 into preview
- Jan Frenzel authored
@@ -145,7 +145,8 @@ PyTorch provides multiple ways to achieve data parallelism to train the deep lea
The easiest way to quickly prototype whether a model is trainable in a multi-GPU setting is to wrap the existing model with the `torch.nn.DataParallel` class, as shown below.
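The code the diff refers to is not included in this excerpt; a minimal sketch of the pattern, using a hypothetical `nn.Sequential` model purely for illustration, might look like this:

```python
import torch
import torch.nn as nn

# Hypothetical model, stands in for whatever network is being trained.
model = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Linear(50, 10),
)

# DataParallel splits each input batch across all visible GPUs,
# runs the forward pass on each replica, and gathers the outputs
# on the default device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

model = model.to("cuda")
```

After wrapping, the training loop stays unchanged; note that `torch.nn.DataParallel` is single-process and is generally outperformed by `torch.nn.parallel.DistributedDataParallel`, so it is best suited for quick prototyping rather than production training.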