Update the distributed_training.md Pytorch section
Merged
requested to merge lama722b--tu-dresden.de/hpc-compendium:lama722b--tu-dresden.de-preview-patch-83080 into preview
- Jan Frenzel authored
@@ -159,8 +159,8 @@ Python. To work around this issue and gain performance benefits of parallelism,
`torch.nn.DistributedDataParallel` is recommended. This involves a few more code changes to set up,
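The setup the diff refers to can be sketched as below — a minimal single-process example, assuming the `gloo` backend and a toy `Linear` model (both chosen here only for illustration; real jobs would use a launcher such as `torchrun` and typically the `nccl` backend on GPUs):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Rendezvous info normally supplied by the job launcher; set here for a
# self-contained single-process run (world_size=1, rank=0 is illustrative).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# Wrap an ordinary model; DDP synchronizes gradients across ranks
# during backward().
model = torch.nn.Linear(8, 2)
ddp_model = DDP(model)

optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
inputs = torch.randn(4, 8)
loss = ddp_model(inputs).sum()
loss.backward()
optimizer.step()

dist.destroy_process_group()
print(loss.item() == loss.item())  # loss is a finite scalar
```

With more than one process, each rank would run the same script with its own `rank` value and a shared `world_size`, usually provided via environment variables by `torchrun` or the batch system.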