Commit 88cb3cb5 authored by Lalith Manjunath

Update the distributed_training.md Pytorch section

parent b865c42f
3 merge requests: !392 Merge preview into contrib guide for browser users, !378 Merge p in m, !367 Update the distributed_training.md Pytorch section
@@ -141,6 +141,9 @@ wait
!!! note
    This section is under construction
PyTorch provides multiple ways to achieve data parallelism and train deep learning models efficiently. These utilities are part of the `torch.distributed` sub-package that ships
with the main deep learning package.
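As a minimal sketch of one such approach, the snippet below wraps a toy model in `DistributedDataParallel` (DDP) from `torch.distributed`. To keep it runnable standalone it hard-codes a single-process `gloo` (CPU) process group; in a real job a launcher such as `torchrun` sets the rank, world size, and rendezvous variables, and each rank would use a GPU backend such as `nccl`. The model and tensor shapes here are illustrative, not from the original document.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# In a real run these are provided by the launcher (e.g. torchrun);
# we set them here so the sketch works as a single standalone process.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 1)              # toy model (assumed for illustration)
ddp_model = DDP(model)                     # gradients are all-reduced across ranks
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

# One dummy training step: backward() triggers the gradient all-reduce.
inputs = torch.randn(16, 8)
targets = torch.randn(16, 1)
loss = torch.nn.functional.mse_loss(ddp_model(inputs), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()

final_loss = loss.item()
dist.destroy_process_group()
```

With multiple processes, each rank would load a distinct shard of the data (typically via `torch.utils.data.distributed.DistributedSampler`) while DDP keeps the model replicas synchronized.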
#### Using Multiple GPUs with PyTorch
The example below shows how to solve that problem by using model parallelism, which in contrast to
......