From e8c618e4769b7726e520090bd6c67e723d0f396a Mon Sep 17 00:00:00 2001
From: Jan Frenzel <jan.frenzel@tu-dresden.de>
Date: Thu, 3 Feb 2022 14:54:45 +0100
Subject: [PATCH] Replaced "page" as link text with a more descriptive text
 in distributed_training.md.

---
 doc.zih.tu-dresden.de/docs/software/distributed_training.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc.zih.tu-dresden.de/docs/software/distributed_training.md b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
index b3c6733bc..41cd1dab3 100644
--- a/doc.zih.tu-dresden.de/docs/software/distributed_training.md
+++ b/doc.zih.tu-dresden.de/docs/software/distributed_training.md
@@ -177,8 +177,8 @@ It is recommended to use
 [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html),
 instead of this class, to do multi-GPU training, even if there is only a single node.
 See: Use `nn.parallel.DistributedDataParallel` instead of multiprocessing or `nn.DataParallel`.
-Check the [page](https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead) and
-[Distributed Data Parallel](https://pytorch.org/docs/stable/notes/ddp.html#ddp).
+Check the [PyTorch CUDA page](https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead)
+and [Distributed Data Parallel](https://pytorch.org/docs/stable/notes/ddp.html#ddp).
 
 ??? example "Parallel Model"
 
-- 
GitLab
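
For context, the passage touched by this patch recommends wrapping models in `nn.parallel.DistributedDataParallel` rather than `nn.DataParallel`, even on a single node. Below is a minimal, hedged sketch of that pattern for a single node with multiple GPUs; the model, tensor shapes, port, and hyperparameters are illustrative and not taken from the ZIH documentation.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    # One process per GPU; NCCL is the usual backend for GPU training.
    # MASTER_ADDR/MASTER_PORT are placeholder rendezvous settings.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Illustrative toy model; DDP synchronizes gradients across ranks.
    model = torch.nn.Linear(10, 1).cuda(rank)
    ddp_model = DDP(model, device_ids=[rank])

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    inputs = torch.randn(32, 10, device=rank)
    targets = torch.randn(32, 1, device=rank)

    optimizer.zero_grad()
    loss = loss_fn(ddp_model(inputs), targets)
    loss.backward()   # gradient all-reduce happens during backward
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

The linked PyTorch notes explain the reasoning behind this recommendation: `DistributedDataParallel` uses one process per GPU and avoids the per-batch scatter/gather and GIL contention of the single-process `nn.DataParallel`.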