
Commit 564e541

kamahori and vfdev-5 authored

Fix link to pytorch documents (#1294)

* Fix link to pytorch documents
* Fix too long lines

Co-authored-by: vfdev <vfdev.5@gmail.com>

1 parent 766167e commit 564e541

File tree

4 files changed: +14 −12 lines changed

examples/notebooks/FashionMNIST.ipynb

Lines changed: 4 additions & 4 deletions

@@ -234,10 +234,10 @@
     "source": [
     "Explanation of Model Architecture\n",
     "\n",
-    "* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), the Convolutional layer is used to create a convolution kernel that is convolved with the layer input to produce a tensor of outputs.\n",
-    "* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), the Maxpooling layer is used to downsample an input representation keeping the most active pixels from the previous layer.\n",
-    "* The usual [Linear](https://pytorch.org/docs/stable/nn.html#linear) + [Dropout](https://pytorch.org/docs/stable/nn.html#dropout2d) layers to avoid overfitting and produce a 10-dim output.\n",
-    "* We had used [Relu](https://pytorch.org/docs/stable/nn.html#id27) Non Linearity for the model and [logsoftmax](https://pytorch.org/docs/stable/nn.html#log-softmax) at the last layer because we are going to use the [NLLL loss](https://pytorch.org/docs/stable/nn.html#nllloss).\n"
+    "* [Convolutional layers](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html), the Convolutional layer is used to create a convolution kernel that is convolved with the layer input to produce a tensor of outputs.\n",
+    "* [Maxpooling layers](https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html), the Maxpooling layer is used to downsample an input representation keeping the most active pixels from the previous layer.\n",
+    "* The usual [Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) + [Dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html) layers to avoid overfitting and produce a 10-dim output.\n",
+    "* We had used [Relu](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html) Non Linearity for the model and [logsoftmax](https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html) at the last layer because we are going to use the [NLLL loss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html).\n"
     ]
     },
     {
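The architecture described in the bullets above can be sketched as a runnable model. This is a minimal sketch, not the notebook's exact code: the channel counts, dropout rate, and hidden size here are illustrative assumptions; only the layer types (Conv2d, MaxPool2d, Linear, Dropout2d, ReLU, LogSoftmax with NLLLoss) come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """CNN in the spirit of the notebook: conv -> pool -> conv -> pool -> FC."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # 1x28x28 -> 16x28x28
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # 16x14x14 -> 32x14x14
        self.pool = nn.MaxPool2d(2, 2)                            # halves spatial size
        self.dropout = nn.Dropout2d(0.25)                         # regularization
        self.fc1 = nn.Linear(32 * 7 * 7, 64)
        self.fc2 = nn.Linear(64, 10)                              # 10-dim output

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # 28x28 -> 14x14
        x = self.pool(F.relu(self.conv2(x)))  # 14x14 -> 7x7
        x = self.dropout(x)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        # log_softmax at the last layer, paired with NLLLoss as the notebook notes
        return F.log_softmax(self.fc2(x), dim=1)

model = Net()
out = model(torch.randn(4, 1, 28, 28))          # batch of 4 fake FashionMNIST images
loss = nn.NLLLoss()(out, torch.tensor([0, 1, 2, 3]))
```

The log-softmax/NLLLoss pairing is equivalent to `nn.CrossEntropyLoss` on raw logits; the notebook simply makes the two pieces explicit.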

ignite/contrib/handlers/tensorboard_logger.py

Lines changed: 3 additions & 3 deletions

@@ -384,16 +384,16 @@ class TensorboardLogger(BaseLogger):

     otherwise, it falls back to using
     `PyTorch's SummaryWriter
-    <https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter>`_
+    <https://pytorch.org/docs/stable/tensorboard.html>`_
     (>=v1.2.0).

     Args:
         *args: Positional arguments accepted from
             `SummaryWriter
-            <https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter>`_.
+            <https://pytorch.org/docs/stable/tensorboard.html>`_.
         **kwargs: Keyword arguments accepted from
             `SummaryWriter
-            <https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter>`_.
+            <https://pytorch.org/docs/stable/tensorboard.html>`_.
             For example, `log_dir` to setup path to the directory where to log.

     Examples:
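The docstring above forwards `*args`/`**kwargs` to PyTorch's `SummaryWriter`. A minimal sketch of that underlying writer, assuming the `tensorboard` package is installed (the log directory and tag names below are throwaway choices for illustration):

```python
import os
import tempfile

from torch.utils.tensorboard import SummaryWriter

# Write a few scalars to a throwaway log directory.
log_dir = tempfile.mkdtemp()
writer = SummaryWriter(log_dir=log_dir)  # `log_dir` is the kwarg the docstring mentions
for step in range(3):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)
writer.close()

# SummaryWriter materializes an event file that TensorBoard reads.
event_files = [f for f in os.listdir(log_dir) if "tfevents" in f]
```

Any keyword `SummaryWriter` accepts (e.g. `flush_secs`, `comment`) can likewise be passed through `TensorboardLogger`.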

ignite/distributed/utils.py

Lines changed: 1 addition & 1 deletion

@@ -293,7 +293,7 @@ def train_fn(local_rank, a, b, c, d=12):
     | and `node_rank=0` are tolerated and ignored, otherwise an exception is raised.

     .. _dist.init_process_group: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group
-    .. _mp.start_processes: https://pytorch.org/docs/stable/_modules/torch/multiprocessing/spawn.html#spawn
+    .. _mp.start_processes: https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn
     .. _xmp.spawn: http://pytorch.org/xla/release/1.6/index.html#torch_xla.distributed.xla_multiprocessing.spawn
     .. _hvd_run: https://horovod.readthedocs.io/en/latest/api.html#module-horovod.run
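For context, the `dist.init_process_group` target linked above can be exercised even without spawning workers. A minimal sketch: a single-process group using the `gloo` backend and a file-store rendezvous (the backend choice and temp-file path are assumptions for illustration, not what `idist` itself hardcodes):

```python
import os
import tempfile

import torch.distributed as dist

# Single-process "group": rank 0 of world_size 1, rendezvousing via a file store
# instead of TCP, so no network setup is needed.
init_file = os.path.join(tempfile.mkdtemp(), "rendezvous")
dist.init_process_group(
    backend="gloo",
    init_method=f"file://{init_file}",
    rank=0,
    world_size=1,
)
rank, world_size = dist.get_rank(), dist.get_world_size()
dist.destroy_process_group()
```

In real multi-process runs, each worker calls `init_process_group` with its own `rank` and a shared `init_method`; helpers like `mp.spawn` and `idist.spawn` handle that fan-out.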

ignite/handlers/checkpoint.py

Lines changed: 6 additions & 4 deletions

@@ -99,8 +99,9 @@ class Checkpoint(Serializable):
     include_self (bool): Whether to include the `state_dict` of this object in the checkpoint. If `True`, then
         there must not be another object in ``to_save`` with key ``checkpointer``.

-    .. _DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel
-    .. _DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel
+    .. _DistributedDataParallel: https://pytorch.org/docs/stable/generated/
+        torch.nn.parallel.DistributedDataParallel.html
+    .. _DataParallel: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html

     Note:
         This class stores a single file as a dictionary of provided objects to save.

@@ -475,8 +476,9 @@ def load_objects(to_load: Mapping, checkpoint: Mapping, **kwargs) -> None:
     **kwargs: Keyword arguments accepted for `nn.Module.load_state_dict()`. Passing `strict=False` enables
         the user to load part of the pretrained model (useful for example, in Transfer Learning)

-    .. _DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel
-    .. _DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel
+    .. _DistributedDataParallel: https://pytorch.org/docs/stable/generated/
+        torch.nn.parallel.DistributedDataParallel.html
+    .. _DataParallel: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html

     """
     Checkpoint._check_objects(to_load, "load_state_dict")
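The `strict=False` behavior the docstring above describes (loading only part of a pretrained model) can be shown directly with `nn.Module.load_state_dict`. A minimal sketch; the tiny layer shapes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# A "pretrained" backbone, and a new model that reuses it plus a fresh head.
backbone = nn.Linear(8, 4)
new_model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 2))

# Remap backbone weights to the new model's "0." prefix; the head ("1.") has no
# saved weights, so strict=True would raise. strict=False tolerates the gap and
# reports what was skipped.
state = {f"0.{k}": v for k, v in backbone.state_dict().items()}
result = new_model.load_state_dict(state, strict=False)
```

`result.missing_keys` lists the head parameters left at their fresh initialization, which is exactly the transfer-learning pattern the docstring points at.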
