CI: Force docs warnings to be raised as errors (+ fix all) #1191

Merged Mar 20, 2020 · 52 commits (docs-warn into master)

Commits
6a12ecb  add argument to force warn (Borda, Mar 18, 2020)
02ab2c0  fix automodule error (Mar 18, 2020)
cd545b4  fix permalink error (Mar 18, 2020)
a47be92  fix indentation warning (Mar 18, 2020)
1bc105d  fix warning (Mar 18, 2020)
7d7c669  fix import warnings (Mar 18, 2020)
c18b5c2  fix duplicate label warning (Mar 18, 2020)
5a229e8  fix bullet point indentation warning (Mar 18, 2020)
98e12b7  fix duplicate label warning (Mar 18, 2020)
1c583fe  fix "import not top level" warning (Mar 19, 2020)
e43b28c  line too long (Mar 19, 2020)
1348e3d  fix indentation (Mar 19, 2020)
16314e0  fix bullet points indentation warning (Mar 19, 2020)
f3ff726  fix hooks warnings (Mar 19, 2020)
6f1175d  fix reference problem with excluded test_tube (Mar 19, 2020)
cdca1b7  fix indentation in print (Mar 19, 2020)
bc39d2e  change imports for trains logger (Mar 19, 2020)
d5bc488  remove pandas type annotation (Mar 19, 2020)
5d47b2c  Update pytorch_lightning/core/lightning.py (Mar 19, 2020)
8c8ace6  Merge branch 'docs-warn' of https://github.com/awaelchli/pytorch-ligh… (Mar 19, 2020)
24c6667  include bullet points inside note (Mar 19, 2020)
d51bb40  remove old quick start guide (unused) (Mar 19, 2020)
75600b2  fix unused warning (Mar 19, 2020)
85c0283  fix formatting (Mar 19, 2020)
2a7def8  fix duplicate label issue (Mar 19, 2020)
1483708  fix duplicate label warning (replaced by class ref) (Mar 19, 2020)
bce722a  fix tick (Mar 19, 2020)
0108e79  fix indentation warnings (Mar 19, 2020)
84551eb  docstring ticks (Mar 19, 2020)
e07cf5c  remove obsolete docstring typing (Mar 19, 2020)
46d74f8  Revert "remove old quick start guide (unused)" (Mar 19, 2020)
662f2db  added old quick start guide to navigation (Mar 19, 2020)
997db73  remove unused tutorials file (Mar 19, 2020)
4565767  Merge branch 'master' into docs-warn (Mar 19, 2020)
718cd53  ignore some modules that got deprecated and are not used anymore (Mar 19, 2020)
25f4209  fix duplicate label warning (Mar 20, 2020)
b9d7d8b  move examples doc and exclude pl_examples from autodoc (Mar 20, 2020)
6da1ce7  fix formatting for configure_optimizer (Mar 20, 2020)
274e688  fix no blank line warnings (Mar 20, 2020)
7e4a673  fix "see also" labels and add paramref extension (Mar 20, 2020)
6cdf5ad  fix more reference problems (Mar 20, 2020)
7cc3517  fix multi-gpu reference (Mar 20, 2020)
3403917  fix weird warning (Mar 20, 2020)
ef06878  fix indentation and unrecognized characters in code block (Mar 20, 2020)
03556d5  fix warning "... not included in toctree" (Mar 20, 2020)
741cd63  fix PIL import error (Mar 20, 2020)
0209304  fix duplicate target "here" warning (Mar 20, 2020)
6bab26f  fix broken link (Mar 20, 2020)
0111a2f  revert accidentally moved pl_examples (Mar 20, 2020)
d2f90e3  changelog (Mar 20, 2020)
359765f  stdout (Mar 20, 2020)
8401f35  note some things to know (Mar 20, 2020)
Files changed
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -56,7 +56,7 @@ references:
pip install -r requirements.txt --user
sudo pip install -r docs/requirements.txt
# sphinx-apidoc -o ./docs/source ./pytorch_lightning **/test_* --force --follow-links
- cd docs; make clean ; make html --debug --jobs 2
+ cd docs; make clean ; make html --debug --jobs 2 SPHINXOPTS="-W"

jobs:

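The `SPHINXOPTS="-W"` variable is the crux of this PR: Sphinx's `-W` flag promotes every warning to an error, so the CI docs job now fails on any new warning instead of silently accumulating them. A minimal sketch for reproducing the CI build locally (a sketch, assuming the repository root as working directory and docs/requirements.txt already installed):

    # Run the docs build the same way CI does, failing on any Sphinx warning.
    import subprocess

    subprocess.run(
        ["make", "html", "--debug", "--jobs", "2", "SPHINXOPTS=-W"],
        cwd="docs",
        check=True,  # raises CalledProcessError when -W turns a warning into an error
    )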
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -31,6 +31,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Fixed bug related to type checking of `ReduceLROnPlateau` lr schedulers ([#1114](https://github.com/PyTorchLightning/pytorch-lightning/issues/1114))
- Fixed a bug to ensure lightning checkpoints to be backward compatible ([#1132](https://github.com/PyTorchLightning/pytorch-lightning/pull/1132))
- Fixed all warnings and errors in the docs build process ([#1191](https://github.com/PyTorchLightning/pytorch-lightning/pull/1191))

## [0.7.1] - 2020-03-07

3 changes: 2 additions & 1 deletion docs/requirements.txt
@@ -8,4 +8,5 @@ sphinxcontrib-fulltoc
sphinxcontrib-mockautodoc
git+https://github.com/PytorchLightning/lightning_sphinx_theme.git
# pip_shims
- sphinx-autodoc-typehints
+ sphinx-autodoc-typehints
+ sphinx-paramlinks
2 changes: 2 additions & 0 deletions docs/source/callbacks.rst
@@ -1,6 +1,8 @@
.. role:: hidden
:class: hidden-section

+ .. _callbacks:

Callbacks
=========

33 changes: 28 additions & 5 deletions docs/source/conf.py
@@ -73,7 +73,7 @@
# ones.
extensions = [
'sphinx.ext.autodoc',
-     'sphinxcontrib.mockautodoc',
+     # 'sphinxcontrib.mockautodoc',  # raises error: directive 'automodule' is already registered ...
# 'sphinxcontrib.fulltoc', # breaks pytorch-theme with unexpected kw argument 'titles_only'
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
@@ -87,6 +87,7 @@
# 'm2r',
'nbsphinx',
'sphinx_autodoc_typehints',
+     'sphinx_paramlinks',
]

# Add any paths that contain templates here, relative to this directory.
@@ -125,7 +126,20 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
- exclude_patterns = ['*.test_*']
+ exclude_patterns = [
+     'pytorch_lightning.rst',
+     'pl_examples.*',
+     'modules.rst',
+
+     # deprecated/renamed:
+     'pytorch_lightning.loggers.comet_logger.rst',  # TODO: remove in v0.8.0
+     'pytorch_lightning.loggers.mlflow_logger.rst',  # TODO: remove in v0.8.0
+     'pytorch_lightning.loggers.test_tube_logger.rst',  # TODO: remove in v0.8.0
+     'pytorch_lightning.callbacks.pt_callbacks.*',  # TODO: remove in v0.8.0
+     'pytorch_lightning.pt_overrides.*',  # TODO: remove in v0.8.0
+     'pytorch_lightning.root_module.*',  # TODO: remove in v0.8.0
+     'pytorch_lightning.logging.*',  # TODO: remove in v0.8.0
+ ]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = None
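For context, `exclude_patterns` entries are glob-style patterns matched against source file names relative to the source directory, which is how the `*` entries above blank out whole deprecated package trees. A rough sketch of the matching rule (a simplification; Sphinx's own matching utilities do the real work, with fnmatch standing in here):

    # Simplified stand-in for how Sphinx applies exclude_patterns.
    from fnmatch import fnmatch

    exclude_patterns = ['pl_examples.*', 'pytorch_lightning.logging.*', 'modules.rst']

    def is_excluded(source_file: str) -> bool:
        return any(fnmatch(source_file, pattern) for pattern in exclude_patterns)

    print(is_excluded('pl_examples.basic_examples.rst'))  # True: whole tree excluded
    print(is_excluded('trainer.rst'))                     # False: still documented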
@@ -297,8 +311,17 @@ def setup(app):
MOCK_REQUIRE_PACKAGES.append(pkg.rstrip())

# TODO: better parse from package since the import name and package name may differ
- MOCK_MANUAL_PACKAGES = ['torch', 'torchvision', 'test_tube',
-                         'mlflow', 'comet_ml', 'wandb', 'neptune', 'trains']
+ MOCK_MANUAL_PACKAGES = [
+     'torch',
+     'torchvision',
+     'PIL',
+     'test_tube',
+     'mlflow',
+     'comet_ml',
+     'wandb',
+     'neptune',
+     'trains',
+ ]
autodoc_mock_imports = MOCK_REQUIRE_PACKAGES + MOCK_MANUAL_PACKAGES
# for mod_name in MOCK_REQUIRE_PACKAGES:
# sys.modules[mod_name] = mock.Mock()
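Mocking is what lets autodoc import every pytorch_lightning module on a docs builder that has none of the heavy dependencies installed; adding 'PIL' to the list is the fix behind the "fix PIL import error" commit. The commented-out loop above hints at the mechanism; a minimal sketch of the same idea using unittest.mock (Sphinx's real autodoc_mock_imports machinery is more elaborate):

    # Any mocked module name resolves to a dummy object instead of raising
    # ImportError when autodoc imports the code it documents.
    import sys
    from unittest import mock

    for mod_name in ['torch', 'torchvision', 'PIL']:
        sys.modules[mod_name] = mock.MagicMock()

    import torch  # succeeds even without torch installed
    print(type(torch.nn))  # <class 'unittest.mock.MagicMock'>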
@@ -369,7 +392,7 @@ def find_source():
# This value determines the text for the permalink; it defaults to "¶". Set it to None or the empty
# string to disable permalinks.
# https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-html_add_permalinks
- html_add_permalinks = True
+ html_add_permalinks = "¶"

# True to prefix each section label with the name of the document it is in, followed by a colon.
# For example, index:Introduction for a section called Introduction that appears in document index.rst.
29 changes: 16 additions & 13 deletions docs/source/debugging.rst
@@ -8,6 +8,9 @@ This flag runs a "unit test" by running 1 training batch and 1 validation batch.
The point is to detect any bugs in the training/validation loop without having to wait for
a full epoch to crash.

+ (See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.fast_dev_run`
+ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python

trainer = pl.Trainer(fast_dev_run=True)
@@ -16,6 +19,9 @@ Inspect gradient norms
----------------------
Logs (to a logger) the norm of each weight matrix.

+ (See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.track_grad_norm`
+ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python

# the 2-norm
@@ -25,7 +31,8 @@ Log GPU usage
-------------
Logs (to a logger) the GPU usage for each GPU on the master machine.

- (See: :ref:`trainer`)
+ (See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.log_gpu_memory`
+ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python

@@ -37,7 +44,8 @@ Make model overfit on subset of data
A good debugging technique is to take a tiny portion of your data (say 2 samples per class),
and try to get your model to overfit. If it can't, it's a sign it won't work with large datasets.

- (See: :ref:`trainer`)
+ (See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.overfit_pct`
+ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python

@@ -48,28 +56,23 @@ Print the parameter count by layer
Whenever the .fit() function gets called, the Trainer will print the weights summary for the LightningModule.
To disable this behavior, turn off this flag:

- (See: :ref:`trainer.weights_summary`)
+ (See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.weights_summary`
+ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python

trainer = pl.Trainer(weights_summary=None)

- Print which gradients are nan
- -----------------------------
- Prints the tensors with nan gradients.
-
- (See: :meth:`trainer.print_nan_grads`)
-
- .. code-block:: python
-
-     trainer = pl.Trainer(print_nan_grads=False)

Set the number of validation sanity steps
-----------------------------------------
Lightning runs a few steps of validation in the beginning of training.
This avoids crashing in the validation loop sometime deep into a lengthy training loop.

+ (See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.num_sanity_val_steps`
+ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python

# DEFAULT
- trainer = Trainer(nb_sanity_val_steps=5)
+ trainer = Trainer(num_sanity_val_steps=5)
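The `:paramref:` role used in these replacements comes from the sphinx-paramlinks extension enabled in conf.py earlier in this diff. Unlike the old `:ref:`trainer`` links, which all competed for one shared label and kept producing duplicate/undefined-label warnings, `:paramref:` and `:class:` resolve through the Python domain to an exact object. A sketch of the pattern (the rst usage shown in comments mirrors the diff above):

    # conf.py, as in this PR: sphinx_paramlinks provides the :paramref: role.
    extensions = [
        'sphinx.ext.autodoc',
        'sphinx_paramlinks',
    ]

    # In an .rst file or docstring you can then link straight to one parameter:
    #   :paramref:`~pytorch_lightning.trainer.trainer.Trainer.fast_dev_run`
    # which points at that specific argument's documentation instead of a
    # hand-maintained :ref: label for the whole Trainer page.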
6 changes: 4 additions & 2 deletions docs/source/early_stopping.rst
@@ -11,7 +11,8 @@ Enable Early Stopping
---------------------
There are two ways to enable early stopping.

- .. seealso:: :ref:`trainer`
+ .. seealso::
+     :class:`~pytorch_lightning.trainer.trainer.Trainer`

.. code-block:: python

@@ -35,4 +36,5 @@ To disable early stopping pass ``False`` to the `early_stop_callback`.
Note that ``None`` will not disable early stopping but will lead to the
default behaviour.

- .. seealso:: :ref:`trainer`
+ .. seealso::
+     :class:`~pytorch_lightning.trainer.trainer.Trainer`
18 changes: 0 additions & 18 deletions docs/source/examples.rst

This file was deleted.

20 changes: 13 additions & 7 deletions docs/source/experiment_logging.rst
@@ -7,7 +7,8 @@ Comet.ml
`Comet.ml <https://www.comet.ml/site/>`_ is a third-party logger.
To use CometLogger as your logger do the following.

- .. seealso:: :ref:`comet` docs.
+ .. seealso::
+     :class:`~pytorch_lightning.loggers.CometLogger` docs.

.. code-block:: python

@@ -38,7 +39,8 @@ Neptune.ai
`Neptune.ai <https://neptune.ai/>`_ is a third-party logger.
To use Neptune.ai as your logger do the following.

- .. seealso:: :ref:`neptune` docs.
+ .. seealso::
+     :class:`~pytorch_lightning.loggers.NeptuneLogger` docs.

.. code-block:: python

@@ -68,7 +70,8 @@ allegro.ai TRAINS
`allegro.ai <https://github.com/allegroai/trains/>`_ is a third-party logger.
To use TRAINS as your logger do the following.

- .. seealso:: :ref:`trains` docs.
+ .. seealso::
+     :class:`~pytorch_lightning.loggers.TrainsLogger` docs.

.. code-block:: python

@@ -95,7 +98,8 @@ Tensorboard

To use `Tensorboard <https://pytorch.org/docs/stable/tensorboard.html>`_ as your logger do the following.

- .. seealso:: TensorBoardLogger :ref:`tf-logger`
+ .. seealso::
+     :class:`~pytorch_lightning.loggers.TensorBoardLogger` docs.

.. code-block:: python

@@ -121,7 +125,8 @@ Test Tube
`Test Tube <https://github.com/williamFalcon/test-tube>`_ is a tensorboard logger but with nicer file structure.
To use TestTube as your logger do the following.

- .. seealso:: TestTube :ref:`testTube`
+ .. seealso::
+     :class:`~pytorch_lightning.loggers.TestTubeLogger` docs.

.. code-block:: python

@@ -146,7 +151,8 @@ Wandb
`Wandb <https://www.wandb.com/>`_ is a third-party logger.
To use Wandb as your logger do the following.

- .. seealso:: :ref:`wandb` docs
+ .. seealso::
+     :class:`~pytorch_lightning.loggers.WandbLogger` docs.

.. code-block:: python

@@ -167,7 +173,7 @@ The Wandb logger is available anywhere except ``__init__`` in your LightningModule


Multiple Loggers
- ^^^^^^^^^^^^^^^^^
+ ^^^^^^^^^^^^^^^^

PyTorch-Lightning supports use of multiple loggers, just pass a list to the `Trainer`.

3 changes: 2 additions & 1 deletion docs/source/experiment_reporting.rst
@@ -22,7 +22,8 @@ Control log writing frequency
Writing to a logger can be expensive. In Lightning you can set the interval at which you
want to log using this trainer flag.

- .. seealso:: :ref:`trainer`
+ .. seealso::
+     :class:`~pytorch_lightning.trainer.trainer.Trainer`

.. code-block:: python

3 changes: 2 additions & 1 deletion docs/source/fast_training.rst
@@ -16,7 +16,8 @@ Force training for min or max epochs
-------------------------------------
It can be useful to force training for a minimum number of epochs or limit to a max number.

- .. seealso:: :ref:`trainer`
+ .. seealso::
+     :class:`~pytorch_lightning.trainer.trainer.Trainer`

.. code-block:: python

1 change: 1 addition & 0 deletions docs/source/hooks.rst
@@ -2,6 +2,7 @@ Hooks
=====

.. automodule:: pytorch_lightning.core.hooks
+     :noindex:

Hooks lifecycle
---------------
30 changes: 28 additions & 2 deletions docs/source/index.rst
@@ -11,6 +11,7 @@ PyTorch Lightning Documentation
:name: start
:caption: Start Here

+     new-project
introduction_guide

.. toctree::
@@ -24,13 +25,24 @@
loggers
trainer


.. toctree::
:maxdepth: 1
:name: Community Examples
:caption: Community Examples

- examples
+ Contextual Emotion Detection (DoubleDistilBert) <https://github.com/PyTorchLightning/emotion_transformer>
+ Generative Adversarial Network <https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0>
+ Hyperparameter optimization with Optuna <https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py>
+ Image Inpainting using Partial Convolutions <https://github.com/ryanwongsa/Image-Inpainting>
+ MNIST on TPU <https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_>
+ NER (transformers, TPU) <https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D>
+ NeuralTexture (CVPR) <https://github.com/PyTorchLightning/neuraltexture>
+ Recurrent Attentive Neural Process <https://github.com/PyTorchLightning/attentive-neural-processes>
+ Siamese Nets for One-shot Image Recognition <https://github.com/PyTorchLightning/Siamese-Neural-Networks>
+ Speech Transformers <https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning>
+ Transformers transfer learning (Huggingface) <https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf>
+ Transformers text classification <https://github.com/ricardorei/lightning-text-classification>
+ VAE Library of over 18+ VAE flavors <https://github.com/AntixK/PyTorch-VAE>

.. toctree::
:maxdepth: 1
@@ -83,3 +95,17 @@ Indices and tables
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`



+ .. This is here to make sphinx aware of the modules but not throw an error/warning
+ .. toctree::
+    :hidden:
+
+    pytorch_lightning.core
+    pytorch_lightning.callbacks
+    pytorch_lightning.loggers
+    pytorch_lightning.overrides
+    pytorch_lightning.profiler
+    pytorch_lightning.trainer
+    pytorch_lightning.utilities
7 changes: 5 additions & 2 deletions docs/source/introduction_guide.rst
@@ -472,7 +472,7 @@ First, change the runtime to TPU (and reinstall lightning).

Next, install the required xla library (adds support for PyTorch on TPUs)

- .. code-block::
+ .. code-block:: python

import collections
from datetime import datetime, timedelta
@@ -504,6 +504,8 @@ Next, install the required xla library (adds support for PyTorch on TPUs)
update = threading.Thread(target=update_server_xrt)
update.start()

+ .. code-block::

# Install Colab TPU compat PyTorch/TPU wheels and dependencies
!pip uninstall -y torch torchvision
!gsutil cp "$DIST_BUCKET/$TORCH_WHEEL" .
@@ -981,7 +983,8 @@ And pass the callbacks into the trainer

Trainer(callbacks=[MyPrintingCallback()])

- .. note:: See full list of 12+ hooks in the `Callback docs <callbacks.rst#callback-class>`_
+ .. note::
+     See full list of 12+ hooks in the :ref:`callbacks`.

---------
