Commit
Update core/lightning.py to core/module.py (Lightning-AI#12740)
rohitgr7 committed May 12, 2022
1 parent 4011f37 commit 9e5e88e
Showing 59 changed files with 237 additions and 236 deletions.
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -91,6 +91,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Raise an error if there are insufficient training batches when using a float value of `limit_train_batches` ([#12885](https://github.com/PyTorchLightning/pytorch-lightning/pull/12885))


+- Changed `pytorch_lightning.core.lightning` to `pytorch_lightning.core.module` ([#12740](https://github.com/PyTorchLightning/pytorch-lightning/pull/12740))


-

### Deprecated
2 changes: 1 addition & 1 deletion docs/source/accelerators/accelerator_prepare.rst
@@ -50,7 +50,7 @@ This will make your code scale to any arbitrary number of GPUs or TPUs with Lightning
z = torch.Tensor(2, 3)
z = z.type_as(x)

-The :class:`~pytorch_lightning.core.lightning.LightningModule` knows what device it is on. You can access the reference via ``self.device``.
+The :class:`~pytorch_lightning.core.module.LightningModule` knows what device it is on. You can access the reference via ``self.device``.
Sometimes it is necessary to store tensors as module attributes. However, if they are not parameters they will
remain on the CPU even if the module gets moved to a new device. To prevent that and remain device agnostic,
register the tensor as a buffer in your modules' ``__init__`` method with :meth:`~torch.nn.Module.register_buffer`.
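
To make the device handling above concrete, here is a minimal sketch (the class name, buffer name, and tensor shapes are illustrative, not taken from the original docs):

.. code-block:: python

    import torch
    from pytorch_lightning import LightningModule


    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            # Buffers move together with the module, so this tensor always
            # lives on the same device as the parameters.
            self.register_buffer("sigma", torch.eye(3))

        def training_step(self, batch, batch_idx):
            # Tensors created on the fly should target the module's current device.
            noise = torch.randn(3, 3, device=self.device)
            ...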
2 changes: 1 addition & 1 deletion docs/source/accelerators/tpu_advanced.rst
@@ -26,7 +26,7 @@ Example:

.. code-block:: python
-from pytorch_lightning.core.lightning import LightningModule
+from pytorch_lightning.core.module import LightningModule
from torch import nn
from pytorch_lightning.trainer.trainer import Trainer
6 changes: 3 additions & 3 deletions docs/source/cli/lightning_cli_advanced_3.rst
@@ -73,7 +73,7 @@ To use shorthand notation, the options need to be registered beforehand. This ca
LightningCLI(auto_registry=True) # False by default
which will register all subclasses of :class:`torch.optim.Optimizer`, :class:`torch.optim.lr_scheduler._LRScheduler`,
-:class:`~pytorch_lightning.core.lightning.LightningModule`,
+:class:`~pytorch_lightning.core.module.LightningModule`,
:class:`~pytorch_lightning.core.datamodule.LightningDataModule`, :class:`~pytorch_lightning.callbacks.Callback`, and
:class:`~pytorch_lightning.loggers.LightningLoggerBase` across all imported modules. This includes those in your own
code.
@@ -108,7 +108,7 @@ file example that defines a couple of callbacks is the following:
...
Similar to the callbacks, any arguments in :class:`~pytorch_lightning.trainer.trainer.Trainer` and user extended
-:class:`~pytorch_lightning.core.lightning.LightningModule` and
+:class:`~pytorch_lightning.core.module.LightningModule` and
:class:`~pytorch_lightning.core.datamodule.LightningDataModule` classes that have as type hint a class can be configured
the same way using :code:`class_path` and :code:`init_args`.

@@ -209,7 +209,7 @@ A possible config file could be as follows:
...
Only model classes that are a subclass of :code:`MyModelBaseClass` would be allowed, and similarly only subclasses of
-:code:`MyDataModuleBaseClass`. If as base classes :class:`~pytorch_lightning.core.lightning.LightningModule` and
+:code:`MyDataModuleBaseClass`. If as base classes :class:`~pytorch_lightning.core.module.LightningModule` and
:class:`~pytorch_lightning.core.datamodule.LightningDataModule` are given, then the tool would allow any lightning
module and data module.
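
As a rough sketch of the base-class restriction described above (``MyModelBaseClass`` and ``MyDataModuleBaseClass`` are hypothetical, and the ``pytorch_lightning.utilities.cli`` import path is assumed for this release):

.. code-block:: python

    from pytorch_lightning import LightningDataModule, LightningModule
    from pytorch_lightning.utilities.cli import LightningCLI


    class MyModelBaseClass(LightningModule):
        ...


    class MyDataModuleBaseClass(LightningDataModule):
        ...


    # Only subclasses of these base classes can be selected via ``class_path`` in the config.
    cli = LightningCLI(
        MyModelBaseClass,
        MyDataModuleBaseClass,
        subclass_mode_model=True,
        subclass_mode_data=True,
    )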

2 changes: 1 addition & 1 deletion docs/source/common/checkpointing_intermediate.rst
@@ -120,7 +120,7 @@ What
Where
=====

-- It gives you the ability to specify the ``dirpath`` and ``filename`` for your checkpoints. Filename can also be dynamic so you can inject the metrics that are being logged using :meth:`~pytorch_lightning.core.lightning.LightningModule.log`.
+- It gives you the ability to specify the ``dirpath`` and ``filename`` for your checkpoints. Filename can also be dynamic so you can inject the metrics that are being logged using :meth:`~pytorch_lightning.core.module.LightningModule.log`.

|
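
Illustrating the bullet above, a minimal sketch (the ``val_loss`` metric name and the paths are assumptions; the metric must be logged from the LightningModule with ``self.log``):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint

    # ``val_loss`` is assumed to be logged via self.log("val_loss", ...) in the model.
    checkpoint_callback = ModelCheckpoint(
        dirpath="my/checkpoints/",
        filename="{epoch:02d}-{val_loss:.2f}",
        monitor="val_loss",
    )
    trainer = Trainer(callbacks=[checkpoint_callback])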
2 changes: 1 addition & 1 deletion docs/source/common/child_modules.rst
@@ -61,7 +61,7 @@ and we can train this using the ``Trainer``:
trainer = Trainer()
trainer.fit(lightning_module, train_dataloader, val_dataloader)
-And remember that the forward method should define the practical use of a :class:`~pytorch_lightning.core.lightning.LightningModule`.
+And remember that the forward method should define the practical use of a :class:`~pytorch_lightning.core.module.LightningModule`.
In this case, we want to use the ``LitAutoEncoder`` to extract image representations:

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/common/early_stopping.rst
@@ -34,7 +34,7 @@ The :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback
To enable it:

- Import :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback.
-- Log the metric you want to monitor using :meth:`~pytorch_lightning.core.lightning.LightningModule.log` method.
+- Log the metric you want to monitor using :meth:`~pytorch_lightning.core.module.LightningModule.log` method.
- Init the callback, and set ``monitor`` to the logged metric of your choice.
- Set the ``mode`` based on whether the monitored metric should be minimized or maximized.
- Pass the :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback to the :class:`~pytorch_lightning.trainer.trainer.Trainer` callbacks flag.
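
A minimal sketch of the steps above (``val_loss`` is an assumed metric name that the LightningModule logs via ``self.log``):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks.early_stopping import EarlyStopping

    # Stop when the logged ``val_loss`` has not improved; mode="min" means lower is better.
    early_stopping = EarlyStopping(monitor="val_loss", mode="min")
    trainer = Trainer(callbacks=[early_stopping])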
6 changes: 3 additions & 3 deletions docs/source/common/evaluation_intermediate.rst
@@ -23,7 +23,7 @@ Testing

Lightning allows the user to test their models with any compatible test dataloaders. This can be done before/after training
and is completely agnostic to the :meth:`~pytorch_lightning.trainer.trainer.Trainer.fit` call. The logic used here is defined under
-:meth:`~pytorch_lightning.core.lightning.LightningModule.test_step`.
+:meth:`~pytorch_lightning.core.module.LightningModule.test_step`.

Testing is performed using the ``Trainer`` object's ``.test()`` method.
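
For instance, a rough sketch (``model`` and ``test_loader`` are hypothetical objects, i.e. any LightningModule implementing ``test_step`` and a compatible dataloader):

.. code-block:: python

    from pytorch_lightning import Trainer

    trainer = Trainer()
    # Runs the model's test_step over the given dataloader and reports the logged metrics.
    trainer.test(model, dataloaders=test_loader)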

@@ -141,9 +141,9 @@ Validation
**********

Lightning allows the user to validate their models with any compatible ``val dataloaders``. This can be done before/after training.
-The logic associated to the validation is defined within the :meth:`~pytorch_lightning.core.lightning.LightningModule.validation_step`.
+The logic associated to the validation is defined within the :meth:`~pytorch_lightning.core.module.LightningModule.validation_step`.

-Apart from this ``.validate`` has same API as ``.test``, but would rely respectively on :meth:`~pytorch_lightning.core.lightning.LightningModule.validation_step` and :meth:`~pytorch_lightning.core.lightning.LightningModule.test_step`.
+Apart from this ``.validate`` has same API as ``.test``, but would rely respectively on :meth:`~pytorch_lightning.core.module.LightningModule.validation_step` and :meth:`~pytorch_lightning.core.module.LightningModule.test_step`.
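
For example, a sketch mirroring the ``.test()`` call above (again with hypothetical ``model`` and ``val_loader`` objects):

.. code-block:: python

    # Runs the model's validation_step over the given dataloader.
    trainer.validate(model, dataloaders=val_loader)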

.. note::
``.validate`` method uses the same validation logic being used under validation happening within
8 changes: 4 additions & 4 deletions docs/source/common/hyperparameters.rst
@@ -116,8 +116,8 @@ improve readability and reproducibility.
save_hyperparameters
""""""""""""""""""""

-Use :meth:`~pytorch_lightning.core.lightning.LightningModule.save_hyperparameters` within your
-:class:`~pytorch_lightning.core.lightning.LightningModule`'s ``__init__`` method.
+Use :meth:`~pytorch_lightning.core.module.LightningModule.save_hyperparameters` within your
+:class:`~pytorch_lightning.core.module.LightningModule`'s ``__init__`` method.
It will enable Lightning to store all the provided arguments under the ``self.hparams`` attribute.
These hyperparameters will also be stored within the model checkpoint, which simplifies model re-instantiation after training.
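
A minimal sketch of this pattern (the argument names are illustrative):

.. code-block:: python

    from pytorch_lightning import LightningModule


    class LitModel(LightningModule):
        def __init__(self, learning_rate: float = 1e-3, hidden_dim: int = 128):
            super().__init__()
            # Stores learning_rate and hidden_dim under self.hparams and in any checkpoint.
            self.save_hyperparameters()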

@@ -164,8 +164,8 @@ In this case, exclude them explicitly:
load_from_checkpoint
""""""""""""""""""""

-LightningModules that have hyperparameters automatically saved with :meth:`~pytorch_lightning.core.lightning.LightningModule.save_hyperparameters`
-can conveniently be loaded and instantiated directly from a checkpoint with :meth:`~pytorch_lightning.core.lightning.LightningModule.load_from_checkpoint`:
+LightningModules that have hyperparameters automatically saved with :meth:`~pytorch_lightning.core.module.LightningModule.save_hyperparameters`
+can conveniently be loaded and instantiated directly from a checkpoint with :meth:`~pytorch_lightning.core.module.LightningModule.load_from_checkpoint`:

.. code-block:: python