diff --git a/CHANGELOG.md b/CHANGELOG.md
index e6a45c98671fd..dd874c7c724a8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -181,6 +181,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added support for `torch.autograd.set_detect_anomaly` through `Trainer` constructor argument `detect_anomaly` ([#9848](https://github.com/PyTorchLightning/pytorch-lightning/pull/9848))
 
 
+- Added `enable_model_summary` flag to Trainer ([#9699](https://github.com/PyTorchLightning/pytorch-lightning/pull/9699))
+
+
 ### Changed
 
 - Module imports are now catching `ModuleNotFoundError` instead of `ImportError` ([#9867](https://github.com/PyTorchLightning/pytorch-lightning/pull/9867))
@@ -344,6 +347,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Deprecated `pytorch_lightning.core.decorators.parameter_validation` in favor of `pytorch_lightning.utilities.parameter_tying.set_shared_parameters` ([#9525](https://github.com/PyTorchLightning/pytorch-lightning/pull/9525))
 
 
+- Deprecated passing `weights_summary` to the `Trainer` constructor in favor of adding the `ModelSummary` callback with `max_depth` directly to the list of callbacks ([#9699](https://github.com/PyTorchLightning/pytorch-lightning/pull/9699))
+
+
 ### Removed
 
 - Removed deprecated `metrics` ([#8586](https://github.com/PyTorchLightning/pytorch-lightning/pull/8586/))
diff --git a/benchmarks/test_basic_parity.py b/benchmarks/test_basic_parity.py
index e9442dd26e65b..2144be39394cb 100644
--- a/benchmarks/test_basic_parity.py
+++ b/benchmarks/test_basic_parity.py
@@ -159,7 +159,7 @@ def lightning_loop(cls_model, idx, device_type: str = "cuda", num_epochs=10):
         # as the first run is skipped, no need to run it long
         max_epochs=num_epochs if idx > 0 else 1,
         enable_progress_bar=False,
-        weights_summary=None,
+        enable_model_summary=False,
         gpus=1 if device_type == "cuda" else 0,
         checkpoint_callback=False,
         logger=False,
diff --git a/docs/source/common/debugging.rst b/docs/source/common/debugging.rst
index 7a11863c0e1bf..6e5a721dd092a 100644
--- a/docs/source/common/debugging.rst
+++ b/docs/source/common/debugging.rst
@@ -95,11 +95,14 @@ Print a summary of your LightningModule
 ---------------------------------------
 Whenever the ``.fit()`` function gets called, the Trainer will print the weights summary for the LightningModule.
 By default it only prints the top-level modules. If you want to show all submodules in your network, use the
-`'full'` option:
+``max_depth`` option:
 
 .. testcode::
 
-    trainer = Trainer(weights_summary="full")
+    from pytorch_lightning.callbacks import ModelSummary
+
+    trainer = Trainer(callbacks=[ModelSummary(max_depth=-1)])
+
 
 You can also display the intermediate input- and output sizes of all your layers by setting the
 ``example_input_array`` attribute in your LightningModule. It will print a table like this
@@ -115,8 +118,9 @@ You can also display the intermediate input- and output sizes of all your layers
 when you call ``.fit()`` on the Trainer. This can help you find bugs in the composition of your layers.
 
 See Also:
-    - :paramref:`~pytorch_lightning.trainer.trainer.Trainer.weights_summary` Trainer argument
-    - :class:`~pytorch_lightning.core.memory.ModelSummary`
+    - :class:`~pytorch_lightning.callbacks.model_summary.ModelSummary`
+    - :func:`~pytorch_lightning.utilities.model_summary.summarize`
+    - :class:`~pytorch_lightning.utilities.model_summary.ModelSummary`
 
 ----------------
diff --git a/docs/source/common/trainer.rst b/docs/source/common/trainer.rst
index 42615383b5c1f..41fd89c3e7ee0 100644
--- a/docs/source/common/trainer.rst
+++ b/docs/source/common/trainer.rst
@@ -1589,6 +1589,11 @@ Example::
 
 weights_summary
 ^^^^^^^^^^^^^^^
 
+.. warning:: ``weights_summary`` is deprecated in v1.5 and will be removed in v1.7. Please pass :class:`~pytorch_lightning.callbacks.model_summary.ModelSummary`
+    directly to the Trainer's ``callbacks`` argument instead. To disable the model summary,
+    pass ``enable_model_summary=False`` to the Trainer.
+
+
 .. raw:: html