Commit 5ca972d: complete PR

rohitgr7 committed Oct 25, 2021
1 parent dab99af commit 5ca972d
Showing 3 changed files with 7 additions and 16 deletions.
15 changes: 5 additions & 10 deletions CHANGELOG.md
@@ -247,10 +247,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- The `trainer.lightning_module` reference is now properly set at the very beginning of the run ([#8536](https://github.com/PyTorchLightning/pytorch-lightning/pull/8536))


-- The `Trainer` functions `reset_{train,val,test,predict}_dataloader`, `reset_train_val_dataloaders`, and `request_dataloader` `model` argument is now optional ([#8536](https://github.com/PyTorchLightning/pytorch-lightning/pull/8536))
+- Load ckpt path when model provided in validate/test/predict ([#8352](https://github.com/PyTorchLightning/pytorch-lightning/pull/8352))


-- Load ckpt path when model provided in validate/test/predict ([#8352](https://github.com/PyTorchLightning/pytorch-lightning/pull/8352)))
+- The `Trainer` functions `reset_{train,val,test,predict}_dataloader`, `reset_train_val_dataloaders`, and `request_dataloader` `model` argument is now optional ([#8536](https://github.com/PyTorchLightning/pytorch-lightning/pull/8536))


- Saved checkpoints will no longer use the type of a `Callback` as the key to avoid issues with unpickling ([#6886](https://github.com/PyTorchLightning/pytorch-lightning/pull/6886))
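
> Editorial illustration of the `ckpt_path` entry above (#8352), not part of the diff. After the change, passing both a model and a checkpoint path to `validate`/`test`/`predict` restores the checkpoint before evaluating instead of ignoring it. `BoringModel` and the loaders below are a minimal stand-in:

```python
import torch
from pytorch_lightning import LightningModule, Trainer


class BoringModel(LightningModule):
    """Tiny stand-in; any LightningModule behaves the same way here."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).pow(2).mean()

    def validation_step(self, batch, batch_idx):
        self.log("val_loss", self.layer(batch).pow(2).mean())

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def loader():
    # A plain tensor works as a map-style dataset for this sketch.
    return torch.utils.data.DataLoader(torch.randn(64, 4), batch_size=8)


model = BoringModel()
trainer = Trainer(max_epochs=1)
trainer.fit(model, train_dataloaders=loader(), val_dataloaders=loader())

# The default ModelCheckpoint ran during fit, so "best" resolves to its
# best checkpoint; those weights are loaded before validation runs.
trainer.validate(model, dataloaders=loader(), ckpt_path="best")
```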
@@ -281,9 +281,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Removed restrictions in the trainer that loggers can only log from rank 0. Existing logger behavior has not changed. ([#8608](https://github.com/PyTorchLightning/pytorch-lightning/pull/8608))


-- Disable quantization aware training observers by default during validating/testing/predicting stages ([#8540](https://github.com/PyTorchLightning/pytorch-lightning/pull/8540))


- `Trainer.request_dataloader` now takes a `RunningStage` enum instance ([#8858](https://github.com/PyTorchLightning/pytorch-lightning/pull/8858))
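
> Editorial sketch for the relocated #8540 entry above, assuming the stage-name strings accepted by `observer_enabled_stages` match those used in the test diff further down: observers now collect quantization statistics only while fitting by default, and other stages must be opted back in.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import QuantizationAwareTraining

# Opt validation back into observer statistics collection; with no argument
# the new default enables observers during fitting only.
qcb = QuantizationAwareTraining(observer_enabled_stages=("train", "validate"))
trainer = Trainer(callbacks=[qcb])
```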


@@ -330,13 +327,14 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- `pytorch_lightning.utilities.grads.grad_norm` now raises an exception if parameter `norm_type <= 0` ([#9765](https://github.com/PyTorchLightning/pytorch-lightning/pull/9765))
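
> A short editorial sketch of the `grad_norm` entry above; the exact exception class is not specified in the changelog, hence the broad `except`:

```python
import torch
from torch import nn
from pytorch_lightning.utilities.grads import grad_norm

model = nn.Linear(4, 2)
model(torch.randn(8, 4)).sum().backward()

# Returns a dict of per-parameter gradient norms plus a total.
print(grad_norm(model, norm_type=2))

# After #9765, a non-positive norm_type is rejected up front.
try:
    grad_norm(model, norm_type=0)
except Exception as err:
    print(f"rejected: {err}")
```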



- Updated error message for interactive incompatible plugins ([#9896](https://github.com/PyTorchLightning/pytorch-lightning/pull/9896))


- Updated several places in the loops and trainer to access `training_type_plugin` directly instead of `accelerator` ([#9901](https://github.com/PyTorchLightning/pytorch-lightning/pull/9901))


+- Disable quantization aware training observers by default during validating/testing/predicting stages ([#8540](https://github.com/PyTorchLightning/pytorch-lightning/pull/8540))


### Deprecated

@@ -411,6 +409,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Deprecated `GPUStatsMonitor` and `XLAStatsMonitor` in favor of `DeviceStatsMonitor` callback ([#9924](https://github.com/PyTorchLightning/pytorch-lightning/pull/9924))
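
> Editorial migration sketch for the deprecation above; `DeviceStatsMonitor` is accelerator-agnostic, so one callback replaces both deprecated ones:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import DeviceStatsMonitor

# Instead of GPUStatsMonitor() or XLAStatsMonitor():
trainer = Trainer(callbacks=[DeviceStatsMonitor()])
```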


### Removed

- Removed deprecated `metrics` ([#8586](https://github.com/PyTorchLightning/pytorch-lightning/pull/8586/))
@@ -613,7 +612,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed `LearningRateMonitor` logging with multiple param groups optimizer with no scheduler ([#10044](https://github.com/PyTorchLightning/pytorch-lightning/pull/10044))
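
> Editorial sketch of the setup the #10044 fix targets: a multi-param-group optimizer with no scheduler, monitored per step. The param-group layout in the comment is illustrative.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import LearningRateMonitor

# Inside a LightningModule, configure_optimizers would return something like:
#     torch.optim.SGD(
#         [
#             {"params": self.backbone.parameters(), "lr": 1e-2},
#             {"params": self.head.parameters(), "lr": 1e-1},
#         ]
#     )
# The fixed monitor logs one learning rate per param group even though
# no scheduler is attached.
trainer = Trainer(callbacks=[LearningRateMonitor(logging_interval="step")])
```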



- Fixed undesired side effects being caused by `Trainer` patching dataloader methods on the `LightningModule` ([#9764](https://github.com/PyTorchLightning/pytorch-lightning/pull/9764))


@@ -935,9 +933,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed `lr_scheduler` with metric (e.g. `torch.optim.lr_scheduler.ReduceLROnPlateau`) when using `automatic_optimization = False` ([#7643](https://github.com/PyTorchLightning/pytorch-lightning/pull/7643))
- Fixed `DeepSpeed` breaking with no schedulers ([#8580](https://github.com/PyTorchLightning/pytorch-lightning/pull/8580))
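
> Editorial sketch for the `ReduceLROnPlateau` fix above: with `automatic_optimization = False` the user steps the scheduler and hands it the monitored metric directly. Stepping on every training step here is for brevity only; per-epoch stepping is more typical.

```python
import torch
from pytorch_lightning import LightningModule


class ManualOptModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.layer(batch).pow(2).mean()
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()
        # Plateau schedulers need the metric; in manual optimization the
        # user calls step() directly, which #7643 made work.
        self.lr_schedulers().step(loss)
        return loss

    def configure_optimizers(self):
        opt = torch.optim.SGD(self.parameters(), lr=0.1)
        sch = torch.optim.lr_scheduler.ReduceLROnPlateau(opt)
        return {"optimizer": opt, "lr_scheduler": sch}
```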

-- Disable quantization aware training observers by default during validating/testing/predicting stages ([#8540](https://github.com/PyTorchLightning/pytorch-lightning/pull/8540))



## [1.3.8] - 2021-07-01

2 changes: 0 additions & 2 deletions pytorch_lightning/callbacks/quantization.py
@@ -22,7 +22,6 @@

import torch
from torch import Tensor
-from torch.quantization import QConfig

from pytorch_lightning.utilities.imports import _TORCH_GREATER_EQUAL_1_8

@@ -152,7 +151,6 @@ def custom_trigger_last(trainer):
.. _PyTorch Quantization: https://pytorch.org/docs/stable/quantization.html#quantization-aware-training
.. _torch.quantization.QConfig: https://pytorch.org/docs/stable/torch.quantization.html#torch.quantization.QConfig
"""

OBSERVER_TYPES = ("histogram", "average")
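
> Editorial note: the two `OBSERVER_TYPES` appear to select the observer family torch uses to gather activation statistics, with `"average"` as the callback's default. A usage sketch:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import QuantizationAwareTraining

# Choose histogram-based observers instead of the default
# moving-average min/max observers.
qcb = QuantizationAwareTraining(observer_type="histogram")
trainer = Trainer(callbacks=[qcb])
```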
6 changes: 2 additions & 4 deletions tests/callbacks/test_quantization.py
@@ -17,12 +17,10 @@
import pytest
import torch
from torchmetrics.functional import mean_absolute_percentage_error as mape
-from torch import Tensor

from pytorch_lightning import seed_everything, Trainer
from pytorch_lightning.callbacks import QuantizationAwareTraining
from pytorch_lightning.utilities.exceptions import MisconfigurationException
-from pytorch_lightning.utilities.memory import get_model_size_mb
from pytorch_lightning.utilities.imports import _TORCH_GREATER_EQUAL_1_8
from pytorch_lightning.utilities.memory import get_model_size_mb
from tests.helpers.boring_model import RandomDataset
@@ -171,7 +169,7 @@ def _get_observer_enabled(fake_quant: FakeQuantizeBase):
)
@RunIf(quantization=True)
def test_quantization_disable_observers(tmpdir, observer_enabled_stages):
"""Test disabling observers"""
"""Test disabling observers."""
qmodel = RegressionModel()
qcb = QuantizationAwareTraining(observer_enabled_stages=observer_enabled_stages)
trainer = Trainer(callbacks=[qcb], default_root_dir=tmpdir)
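
> For context on `_get_observer_enabled` above: on a `FakeQuantizeBase` module the observer state lives in the `observer_enabled` buffer and is toggled with `enable_observer`/`disable_observer`. A standalone editorial sketch of that torch API, independent of the test helpers:

```python
from torch.quantization import FakeQuantize

fq = FakeQuantize()  # default moving-average min/max observer
fq.disable_observer()
assert int(fq.observer_enabled[0]) == 0  # buffer: 0 = disabled, 1 = enabled
fq.enable_observer()
assert int(fq.observer_enabled[0]) == 1
```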
@@ -207,7 +205,6 @@ def test_quantization_disable_observers(tmpdir):

@RunIf(quantization=True)
def test_quantization_val_test_predict(tmpdir):
"""Test the default quantization aware training not affected by validating, testing and predicting"""
"""Test the default quantization aware training not affected by validating, testing and predicting."""
seed_everything(42)
num_features = 16
dm = RegressDataModule(num_features=num_features)
