Disable quantization aware training observers #8540
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master    #8540     +/-   ##
=========================================
  Coverage      89%       89%
=========================================
  Files         182       182
  Lines       16111     16165     +54
=========================================
+ Hits        14273     14327     +54
  Misses       1838      1838
Please add typing to all implemented hooks.
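For context, a hedged sketch of what that request means: type annotations on the Callback hooks the callback overrides. The hook name and signature below follow the standard pytorch_lightning.Callback interface; the body is only a placeholder, not the actual implementation.

```python
import pytorch_lightning as pl


class QuantizationAwareTraining(pl.Callback):
    def on_validation_start(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None:
        # Placeholder body: the real callback would disable the fake-quant
        # observers here when "validate" is not in `observer_enabled_stages`.
        ...
```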
last suggestion, then it looks good to me 🐰
for more information, see https://pre-commit.ci
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
cd3cce0 to 5ca972d
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: tchaton <thomas@grid.ai>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
What does this PR do?
Fixes #8507
With this PR, the observers belonging to fake-quantization modules (torch.quantization.FakeQuantizeBase) are disabled by QuantizationAwareTraining during the validating, testing, and predicting stages by default. The fake-quantization modules use their observers for calibration only in the stages listed in observer_enabled_stages, which defaults to ("train",). QuantizationAwareTraining does not disable observers during the sanity check, since the model has not been calibrated with training data yet; after the sanity check, the fake-quantization modules are restored to their initial states.

Note that we only handle observers belonging to fake-quantization modules.
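For illustration, a minimal usage sketch under the new default. The QuantizationAwareTraining callback and the observer_enabled_stages argument are the ones described in this PR; the model, datamodule, and Trainer settings are hypothetical placeholders.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import QuantizationAwareTraining

# Observers calibrate only in the stages listed here; ("train",) is the new default.
qat = QuantizationAwareTraining(observer_enabled_stages=("train",))

trainer = pl.Trainer(callbacks=[qat], max_epochs=3)
# `MyModel` and `my_datamodule` stand in for a user's LightningModule / DataModule.
# trainer.fit(MyModel(), datamodule=my_datamodule)
```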
Does your PR introduce any breaking changes? If yes, please list them.
The quantization-aware training observers belonging to fake-quantization modules (torch.quantization.FakeQuantizeBase) are now disabled during the validating, testing, and predicting stages by default. The old behavior can be recovered by passing observer_enabled_stages=("train", "validate", "test", "predict") to pytorch_lightning.callbacks.QuantizationAwareTraining.
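A short sketch of recovering the old behavior; only the observer_enabled_stages value is taken from this PR, and all other constructor arguments are left at their defaults.

```python
from pytorch_lightning.callbacks import QuantizationAwareTraining

# Keep observers enabled in every stage, as before this PR.
qat = QuantizationAwareTraining(
    observer_enabled_stages=("train", "validate", "test", "predict"),
)
```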
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet-list:
Did you have fun?
Make sure you had fun coding 🙃