Update imports after QAT was moved out of prototype #1883
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1883
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 1d05e1a with merge base e030626. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from fc52849 to d8f24c1
Thanks for the update! Couple small comments but otherwise looks good
torchtune/utils/_import_guard.py (outdated)

```diff
@@ -20,7 +20,7 @@
 _USE_NEW_TENSOR_CORE_TILED_LAYOUT_API = _is_fbcode() or (
     not _is_fbcode()
     and (
-        ("dev" not in torchao_version and torchao_version >= "0.6.0")
+        ("dev" not in torchao_version and torchao_version >= "0.7.0")
```
So the new API is not in 0.6.1? I thought it landed before that release
Yeah it's not in 0.6.x
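For context, the guard being edited compares raw version strings, which is lexicographic (`"0.10.0" >= "0.7.0"` is False as strings). A minimal sketch of a parsed alternative, assuming the `packaging` library is available — an illustration, not torchtune's actual guard:

```python
from packaging.version import Version

torchao_version = "0.7.0"  # hypothetical installed version string

# Parse versions instead of comparing raw strings, so "0.10.0" sorts
# after "0.7.0"; skip dev builds the same way the guard above does.
_USE_NEW_TENSOR_CORE_TILED_LAYOUT_API = (
    "dev" not in torchao_version and Version(torchao_version) >= Version("0.7.0")
)
```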
torchtune/training/quantization.py (outdated)

```diff
 - :class:`~torchao.quantization.quant_api.Int8DynActInt4WeightQuantizer`: "8da4w" (requires ``torch>=2.3.0``)
-- :class:`~torchao.quantization.prototype.qat.Int8DynActInt4WeightQATQuantizer`: "8da4w-qat" (requires ``torch>=2.4.0``)
+- :class:`~torchao.quantization.qat.Int8DynActInt4WeightQATQuantizer`: "8da4w-qat" (requires ``torch>=2.4.0``)
```
I think we can fully remove both of these torch requirements as we assume stable PyTorch (i.e. >= 2.5)
```python
mode = _quantizer_to_mode.get(type(quantizer), None)
if mode is not None and "module-swap" in mode:
    warn(
        "*QuantizerModuleSwap is deprecated. Please use the version without 'ModuleSwap' instead"
    )
```
nit: should be able to just give the exact classes in the warn message using e.g. f"{quantizer.__class__.__name__}"
(or something like that)
hmm I don't think that's possible given we just assign the classes today, e.g.:
```python
>>> class A: pass
...
>>> B = A
>>> A().__class__.__name__
'A'
>>> B().__class__.__name__
'A'
```
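A self-contained sketch of the point above (the class names are illustrative stand-ins, not torchtune's actual definitions): since the deprecated quantizer is just an alias bound to the new class object, the warning cannot recover which spelling the caller used.

```python
from warnings import warn

class Int8DynActInt4WeightQATQuantizer:
    """Stand-in for the new (non-module-swap) QAT quantizer."""

# The deprecated spelling is a plain alias of the same class object.
Int8DynActInt4WeightQATQuantizerModuleSwap = Int8DynActInt4WeightQATQuantizer

quantizer = Int8DynActInt4WeightQATQuantizerModuleSwap()

# Both spellings resolve to the same type and the same __name__:
print(quantizer.__class__.__name__)  # 'Int8DynActInt4WeightQATQuantizer'

# Hence the generic message rather than f"{quantizer.__class__.__name__}":
warn(
    "*QuantizerModuleSwap is deprecated. Please use the version "
    "without 'ModuleSwap' instead"
)
```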
Force-pushed from d8f24c1 to 48dc978
Force-pushed from 48dc978 to 1d05e1a
@joecummings @ebsmothers good to land?
Summary: pytorch/ao#1091 moved QAT out of prototype in torchao. This is a BC-breaking change, so torchtune also needs to update its QAT imports. Additionally, after pytorch/ao#987 we decided that QAT in torchao will use module swaps to insert fake quantizes, so there is no need for a separate module swap quantizer; this commit removes the `*ModuleSwapQuantizer` option.

Test Plan: `pytest -m integration_test tests/recipes/test_qat_distributed.py` should work.
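For anyone updating downstream code, a hedged migration sketch: the new import path comes from the diff above, while the `prepare`/`convert` calls follow torchao's QAT quantizer API as generally documented, and the toy model is purely illustrative.

```python
import torch.nn as nn

# Old (prototype) path, removed by pytorch/ao#1091:
#   from torchao.quantization.prototype.qat import Int8DynActInt4WeightQATQuantizer
# New path that this PR switches to:
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

model = nn.Sequential(nn.Linear(256, 256))  # toy model for illustration
quantizer = Int8DynActInt4WeightQATQuantizer()
model = quantizer.prepare(model)   # insert fake-quantize logic for QAT
# ... fine-tune the model ...
model = quantizer.convert(model)   # produce the actually quantized model
```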