Add NVFP4 QAT #2666

Conversation
**Summary:** The existing `FakeQuantizeConfig` performs only intx quantization, but we plan to extend QAT to other dtypes such as fp8 and nvfp4 in the near future. This is the necessary refactor before that. Specifically:
```
# New abstract class FakeQuantizeConfigBase
# Rename FakeQuantizeConfig -> IntxFakeQuantizeConfig
```
In the future, we will have other types of `FakeQuantizeConfigBase` for float dtypes that users can pass in instead of the existing Intx one.

**BC-breaking notes:** For BC, we keep the old names around as references to the new ones. However, this commit is still BC-breaking in the sense that a few APIs now accept the abstract `FakeQuantizeConfigBase` instead. For the most part, this abstract class will be hidden from the user.

Before:
```
activation_config = FakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = FakeQuantizeConfig(torch.int4, group_size=32)
```
After:
```
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
```

**Test Plan:**
```
python test/quantization/test_qat.py
```
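For reference, a minimal sketch of the resulting hierarchy. The class names match this PR, but the fields shown are illustrative, not the full torchao definitions:

```python
import abc
from dataclasses import dataclass
from typing import Optional

import torch

@dataclass
class FakeQuantizeConfigBase(abc.ABC):
    """Abstract base shared by all fake quantization configs (intx, fp8, nvfp4, ...)."""

@dataclass
class IntxFakeQuantizeConfig(FakeQuantizeConfigBase):
    """Integer fake quantization config; the fields here are illustrative only."""
    dtype: torch.dtype = torch.int8
    granularity: str = "per_token"
    is_symmetric: bool = True
    group_size: Optional[int] = None

# BC shim: the old name keeps working as an alias for the new class
FakeQuantizeConfig = IntxFakeQuantizeConfig
```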
**Summary:** This commit adds a new multi-step QAT API with the
main goal of simplifying the existing UX. The new API uses the
same `QATConfig` for both the prepare and convert steps, and
automatically infers the fake quantization configs based on
a PTQ base config provided by the user:
```
from torchao.quantization import (
quantize_,
Int8DynamicActivationInt4WeightConfig
)
from torchao.quantization.qat import QATConfig
# prepare
base_config = Int8DynamicActivationInt4WeightConfig(group_size=32)
qat_config = QATConfig(base_config, step="prepare")
quantize_(m, qat_config)
# train (not shown)
# convert
quantize_(m, QATConfig(base_config, step="convert"))
```
The main improvements include:
- A single config for both prepare and convert steps
- A single `quantize_` call for convert (instead of two)
- No chance for incompatible prepare vs convert configs
- Much less boilerplate code for the most common use case
- Simpler config names
For less common use cases such as experimentation, users can
still specify arbitrary fake quantization configs for
activations and/or weights as before. This is still important
since there may not always be a corresponding PTQ base config.
For example:
```
from torchao.quantization import quantize_
from torchao.quantization.qat import IntxFakeQuantizeConfig, QATConfig
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
qat_config = QATConfig(
activation_config=activation_config,
weight_config=weight_config,
step="prepare",
)
quantize_(model, qat_config)
# train and convert same as above (not shown)
```
**BC-breaking notes:** This change by itself is technically not
BC-breaking since we keep around the old path, but will become
so when we deprecate and remove the old path in the future.
Before:
```
# prepare
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
qat_config = IntXQuantizationAwareTrainingConfig(activation_config, weight_config)
quantize_(model, qat_config)
# train (not shown)
# convert
quantize_(model, FromIntXQuantizationAwareTrainingConfig())
quantize_(model, Int8DynamicActivationInt4WeightConfig(group_size=32))
```
After: (see above)
**Test Plan:**
```
python test/quantization/test_qat.py
```
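To make the "train (not shown)" step concrete, here is a minimal end-to-end sketch under the new API. The toy model and training loop are illustrative, not from the PR:

```python
import torch
import torch.nn.functional as F

from torchao.quantization import quantize_, Int8DynamicActivationInt4WeightConfig
from torchao.quantization.qat import QATConfig

m = torch.nn.Sequential(torch.nn.Linear(256, 256))
base_config = Int8DynamicActivationInt4WeightConfig(group_size=32)

# prepare: swap in fake-quantized linears that simulate int8/int4 numerics
quantize_(m, QATConfig(base_config, step="prepare"))

# train: fake quantization runs in the loop, weights stay in high precision
optimizer = torch.optim.SGD(m.parameters(), lr=1e-3)
for _ in range(10):
    x = torch.randn(8, 256)
    loss = F.mse_loss(m(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# convert: replace fake quantization with the real quantized representation
quantize_(m, QATConfig(base_config, step="convert"))
```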
**Summary:** Deprecates QAT APIs that should no longer be used.
Prints a helpful deprecation warning to help users migrate.
**Test Plan:**
```
python test/quantization/test_qat.py -k test_qat_api_deprecation
```
Also manual testing:
```
>>> from torchao.quantization.qat import IntXQuantizationAwareTrainingConfig
>>> IntXQuantizationAwareTrainingConfig()
'IntXQuantizationAwareTrainingConfig' is deprecated and will be removed in a future release. Please use the following API instead:
base_config = Int8DynamicActivationInt4WeightConfig(group_size=32)
quantize_(model, QATConfig(base_config, step="prepare"))
# train (not shown)
quantize_(model, QATConfig(base_config, step="convert"))
Alternatively, if you prefer to pass in fake quantization configs:
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
qat_config = QATConfig(
activation_config=activation_config,
weight_config=weight_config,
step="prepare",
)
quantize_(model, qat_config)
Please see #2630 for more details.
IntXQuantizationAwareTrainingConfig(activation_config=None, weight_config=None)
```
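As a sketch of the mechanism, such a warning can be emitted when the deprecated config is constructed. This is a hypothetical illustration, not the exact torchao code, which may format or print the message differently:

```python
import warnings

_DEPRECATION_MSG = (
    "'IntXQuantizationAwareTrainingConfig' is deprecated and will be removed "
    "in a future release. Please use QATConfig(base_config, step=...) instead. "
    "Please see #2630 for more details."
)

class IntXQuantizationAwareTrainingConfig:
    def __init__(self, activation_config=None, weight_config=None):
        # Point users at the migration path as soon as the old config is created
        warnings.warn(_DEPRECATION_MSG, DeprecationWarning, stacklevel=2)
        self.activation_config = activation_config
        self.weight_config = weight_config
```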
**Summary:** This commit adds a QAT flow for NVFP4, following the
numerics in `NVFP4Tensor` closely but without the dtype casting,
swizzling, and packing/unpacking. Users can call this flow as follows:
```
from torchao.quantization import quantize_
from torchao.quantization.qat import NVFP4FakeQuantizeConfig, QATConfig
qat_config = QATConfig(
activation_config=NVFP4FakeQuantizeConfig(),
weight_config=NVFP4FakeQuantizeConfig(),
step="prepare",
)
quantize_(model, qat_config)
```
**Test Plan:**
```
python test/quantization/test_qat.py -k test_qat_nvfp4
```
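For intuition, here is a minimal sketch of the fake quantization math this flow simulates: snap values to the FP4 (E2M1) grid using one scale per block of 16 elements, keeping everything in high precision with no dtype casting, swizzling, or packing. This is an illustrative helper, not the actual torchao implementation, and it omits details such as the per-tensor scale:

```python
import torch

# The 8 non-negative magnitudes representable in FP4 E2M1
_E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def nvfp4_fake_quantize(x: torch.Tensor, block_size: int = 16) -> torch.Tensor:
    """Quantize-dequantize x with NVFP4-like numerics, staying in x.dtype."""
    assert x.numel() % block_size == 0
    blocks = x.reshape(-1, block_size)
    # One scale per block, chosen so the block max maps to the largest FP4 value
    scale = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / 6.0
    scaled = blocks / scale
    # Snap each scaled value to the nearest E2M1 magnitude, preserving sign
    grid = _E2M1_GRID.to(device=x.device, dtype=x.dtype)
    idx = (scaled.abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
    dq = grid[idx] * scaled.sign() * scale
    return dq.reshape(x.shape)
```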
**Summary:** This commit adds a QAT flow for NVFP4, following the
numerics in `NVFP4Tensor` closely but without the dtype casting,
swizzling, and packing/unpacking. Users can call this flow as follows:
```
from torchao.quantization import quantize_
from torchao.quantization.qat import NVFP4FakeQuantizeConfig, QATConfig
qat_config = QATConfig(
weight_config=NVFP4FakeQuantizeConfig(),
step="prepare",
)
quantize_(model, qat_config)
```
**Test Plan:**
```
python test/quantization/test_qat.py -k test_qat_nvfp4
```
Initial benchmarks on fine-tuning Qwen3-1.7B on oasst1 for 3 epochs:
```
# Without QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.7927|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7323|± | N/A|
| | |none |None |word_perplexity|↓ |18.8815|± | N/A|
# With QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.7921|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7316|± | N/A|
| | |none |None |word_perplexity|↓ |18.8409|± | N/A|
```
```
per_tensor_scale = None

# quantize
scale, q = _nvfp4_quantize(
```
no grad?

added STE instead (see `_Float8Round`)
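For context, a straight-through estimator (STE) rounds in the forward pass but passes gradients through unchanged, so fake-quantized weights remain trainable. A minimal sketch of the idea; the actual `_Float8Round` in torchao may differ in detail:

```python
import torch

class _RoundSTE(torch.autograd.Function):
    """Round to nearest in forward, identity gradient in backward."""

    @staticmethod
    def forward(ctx, x: torch.Tensor) -> torch.Tensor:
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        # Treat rounding as a no-op for autograd so gradients flow to x
        return grad_output

# Usage: quantize-dequantize with gradients flowing through the rounding
x = torch.randn(4, requires_grad=True)
scale = x.detach().abs().max() / 6.0  # detach so gradients only flow through x
y = _RoundSTE.apply(x / scale) * scale
y.sum().backward()  # x.grad is all ones, as if no rounding happened
```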
```
if x.dim() == 3:
    x = x.view(-1, x.shape[-1])
```
why does this happen here? can this happen in quant primitive ops?

I found that this was necessary for activations during training (for the batch size). Not sure if it was necessary for inference? @drisspg
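A small illustration of the shape handling under discussion, with hypothetical sizes: during training, linear-layer activations carry batch and sequence dimensions, so the 3D input is flattened to one row per token before blockwise quantization:

```python
import torch

x = torch.randn(8, 128, 2048)    # [batch, seq_len, hidden] during training
if x.dim() == 3:
    x = x.view(-1, x.shape[-1])  # -> [8 * 128, 2048], one row per token
```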
**Summary:** This commit adds a QAT flow for NVFP4, following the
numerics in `NVFP4Tensor` closely but without the dtype casting,
swizzling, and packing/unpacking. Users can call this flow as follows:
```
from torchao.quantization import quantize_
from torchao.quantization.qat import NVFP4FakeQuantizeConfig, QATConfig
qat_config = QATConfig(
weight_config=NVFP4FakeQuantizeConfig(),
step="prepare",
)
quantize_(model, qat_config)
```
**Test Plan:**
```
python test/quantization/test_qat.py -k test_qat_nvfp4
```
Initial benchmarks on fine-tuning Qwen3-1.7B on alpaca for 3 epochs:
```
# Without QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.8322|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7804|± | N/A|
| | |none |None |word_perplexity|↓ |21.8611|± | N/A|
# With QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.8271|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7741|± | N/A|
| | |none |None |word_perplexity|↓ |21.4467|± | N/A|
```
Looks good
Stack from ghstack (oldest at bottom):

Summary: This commit adds a QAT flow for NVFP4, following the numerics in `NVFP4Tensor` closely but without the dtype casting, swizzling, and packing/unpacking. Users can call this flow as follows: (see above)

Test Plan: (see above)

Initial benchmarks on fine-tuning Qwen3-1.7B on alpaca for 3 epochs: (see above)