Conversation

vkuzo commented Dec 3, 2025

Summary:

Industry experience tells us that RCEIL is the better scale-calculation mode; the benchmarks below are deliberately light, just to validate that we can measure the improvement. This PR switches the inference default to RCEIL.

Flipping the overall default is left as a TODO because we first need to update our dim1 Triton and C++ kernels accordingly; we can do that after the next branch cut.
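
For context, here is a minimal sketch of the difference between the two scale-calculation modes. It assumes FP8 e4m3 as the element dtype (max representable value 448) and uses log2/ceil for readability; the `mx_scale_exponent` helper and the exact formulas are illustrative, not torchao's implementation.

```
import torch

F8E4M3_MAX = 448.0  # max representable value of the fp8 e4m3 element dtype

def mx_scale_exponent(block: torch.Tensor, mode: str) -> torch.Tensor:
    # power-of-two exponent of the shared per-block scale (illustrative)
    amax = block.abs().amax()
    if mode == "floor":
        # truncate toward -inf: cheap, but the scaled block max can land
        # above the destination dtype's max, which then requires saturation
        return torch.floor(torch.log2(amax / F8E4M3_MAX))
    elif mode == "rceil":
        # round amax / dest_max up to the next power of two, which
        # guarantees amax / 2**exp <= dest_max
        return torch.ceil(torch.log2(amax / F8E4M3_MAX))
    raise ValueError(f"unknown mode: {mode}")
```

The rounding-up behavior is one intuition for the accuracy lift below: with RCEIL, the largest value in each block never saturates the e4m3 range.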

Accuracy

* before

```
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
7.609070006132819, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.4615491037668933,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.5474983002838458, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7292817679558011,
'acc_stderr,none': 0.012487904760626407}
```

* after

```
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
7.605192917647689, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.4614098103053235,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.547360797163005, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7355958958168903,
'acc_stderr,none': 0.012394724896983764}
```

A nice improvement in both wikitext perplexity and winogrande accuracy.

Performance on norm -> linear benchmarks

* before: https://gist.github.com/vkuzo/e4eab53fc9a23c007585c2235a7c7088
* after: https://gist.github.com/vkuzo/4ac7cde8a3ec1cd8f4d66847df091f7e

There is a slight performance regression, but we have not optimized RCEIL performance at all and are not using the hardware intrinsics yet, so there is room to optimize.
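
As a hypothetical illustration of why an optimized RCEIL path can be cheap: rounding amax / dest_max up to the next power of two needs only exponent and mantissa bit inspection, no log2 or ceil. The sketch below assumes positive, normal float32 inputs and omits zero/denormal/inf handling; it is a sketch of the idea, not the planned kernel.

```
import torch

def rceil_pow2_exponent(x: torch.Tensor) -> torch.Tensor:
    bits = x.view(torch.int32)          # reinterpret float32 bits
    biased_exp = (bits >> 23) & 0xFF    # 8 exponent bits
    mantissa = bits & 0x7FFFFF          # 23 mantissa bits
    # if any mantissa bit is set, x is strictly above 2**(exp - 127),
    # so rounding up to a power of two bumps the exponent by one
    return biased_exp - 127 + (mantissa != 0).to(torch.int32)

# e.g. rceil_pow2_exponent(torch.tensor([1.0, 1.5])) -> tensor([0, 1])
```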

Test Plan:

```
pytest test/prototype/mx_formats/ -s -x
```

Reviewers:

Subscribers:

Tasks:

Tags:


vkuzo commented Dec 3, 2025

Stack from ghstack (oldest at bottom):


pytorch-bot bot commented Dec 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3428

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

vkuzo added a commit that referenced this pull request Dec 3, 2025
ghstack-source-id: f47b33f
ghstack-comment-id: 3608933956
Pull-Request: #3428
meta-cla bot added the CLA Signed label Dec 3, 2025
vkuzo added the topic: improvement label Dec 3, 2025
vkuzo changed the title from "flip mx scaling enum default to RCEIL" to "flip mx inference scaling setting to RCEIL" Dec 4, 2025
vkuzo added a commit that referenced this pull request Dec 4, 2025
ghstack-source-id: 7c8fd99
ghstack-comment-id: 3608933956
Pull-Request: #3428
vkuzo added a commit that referenced this pull request Dec 4, 2025
ghstack-source-id: a431565
ghstack-comment-id: 3608933956
Pull-Request: #3428
vkuzo changed the base branch from gh/vkuzo/173/head to main December 4, 2025 11:22
vkuzo merged commit 534bea5 into main Dec 4, 2025
39 of 51 checks passed
vkuzo added a commit that referenced this pull request Dec 10, 2025
* add MXFP8 all gather support

* added TODO for future feature

* remove emoji from comment

* fixed ruff formating

* fixed ruff formatting

* add mxfp8 and nvfp4 to Llama eval scripts (#3394)

Update

[ghstack-poisoned]

* flip mx inference scaling setting to RCEIL (#3428)

* Update

[ghstack-poisoned]

* Update

[ghstack-poisoned]

* Update

[ghstack-poisoned]

* add CLAUDE.local.md to gitignore (#3437)

Summary:

taking claude code for a more thorough spin, will start with local
instructions and will see what makes sense to upstream

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

* bump python version in tutorial ci workflow (#3439)

* [CPU] Reland qconv fp8 fusion passes (#3433)

* [Reland][PT2E][X86] Add Inductor fusion passes of float8 qconv for X86Inductor backend

* add torch version check for Qconv FP8 UTs

* fix format issue

* Skip tests for ROCm

---------

Co-authored-by: Sun, Jiayi <jiayi.sun@intel.com>

* Int8Tensor migration cleanup (#3407)

* Int8Tensor migration

Summary:

This PR creates a new Int8Tensor and updates the configs to use the new
Int8Tensor flow

Test Plan:

To ensure BC:
```
pytest test/quantization/test_quant_api.py
```

To test new Int8Tensor:
```
pytest test/quantization/quantize_/workflows/int8/test_int8_tensor.py
```

Reviewers:

Subscribers:

Tasks:

Tags:

* ruff fixes

* add init

* fix ruff again

* update

* wip

* undo update tests

* fix ruff

* fix varname

* fix typing

* add tests

* fix dtype

* fix ci

* address granularity cr

* update _choose_quant_func_and_quantize_tensor

* make block size required attribute

* made dtype required as well

* address nits

* skip per tensor weight only test for now

* [xpu][test] Port 2 test/dtypes_{floatx, bitpacking} UT files to intel XPU (#3368)

* enable test/dtypes/test_bitpacking.py on intel xpu

* enable test/dtypes/test_floatx.py

* enable test/dtypes/test_floatx.py

* fix format issue

* fix format issue

* update _DEVICES

* [xpu][test] Port 2 test/quantization/pt2e/test_{quantize_pt2e, quantize_pt2e_qat} UT files to intel XPU (#3405)

* add test/quantization/pt2e/test_quantize_pt2e.py

* add test/quantization/pt2e/test_quantize_pt2e.py

* test/quantization/pt2e/test_quantize_pt2e_qat.py

* test/quantization/pt2e/test_quantize_pt2e_qat.py

* fix format issue

* update format

* increase timeout for xpu

* [Intel GPU] Enable optim SR test (#3055)

* updated test with rebase changes

* added checks to run only on CUDA with compatibility >=9

* updated test for H100

* added test to workflow

---------

Co-authored-by: Vasiliy Kuznetsov <vkuzo@users.noreply.github.com>
Co-authored-by: Daniel Vega-Myhre <danvm@meta.com>
Co-authored-by: Xia Weiwen <weiwen.xia@intel.com>
Co-authored-by: Sun, Jiayi <jiayi.sun@intel.com>
Co-authored-by: Jesse Cai <jessecai@meta.com>
Co-authored-by: xiangdong <40376367+zxd1997066@users.noreply.github.com>
Co-authored-by: Artur Lesniak <artur.lesniak@intel.com>