Conversation

@vkuzo (Contributor) commented Nov 26, 2025

Summary:

Adds mxfp8 and nvfp4 to the Llama eval scripts.

| config | wikitext word ppl (lower is better) | winogrande acc (higher is better) |
| --- | --- | --- |
| bf16 baseline | 7.55 | 0.743 |
| mxfp8_floor | 7.61 | 0.729 |
| mxfp8_rceil | 7.60 | 0.739 |
| nvfp4 | 8.44 | 0.718 |
| float8 rowwise | 7.62 | 0.737 |
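
For reference, a minimal sketch of how these modes can be applied to a model directly with torchao's `quantize_` API. The import path and config class names (`MXFPInferenceConfig`, `NVFP4InferenceConfig`) are assumptions that may not match every torchao version, and the toy model is purely illustrative:

```
import torch

from torchao.quantization import quantize_
# Assumed import path and class names; these live under the prototype
# namespace and may differ across torchao versions.
from torchao.prototype.mx_formats import MXFPInferenceConfig, NVFP4InferenceConfig

# Toy stand-in for the Llama 3.1 8B checkpoint used in the runs below.
model = torch.nn.Sequential(torch.nn.Linear(4096, 4096)).cuda().to(torch.bfloat16)

# Convert Linear layers to mxfp8; NVFP4InferenceConfig() would be used for nvfp4.
quantize_(model, MXFPInferenceConfig())
```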

Results:

// bf16 baseline
with-proxy time python torchao/_models/llama/eval.py --checkpoint_path
checkpoints/meta-llama/Meta-Llama-3.1-8B/model.pth --print_model --tasks
wikitext winogrande
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
7.5472105433748435, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.459319739134015,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.5452960145272896, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7426992896606156,
'acc_stderr,none': 0.012285989618865697}
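
(Aside, not part of the PR: the three wikitext numbers are internally consistent, since lm-eval reports bits_per_byte as log2 of byte_perplexity. A quick check on the baseline figures:)

```
import math

byte_ppl = 1.459319739134015        # byte_perplexity from the run above
bits_per_byte = 0.5452960145272896  # bits_per_byte from the run above

# bits_per_byte == log2(byte_perplexity)
assert abs(math.log2(byte_ppl) - bits_per_byte) < 1e-6
```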

// mxfp8 with floor scaling, turned off compile as it seemed stuck in coordinate descent tuning
with-proxy time python torchao/_models/llama/eval.py --checkpoint_path
checkpoints/meta-llama/Meta-Llama-3.1-8B/model.pth --print_model --tasks
wikitext winogrande --quantization mxfp8
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
7.609070006132819, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.4615491037668933,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.5474983002838458, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7292817679558011,
'acc_stderr,none': 0.012487904760626407}
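
(For readers unfamiliar with the floor/rceil distinction: it controls how the shared power-of-two E8M0 block scale is rounded. A simplified sketch of the idea, not torchao's actual ScaleCalculationMode implementation:)

```
import math

F8E4M3_MAX = 448.0  # largest normal value in float8_e4m3fn

def mx_block_scale(amax: float, mode: str) -> float:
    # Pick a power-of-two scale so that amax / scale lands in fp8 range.
    # Assumes amax > 0.
    raw_exp = math.log2(amax / F8E4M3_MAX)
    if mode == "floor":
        exp = math.floor(raw_exp)  # cheaper to compute, can clip the block max
    else:  # "rceil"
        exp = math.ceil(raw_exp)   # rounds up, so the block max never clips
    return 2.0 ** exp
```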

// mxfp8 with rceil scaling
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
7.605445025927753, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.4614188696390065,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.5473697404554175, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7387529597474349,
'acc_stderr,none': 0.012346914863415201}

// nvfp4
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
8.44478255417328, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.4903102070118779,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.5756126578938119, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7182320441988951,
'acc_stderr,none': 0.012643326011853038}

// float8 rowwise (for comparison to existing technique)
wikitext: {'alias': 'wikitext', 'word_perplexity,none':
7.618818730886612, 'word_perplexity_stderr,none': 'N/A',
'byte_perplexity,none': 1.4618990946965715,
'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none':
0.5478437349532752, 'bits_per_byte_stderr,none': 'N/A'}
winogrande: {'alias': 'winogrande', 'acc,none': 0.7371744277821626,
'acc_stderr,none': 0.01237092252726192}
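
(To put the deltas in perspective, relative wikitext word-perplexity regression vs the bf16 baseline, computed from the numbers above:)

```
baseline = 7.5472105433748435
for name, ppl in [
    ("mxfp8_floor", 7.609070006132819),
    ("mxfp8_rceil", 7.605445025927753),
    ("nvfp4", 8.44478255417328),
    ("float8_rowwise", 7.618818730886612),
]:
    print(f"{name}: +{100 * (ppl / baseline - 1):.2f}%")
# mxfp8_floor: +0.82%, mxfp8_rceil: +0.77%, nvfp4: +11.89%, float8_rowwise: +0.95%
```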

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
@vkuzo (Contributor, Author) commented Nov 26, 2025

vkuzo added a commit that referenced this pull request Nov 26, 2025
(same summary and results as the PR description above)
ghstack-source-id: a815634
ghstack-comment-id: 3581080988
Pull-Request: #3394
pytorch-bot (bot) commented Nov 26, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3394

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit b4cf67b with merge base 16aad7c:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Nov 26, 2025
[ghstack-poisoned]
vkuzo added a commit that referenced this pull request Dec 1, 2025
(same summary and results as the PR description above)
ghstack-source-id: a656363
ghstack-comment-id: 3581080988
Pull-Request: #3394
[ghstack-poisoned]
vkuzo added a commit that referenced this pull request Dec 3, 2025
(same summary and results as the PR description above)
ghstack-source-id: 3a2d8ef
ghstack-comment-id: 3581080988
Pull-Request: #3394
vkuzo added the topic: improvement label Dec 3, 2025
@vkuzo vkuzo merged commit ca2132e into main Dec 4, 2025
50 of 57 checks passed
vkuzo added a commit that referenced this pull request Dec 10, 2025
* add MXFP8 all gather support

* added TODO for future feature

* remove emoji from comment

* fixed ruff formating

* fixed ruff formatting

* add mxfp8 and nvfp4 to Llama eval scripts (#3394)

Update

[ghstack-poisoned]

* flip mx inference scaling setting to RCEIL (#3428)

* Update

[ghstack-poisoned]

* Update

[ghstack-poisoned]

* Update

[ghstack-poisoned]

* add CLAUDE.local.md to gitignore (#3437)

Summary:

taking claude code for a more thorough spin, will start with local
instructions and will see what makes sense to upstream

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

* bump python version in tutorial ci workflow (#3439)

* [CPU] Reland qconv fp8 fusion passes (#3433)

* [Reland][PT2E][X86] Add Inductor fusion passes of float8 qconv for X86Inductor backend

* add torch version check for Qconv FP8 UTs

* fix format issue

* Skip tests for ROCm

---------

Co-authored-by: Sun, Jiayi <jiayi.sun@intel.com>

* Int8Tensor migration cleanup (#3407)

* Int8Tensor migration

Summary:

This PR creates a new Int8Tensor and updates the configs to use the new
Int8Tensor flow

Test Plan:

To ensure BC:
```
pytest test/quantization/test_quant_api.py
```

To test new Int8Tensor:
```
pytest test/quantization/quantize_/workflows/int8/test_int8_tensor.py
```

Reviewers:

Subscribers:

Tasks:

Tags:

* ruff fixes

* add init

* fix ruff again

* update

* wip

* undo update tests

* fix ruff

* fix varname

* fix typing

* add tests

* fix dtype

* fix ci

* address granularity cr

* update _choose_quant_func_and_quantize_tensor

* make block size required attribute

* made dtype required as well

* address nits

* skip per tensor weight only test for now

* [xpu][test] Port 2 test/dtypes_{floatx, bitpacking} UT files to intel XPU (#3368)

* enable test/dtypes/test_bitpacking.py on intel xpu

* enable test/dtypes/test_floatx.py

* enable test/dtypes/test_floatx.py

* fix format issue

* fix format issue

* update _DEVICES

* [xpu][test] Port 2 test/quantization/pt2e/test_{quantize_pt2e, quantize_pt2e_qat} UT files to intel XPU (#3405)

* add test/quantization/pt2e/test_quantize_pt2e.py

* add test/quantization/pt2e/test_quantize_pt2e.py

* test/quantization/pt2e/test_quantize_pt2e_qat.py

* test/quantization/pt2e/test_quantize_pt2e_qat.py

* fix format issue

* update format

* increase timeout for xpu

* [Intel GPU] Enable optim SR test (#3055)

* updated test with rebase changes

* added checks to run only on CUDA with compatibility >=9

* updated test for H100

* added test to workflow

---------

Co-authored-by: Vasiliy Kuznetsov <vkuzo@users.noreply.github.com>
Co-authored-by: Daniel Vega-Myhre <danvm@meta.com>
Co-authored-by: Xia Weiwen <weiwen.xia@intel.com>
Co-authored-by: Sun, Jiayi <jiayi.sun@intel.com>
Co-authored-by: Jesse Cai <jessecai@meta.com>
Co-authored-by: xiangdong <40376367+zxd1997066@users.noreply.github.com>
Co-authored-by: Artur Lesniak <artur.lesniak@intel.com>