
create staticmethod for quantizing weights of QATLinear and QATEmbedding #2079


Merged

merged 1 commit into pytorch:main from export-D73201409 on Apr 21, 2025

Conversation

@navsud (Contributor) commented Apr 18, 2025

Summary:
For saving the quantized weights, we have been using ad hoc notebooks with code copy-pasted from the convert method.
This has been a source of numerical discrepancies. To avoid this issue, this diff separates the weight quantization logic into standalone staticmethods so that we can reuse it.

Reviewed By: jerryzh168

Differential Revision: D73201409
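
The diff itself is not inlined in this thread, so below is a minimal sketch of the pattern the summary describes, under stated assumptions: QATLinear comes from the PR title, but the quantize_weights name, its signature, and the per-tensor int8 scheme are illustrative, not torchao's actual API.

```python
import torch


class QATLinear(torch.nn.Linear):
    """Hypothetical stand-in for torchao's QAT linear module."""

    @staticmethod
    def quantize_weights(weight: torch.Tensor,
                         scale: torch.Tensor,
                         zero_point: torch.Tensor) -> torch.Tensor:
        # Single shared implementation: convert() and any weight-export
        # notebook call this exact function, so the two paths cannot
        # drift numerically.
        qmin, qmax = -128, 127  # assumed signed int8 range
        q = torch.round(weight / scale + zero_point)
        return torch.clamp(q, qmin, qmax).to(torch.int8)


# convert() and a standalone export script both call the same staticmethod:
linear = QATLinear(4, 4, bias=False)
scale = linear.weight.detach().abs().max() / 127.0
zero_point = torch.zeros(())
qweight = QATLinear.quantize_weights(linear.weight.detach(), scale, zero_point)
```

Because the logic lives in a staticmethod on the module class, an export notebook no longer needs to copy code out of convert; both call sites execute the identical quantization function.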

pytorch-bot bot commented Apr 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2079

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 6 Unrelated Failures

As of commit 7956303 with merge base 0045d88:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Apr 18, 2025
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D73201409

@navsud added the topic: not user facing label Apr 18, 2025
@navsud force-pushed the export-D73201409 branch from d953902 to 7956303 on April 21, 2025 at 17:16
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D73201409

@facebook-github-bot merged commit 4805efd into pytorch:main Apr 21, 2025
12 of 20 checks passed
lisjin pushed a commit to lisjin/ao that referenced this pull request Apr 22, 2025
Differential Revision: D73201409

Pull Request resolved: pytorch#2079
@facebook-github-bot (Contributor) commented:

This pull request has been reverted by 896f61b.

Labels: CLA Signed, fb-exported, Reverted, topic: not user facing