
Move Uintx out of prototype for future extension #635

Merged — 1 commit merged into pytorch:main on Aug 13, 2024

Conversation

jerryzh168 (Contributor):

Summary:
Thanks @vayuda for adding the initial version of the Uintx tensor subclass. We can now integrate it with the torch.uint1 to torch.uint7 dtypes, with some helpers, to unblock the benefit of bitpacking (model size savings) for people first; we can then gradually optimize the performance.

ExecuTorch is also planning to integrate its low-bit kernels with us, so a more native experience with these lower-bit dtypes will be required / useful there as well.

Test Plan:
python test/dtypes/test_uintx.py

Reviewers:

Subscribers:

Tasks:

Tags:
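For context, the model-size saving the summary refers to comes from storing several sub-byte values in each uint8 of backing storage. The sketch below is a hypothetical, pure-Python illustration of that idea (it is not torchao's actual packing code or API): two 4-bit values are packed per byte, so N uint4 values need only ceil(N/2) bytes.

```python
# Hypothetical illustration of bitpacking (not torchao's implementation):
# pack uint4 values (ints in [0, 15]) two-per-byte, and unpack them again.

def pack_uint4(values):
    """Pack a list of ints in [0, 15] into bytes, two values per byte."""
    if len(values) % 2:
        values = values + [0]  # pad to an even count
    packed = bytearray()
    for lo, hi in zip(values[::2], values[1::2]):
        packed.append((hi << 4) | lo)  # high nibble | low nibble
    return bytes(packed)

def unpack_uint4(packed, n):
    """Inverse of pack_uint4; recover the first n values."""
    values = []
    for byte in packed:
        values.append(byte & 0x0F)         # low nibble
        values.append((byte >> 4) & 0x0F)  # high nibble
    return values[:n]

vals = [3, 12, 7, 1, 15]
packed = pack_uint4(vals)
assert unpack_uint4(packed, len(vals)) == vals
assert len(packed) == 3  # 5 uint4 values fit in 3 bytes instead of 5
```

The same principle extends to the other sub-byte widths (e.g. eight uint1 values per byte), which is where the torch.uint1 through torch.uint7 dtypes come in.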


pytorch-bot bot commented Aug 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/635

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ec39e6c with merge base 433cd14:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 8, 2024
@jerryzh168 force-pushed the move-intx branch 2 times, most recently from bc7bff3 to 462e94e on August 9, 2024 at 18:39
@msaroufim (Member) left a comment quoting the PR summary above.


@jerryzh168 jerryzh168 merged commit e7fc0ed into pytorch:main Aug 13, 2024
14 checks passed
@jerryzh168 jerryzh168 deleted the move-intx branch August 13, 2024 01:53