enable cpu integer matmul #126
Conversation
Thank you for the pull-request! I'd rather guard the call to int_mm on CPU with a minimum pytorch version, to avoid errors with the current pytorch:

```python
from packaging import version
...
if version.parse(torch.__version__) >= version.parse("2.3.0"):
    # Use CPU int_mm
```
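The guard suggested above can be sketched as a small helper. This is an illustrative sketch, not code from the PR: the function name `supports_cpu_int_mm` is hypothetical, and it assumes the `packaging` library (a dependency of most Python tooling) is installed.

```python
from packaging import version


def supports_cpu_int_mm(torch_version: str) -> bool:
    """Hypothetical helper: True if this torch version has CPU torch._int_mm.

    The cutoff "2.3.0" mirrors the guard discussed in the review. Note the
    version string must be quoted: version.parse(2.3.0) is a syntax error.
    """
    return version.parse(torch_version) >= version.parse("2.3.0")


print(supports_cpu_int_mm("2.2.1"))              # False: stable release predates CPU int_mm
print(supports_cpu_int_mm("2.4.0.dev20240101"))  # True: nightly builds sort above 2.3.0
```

One detail worth noting: `packaging.version` handles nightly version strings like `2.4.0.dev20240101` correctly (a `.devN` suffix sorts below its own release but above any earlier release), which is why this check works with the nightly build mentioned later in the thread.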
Got it. I added a release version check so it works with the current nightly build.
Looks good to me, but you need to:
- fix style (`make style` from the root directory),
- rebase on main (no merge please: `git rebase <commit-before-your-branch> --onto upstream/main`).
@dacorvo, I rebased and ran `make style`. Let me know if I missed something.
Merged as #130
I don't know how you applied styling, but it was all wrong ... I fixed your branch and created another pull-request. |
Regarding issue #64, torch now supports integer matmul (`torch._int_mm`) on the CPU on its master branch, with this PR merged. Here, I made small changes to enable this kernel in quanto. To use the new backend, install the torch nightly build after installing quanto.