Fix affine quantized tensor to device calls #726

Merged: 4 commits into pytorch:main from fix-device-aqt, Aug 22, 2024

Conversation

jerryzh168
Contributor

Summary:
Fixes: #698

Also added `TorchAOBaseTensor`, addressing part of #710

Test Plan:
python test/dtypes/test_affine_quantized.py

Reviewers:

Subscribers:

Tasks:

Tags:
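The class of bug this PR fixes (#698: `.cuda()` raising on an `AffineQuantizedTensor`) can be sketched with a plain wrapper class. This is an illustration only, not torchao's actual implementation — the real `AffineQuantizedTensor` is a `torch.Tensor` subclass that dispatches through aten ops — but the pattern is the same: `.to()` must forward the target device to every inner tensor.

```python
import torch

class QuantizedWrapperSketch:
    """Hypothetical stand-in for a quantized tensor wrapper (not torchao
    code), showing the pattern the fix follows: device moves must be
    applied to all inner tensors, not just the outer wrapper."""

    def __init__(self, int_data, scale):
        self.int_data = int_data  # e.g. int8 quantized values
        self.scale = scale        # quantization scales

    def to(self, device):
        # Forgetting to move any one of the inner tensors is exactly the
        # kind of bug that makes .cuda()/.to(device) raise later, when an
        # op sees operands on different devices.
        return QuantizedWrapperSketch(
            self.int_data.to(device),
            self.scale.to(device),
        )
```

A quick CPU-only usage example: `QuantizedWrapperSketch(torch.zeros(4, dtype=torch.int8), torch.ones(1)).to("cpu")` returns a wrapper whose `int_data` and `scale` both live on the requested device.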


pytorch-bot bot commented Aug 22, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/726

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f2e4a39 with merge base ac8ce4c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 22, 2024
torchao/utils.py Outdated
Comment on lines 298 to 300
memory_format = (
memory_format if memory_format is not None else torch.preserve_format
)
Collaborator


I have wondered about this for some time. Does it make sense to apply memory_format to the inner tensors? From what I understand, it's mainly for channels_last in convolutions (https://pytorch.org/docs/stable/tensor_attributes.html#torch-memory-format). Maybe we can skip the memory_format argument as well?
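For context, the channels_last behavior the comment refers to can be shown with plain tensors (illustrative snippet, not torchao code): `memory_format` changes the stride layout of a 4-D (NCHW) tensor, not its shape, which is why it rarely matters for the packed inner tensors of a quantized wrapper.

```python
import torch

# memory_format mainly matters for 4-D (NCHW) tensors fed to convolutions:
# channels_last keeps the same shape but reorders the underlying strides.
x = torch.randn(1, 3, 4, 4)
y = x.to(memory_format=torch.channels_last)

print(x.is_contiguous())                                   # True (NCHW)
print(y.is_contiguous(memory_format=torch.channels_last))  # True (NHWC strides)
print(y.shape == x.shape)                                  # True: shape unchanged
print(x.stride(), y.stride())                              # strides differ
```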

Contributor Author

@jerryzh168 jerryzh168 Aug 22, 2024


Yeah, this probably does not apply to most of these tensors; I don't think removing it would impact things much either.

I guess if some special cases need these args in the future, we can just copy this function and add the omitted args.
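The "copy the function and drop the omitted args" idea could look roughly like this. This is a hypothetical sketch, not the actual helper in torchao/utils.py, and it leans on `torch._C._nn._parse_to`, a private PyTorch API (the same one `nn.Module.to` uses internally to normalize the overloaded `to(...)` call forms):

```python
import torch

def get_to_kwargs_sketch(tensor, *args, **kwargs):
    # Hypothetical helper: normalize the many Tensor.to(...) call forms
    # (positional device, positional dtype, keyword args, ...) into an
    # explicit device/dtype dict, deliberately dropping memory_format
    # and non_blocking as discussed above.
    device, dtype, _non_blocking, _memory_format = torch._C._nn._parse_to(
        *args, **kwargs
    )
    return {
        "device": device if device is not None else tensor.device,
        "dtype": dtype if dtype is not None else tensor.dtype,
    }
```

For example, `get_to_kwargs_sketch(t, dtype=torch.float16)` fills in `t.device` for the unspecified device, so the inner tensors can all be moved with one consistent set of kwargs.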

Collaborator


If the current tests pass, I think it should be good for now!

@jerryzh168 jerryzh168 merged commit 99644e9 into pytorch:main Aug 22, 2024
16 checks passed
@jerryzh168 jerryzh168 deleted the fix-device-aqt branch August 22, 2024 05:24
Successfully merging this pull request may close these issues.

[AQT-bug] AffineQuantizedTensor raises error with .cuda()