Register codebook quant ops #1988
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1988
Note: Links to docs will display an error until the docs builds have been completed.
❌ 13 new failures as of commit ebaa5fc with merge base 9516764.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
-    codes (torch.Tensor): Indices of codebook entries for each block,
-        shape (d1//b1, d2//b2, ..., dN//bN).
+    codes (torch.Tensor): torch.int32 dtype, indices of codebook entries for each block,
+        shape (d1//b1, d2//b2, ..., dN//bN).
     codebook (torch.Tensor): Codebook tensor used for quantization,
         shape (k, b1, b2, ..., bN) where b_i are block sizes.
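For concreteness, here is a minimal sketch of the shapes these docs describe. All sizes below are made up for illustration, and k is assumed to be the number of codebook entries (e.g. 2**nbits for nbits-bit quantization):

```python
import torch

d1, d2 = 8, 16   # weight shape
b1, b2 = 2, 4    # block sizes
k = 16           # assumed: 4-bit quantization -> 16 codebook entries

codebook = torch.randn(k, b1, b2)                  # (k, b1, b2)
codes = torch.randint(0, k, (d1 // b1, d2 // b2),  # (d1//b1, d2//b2)
                      dtype=torch.int32)

# Each code selects one (b1, b2) block from the codebook.
blocks = codebook[codes.long()]                    # (d1//b1, d2//b2, b1, b2)
# Reassemble the blocks into the full (d1, d2) tensor.
deq = blocks.permute(0, 2, 1, 3).reshape(d1, d2)   # (8, 16)
```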
nit: say what k is
will update docs after I update the code to support block_size
@@ -90,20 +95,24 @@ def quantize_codebook(
     return codes.to(code_dtype)


+@register_custom_op
 def dequantize_codebook(
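For background on what registering as a custom op buys here, a rough sketch using torch.library directly. register_custom_op is torchao's own helper; the myns namespace and the body below are hypothetical stand-ins, not the actual implementation:

```python
import torch
from torch.library import custom_op

# A torch.library custom op is preserved as a single node in the
# exported graph instead of being decomposed into aten internals.
@custom_op("myns::dequantize_codebook", mutates_args=())
def dequantize_codebook(
    codes: torch.Tensor, codebook: torch.Tensor
) -> torch.Tensor:
    blocks = codebook[codes.long()]
    d1 = codes.shape[0] * codebook.shape[1]
    d2 = codes.shape[1] * codebook.shape[2]
    return blocks.permute(0, 2, 1, 3).reshape(d1, d2)

@dequantize_codebook.register_fake
def _(codes, codebook):
    # Shape propagation for export / fake tensors.
    d1 = codes.shape[0] * codebook.shape[1]
    d2 = codes.shape[1] * codebook.shape[2]
    return codes.new_empty(d1, d2, dtype=codebook.dtype)
```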
IIUC, this does not look like it supports granularity, which we will want.
From what I can tell, k is the index range, e.g., for 4-bit quantization, k = 16. Each idx=i is mapped to the tensor codebook[i]. So we have one codebook/LUT for the whole tensor that maps indices to tensors.
This seems a bit complicated to me. For CoreML, the default is that each idx maps to a scalar (but they also support mapping to a vector). I'm not sure anyone will need tensor-valued lookup values.
But we do want granularity in the sense that we can have one codebook per channel, grouped channel, tensor, etc.
Maybe this is what was originally intended for the block_size (based on https://github.com/pytorch/ao/pull/1299/files/53874a005cb174f764363a7c3a22f653ccf738df#r1870108715), but if I understand the code correctly, that's not what got implemented.
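To make the scalar-valued vs. tensor-valued distinction concrete, a small sketch (hypothetical shapes):

```python
import torch

k = 16  # 4-bit -> 16 codebook entries

codes = torch.randint(0, k, (8, 16))   # int64 indices

# Scalar-valued LUT (the CoreML default): each index maps to one scalar.
lut_scalar = torch.randn(k)            # (k,)
deq = lut_scalar[codes]                # (8, 16), same shape as codes

# Tensor-valued LUT (as in this op): each index maps to a (b1, b2) block.
lut_block = torch.randn(k, 2, 4)       # (k, b1, b2)
blocks = lut_block[codes]              # (8, 16, 2, 4)
```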
I think the scale_block_size arg in choose_qparams_codebook (or the shape of scales in the dequant op) is supposed to let us control the granularity. The block_size arg seems to have a different meaning than block_size in other ops, so we should probably rename it; my guess is that it's the block size of tensor values that share the same k-means cluster value.
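As a rough illustration of the per-channel granularity being discussed (purely hypothetical shapes, not the current API):

```python
import torch

out_ch, in_ch, k = 4, 64, 16

# One scalar-valued codebook per output channel:
codebooks = torch.randn(out_ch, k)            # (out_ch, k)
codes = torch.randint(0, k, (out_ch, in_ch))  # (out_ch, in_ch)

# Row i of `codes` indexes into row i of `codebooks`.
deq = torch.gather(codebooks, 1, codes)       # (out_ch, in_ch)
```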
oh wait, the granularity of the codebook is separate, let me take a look again
Stamping to unblock as discussed
force-pushed from 0f7fa57 to 7600e9b
Summary:
Register the codebook quant / dequant ops as custom ops so they can be recognized after export
Test Plan:
python test/prototype/test_codebook_quant.py -k test_export
Reviewers:
Subscribers:
Tasks:
Tags:
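A minimal sketch of the kind of check such an export test might do. This assumes the hypothetical myns::dequantize_codebook op from the sketch above has been registered; the actual test lives in test/prototype/test_codebook_quant.py:

```python
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, codes, codebook):
        return torch.ops.myns.dequantize_codebook(codes, codebook)

codes = torch.randint(0, 16, (4, 4), dtype=torch.int32)
codebook = torch.randn(16, 2, 4)
ep = export(M(), (codes, codebook))

# A registered custom op survives export as a single call_function node
# rather than being decomposed into its aten internals.
assert any("dequantize_codebook" in str(n.target) for n in ep.graph.nodes)
```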