[QNN] Channel wise quantization - Quantize & Requantize #4629
Conversation
Force-pushed cf1fba5 to d692126.
Thanks @anijain2305 for the PR, I've added some comments.
Force-pushed d692126 to bfa50a7.
LGTM. Several code style comments :)
auto expanded_output_scale = output_scale;
if (!IsConstScalar(output_scale)) {
  expanded_output_scale = ExpandBiasToMatchAxis(output_scale, n_dim, {axis});
}
What about renaming ExpandBiasToMatchAxis to something more generic?
This helper is used in many other passes. We can keep this name for now. And then send a separate PR for better naming across all those files.
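To illustrate what the helper discussed above does, here is a minimal, hypothetical sketch of the shape computation behind an ExpandBiasToMatchAxis-style expansion: a per-channel scale of length C is reshaped so it broadcasts against an n_dim-rank tensor along `axis`. This is a simplified analogue over plain vectors, not TVM's actual Relay-level implementation; the function name ExpandScaleShape is invented for this example.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: compute the broadcastable shape for a per-channel
// scale. All dimensions are set to 1 except `axis`, which keeps the channel
// extent, so elementwise ops broadcast the scale across the other axes.
std::vector<int> ExpandScaleShape(std::size_t n_dim, std::size_t axis, int channels) {
  std::vector<int> shape(n_dim, 1);  // an all-ones shape broadcasts freely
  shape[axis] = channels;            // preserve the channel extent on `axis`
  return shape;
}
```

For an NCHW activation with 16 channels quantized along axis 1, this yields the shape (1, 16, 1, 1), so the scale lines up with the channel dimension.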
Force-pushed bfa50a7 to 5a94896.
Thanks for the helpful comments @jackwish. All the comments were very good and have been addressed :)
Thanks @anijain2305 , LGTM
@tmoreau89 Could you please manage the PR?
@zhiics on it, thanks
Force-pushed 5a94896 to 602965f.
@shoubhik Thanks for the comments. Addressed.
Thank you @anijain2305, @shoubhik, @jackwish; the PR has been merged.
.describe("The output channel axis for channel wise quantization. Default value is -1,"
          "which corresponds to the last axis.")
I don't have an opinion on whether this is the right approach or not, but the question that popped into my mind was: why not be more explicit and say that axis == -1 actually means per-tensor quantization?
Thanks,
Ramana
Never mind, later in the thread things are a bit clearer. The axis of -1 is fine; it is part of the per-channel quantization scheme.
In some sense I would have preferred to see an IsPerChannelQuantized predicate, rather than reusing the IsConstScalar predicate further below, but that's a code hygiene thing.
And yes, sorry about the slow response, I'm a bit snowed under.
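The predicate suggested above could be as simple as checking whether the quantization scale carries more than one element. This is a hypothetical sketch over a plain vector of scales, not TVM's Expr-based IsConstScalar check; the name IsPerChannelQuantized follows the suggestion in the comment.

```cpp
#include <vector>

// Hypothetical sketch: a parameter is per-channel quantized when its scale
// holds one value per channel rather than a single scalar for the tensor.
bool IsPerChannelQuantized(const std::vector<float>& scale) {
  return scale.size() > 1;
}
```

A predicate like this would make call sites read as intent ("is this per-channel?") rather than mechanism ("is this a constant scalar?"), which is the code-hygiene point being made.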
RFC - https://discuss.tvm.ai/t/qnn-channel-wise-quantization/5300