Conversation
Change-Id: I3d97ef80b7466d7555f4970e24f02e8dfba6be2b
@ZhennanQin Thanks for your contribution!
Can we modify the quantize_v2 operator to work with int8 dtypes? This way we don't limit quantization users to specific dtypes for their inference datasets, and we won't need an additional argument for the quantize_model API.
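As an illustration of the proposed direction, here is a minimal sketch of a type-inference function for quantize_v2 that also accepts int8 input; the function name and exact output layout are assumptions following common MXNet operator code, not code from this PR:

```cpp
#include <vector>
#include <dmlc/logging.h>
#include <mshadow/base.h>
#include <nnvm/node.h>

// Hypothetical FInferType for quantize_v2 that accepts int8 in addition to
// float32 (sketch only, not the actual patch). Outputs are the quantized
// data plus its min/max range, as in MXNet's quantization operators.
static bool QuantizeV2TypeSketch(const nnvm::NodeAttrs& attrs,
                                 std::vector<int>* in_attrs,
                                 std::vector<int>* out_attrs) {
  CHECK_EQ(in_attrs->size(), 1U);
  CHECK_EQ(out_attrs->size(), 3U);
  const int in_dtype = (*in_attrs)[0];
  // float32 input: normal quantization path; int8 input: pass-through.
  CHECK(in_dtype == -1 || in_dtype == mshadow::kFloat32 ||
        in_dtype == mshadow::kInt8)
      << "quantize_v2 only supports float32 or int8 input";
  (*out_attrs)[0] = mshadow::kInt8;     // quantized data
  (*out_attrs)[1] = mshadow::kFloat32;  // min range
  (*out_attrs)[2] = mshadow::kFloat32;  // max range
  return in_dtype != -1;  // types resolved once the input dtype is known
}
```

With this relaxation, the same quantized model works with either an fp32 or an int8 data loader, and quantize_model needs no extra dtype argument.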
@anirudh2290 Thanks for reviewing. That idea was in my head, but I was a bit worried that letting quantize_v2 accept int8 dtypes would look a bit weird and might not be accepted by the community. I have to say, that approach is good for user experience, as the user can use a single int8 model with both fp32 and int8 data loaders. If you think it's doable, I will refactor towards it.
Yes, I would prefer that, and I think it would be doable. We would need to take care of inplace optimization when the input and output are both int8.
@anirudh2290 I tried adding FInplaceOption and FInplaceIdentity to quantize_v2 to optimize the case where input and output are both int8, but it failed a check in the plan memory pass. It seems that data is a variable of this network, so its storage is not eligible for in-place reuse.
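For context, the registration being attempted presumably looked something like the sketch below (op and attribute names follow NNVM conventions; this is an illustration, not the actual diff). Note that both lambdas see only the static node attributes, not the inferred dtypes, so the in-place decision cannot easily be made conditional on the input actually being int8:

```cpp
#include <utility>
#include <vector>
#include <nnvm/node.h>
#include <nnvm/op.h>
#include <nnvm/op_attr_types.h>

// Hypothetical sketch: declare output 0 as an in-place identity candidate
// for input 0. This is only semantically valid when input and output are
// both int8, but dtype information is not available to these lambdas.
NNVM_REGISTER_OP(_contrib_quantize_v2)
.set_attr<nnvm::FInplaceOption>("FInplaceOption",
  [](const nnvm::NodeAttrs& attrs) {
    return std::vector<std::pair<int, int>>{{0, 0}};
  })
.set_attr<nnvm::FInplaceIdentity>("FInplaceIdentity",
  [](const nnvm::NodeAttrs& attrs) {
    return std::vector<bool>{true};  // the pair above is a pure identity
  });
```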
Apologies for the delay. I made an attempt on my end to make quantize use inplace optimization when the input dtype is also int8, but I couldn't find a way to save state in InferType to reuse it within the function registered for inplace optimization. Here is my attempt: anirudh2290@cc6bf9e. Also, I am not sure why an input var can't be inplaced and would like to reproduce this issue at my end. Would you be able to share your current code for the inplace optimization of the quantization op? If you are short on time, I think we can proceed with the quantization implementation without inplace optimization.
@anirudh2290 No need to apologize, you're one of the most active reviewers I've seen on the MXNet project :)
Thanks @ZhennanQin! Good to know that.
I guess there are good reasons for not allowing inplace operations on inputs. One reason could be that the data allocation for inputs is controlled by the user instead of the engine, so it should not be mutated. We could argue that identity operations should still be supported, but this means the memory cannot be reused if the identity is followed by an inplace operation. Also, the reference count for reused input memory would never drop to 0, which means it could not be recycled and reused elsewhere in the computation graph. In my view, even if we cannot do the inplace optimization, we should still add int8 support to the operator to make the user experience easier.
@anirudh2290 Thanks for taking a look at this. I agree with your point that user experience is important, so I will add the operator support without inplace optimization. In the future, we can implement another pass to delete such redundant ops and get this optimized.
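As a very rough sketch of what such a future cleanup pass could look like (all names here are assumptions for illustration, not code from this PR), one could walk the graph after dtype inference and rewire consumers of a pass-through quantize_v2 node directly to its int8 input:

```cpp
#include <mshadow/base.h>
#include <nnvm/graph.h>
#include <nnvm/graph_attr_types.h>

// Hypothetical "redundant quantize removal" pass (sketch only). Assumes
// quantize_v2 forwards int8 input unchanged and that type inference has
// already attached a "dtype" attribute to the graph. Caveats such as
// graph outputs pointing directly at the quantize node are not handled.
nnvm::Graph RemovePassThroughQuantize(nnvm::Graph&& g) {
  const auto& dtypes = g.GetAttr<nnvm::DTypeVector>("dtype");
  const auto& idx = g.indexed_graph();
  nnvm::DFSVisit(g.outputs, [&](const nnvm::NodePtr& n) {
    for (nnvm::NodeEntry& e : n->inputs) {
      if (!e.node->is_variable() &&
          e.node->op()->name == "_contrib_quantize_v2" &&
          e.index == 0) {  // only the data output, not min/max
        const nnvm::NodeEntry& in = e.node->inputs[0];
        if (dtypes[idx.entry_id(in)] == mshadow::kInt8) {
          e = in;  // bypass the now-redundant quantize node
        }
      }
    }
  });
  return std::move(g);
}
```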
PR is updated to address all comments received. Please review again. Thanks. @anirudh2290 @zhreshold @reminisce |
Looks like other comments have been addressed. Merging now. |
* Enable int8 data layer (Change-Id: I3d97ef80b7466d7555f4970e24f02e8dfba6be2b)
* fix lint
* Add parameter description
* Fix imagenet_inference.py
* Allow quantize_v2 to accept int8
* make float32 default
@reminisce @zheng-da @szha @KellenSunderland @pengzhao-intel @TaoLv
PR major change: allow quantize_v2 to accept int8 input (with float32 as the default), enabling an int8 data layer for quantized inference.