
Assistance Needed for Saving ONNX Model in QDQ or QOperator Formats #3220

Open
tuanbos opened this issue Aug 1, 2024 · 5 comments

Comments


tuanbos commented Aug 1, 2024

Hi authors,

Thank you for your excellent work.

Currently, I don't see an option to save the ONNX model in QDQ or QOperator format after it has been quantized. I am using an ONNX model as input, and the exported output is the FP32 ONNX model together with the encoding information (scale, offset).

Could you please show me how to obtain a QDQ or QOperator ONNX model from the FP32 model and the encoding information?

Thank you very much, and I look forward to your response.


e-said commented Aug 1, 2024

Hello @tuanbos,

In the AIMET ONNX export method, you need to set use_embedded_encodings to True to get the ONNX model with QDQ nodes.

Please note that this feature is currently supported for int8 QDQ nodes only.


tuanbos commented Aug 1, 2024

Hi,

Thank you for your response.

I understand that use_embedded_encodings is only available when converting a model from PyTorch and saving the output to ONNX. This feature is implemented in the QuantizationSimModel class for PyTorch, as seen here: QuantizationSimModel for PyTorch.
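
For illustration, the PyTorch path looks roughly like this (a sketch based on my understanding; the model, input shape, and calibration function are placeholders, and the exact export signature may differ between AIMET versions):

```python
import torch
from aimet_torch.quantsim import QuantizationSimModel

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())  # placeholder model
dummy_input = torch.randn(1, 3, 224, 224)                               # placeholder input

sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)

# Calibration: run representative data through the sim model so ranges can be computed
def calibration_fn(sim_model, _):
    sim_model(dummy_input)

sim.compute_encodings(calibration_fn, forward_pass_callback_args=None)

# use_embedded_encodings=True makes the exported .onnx carry QDQ nodes
# instead of the usual FP32 .onnx plus a separate .encodings file
sim.export(path='./export', filename_prefix='model_qdq',
           dummy_input=dummy_input, use_embedded_encodings=True)
```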

For ONNX model input, I need to use the QuantizationSimModel for ONNX, which can be found here: QuantizationSimModel for ONNX. However, this class does not yet support exporting to QDQ or QOperator.
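
By contrast, the ONNX path I am running is roughly the following (again a sketch; the constructor and callback arguments are simplified and may not match the installed version exactly), and its export only produces the FP32 .onnx plus a .encodings file:

```python
import numpy as np
import onnx
from aimet_onnx.quantsim import QuantizationSimModel

onnx_model = onnx.load('model_fp32.onnx')      # placeholder path
sim = QuantizationSimModel(model=onnx_model)   # arguments simplified

# Calibration: feed representative inputs through the session AIMET provides
sample_input = np.random.randn(1, 3, 224, 224).astype(np.float32)   # placeholder data
def calibration_fn(session, _):
    session.run(None, {'input': sample_input})                      # 'input' is a placeholder name

sim.compute_encodings(calibration_fn, forward_pass_callback_args=None)

# What export produces today: model.onnx (still FP32 ops) plus model.encodings
# (scale/offset per tensor) -- no native QDQ or QOperator nodes in the graph
sim.export(path='./export', filename_prefix='model')
```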

Is my understanding correct?


e-said commented Aug 1, 2024

Yes, your understanding is correct. My bad, I didn't notice that you are using aimet_onnx.

To be honest, I don't use aimet_onnx, but from the code it seems there is no option at the moment to generate the model with QDQ nodes.
Did you try commenting out this line?

I hope that helps.


tuanbos commented Aug 1, 2024

Hi,

Yes, we already tried that, but the exported model is still in the AIMET format.
[image attached]

The nodes look QDQ-like, but they are not native ONNX QDQ nodes.
Do you have any other ideas?


e-said commented Aug 1, 2024

Yes, they are not native QDQ nodes. You could try doing something similar to what was implemented in aimet_torch here.
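
The gist of that approach is to replace each AIMET quantizer with a native QuantizeLinear/DequantizeLinear pair built from the exported scale and offset values. A minimal sketch with the onnx helper API (the tensor name and encoding values are hypothetical; a real pass would iterate over the whole .encodings file and rewire every consumer of each quantized tensor):

```python
import onnx
from onnx import helper, TensorProto

model = onnx.load('model.onnx')            # FP32 model exported by aimet_onnx
graph = model.graph

# Hypothetical values taken from the .encodings file for one activation tensor
tensor_name = 'conv1_output'               # placeholder tensor name
scale, zero_point = 0.0123, 0              # placeholder int8 encoding

scale_init = helper.make_tensor(f'{tensor_name}_scale', TensorProto.FLOAT, [], [scale])
zp_init = helper.make_tensor(f'{tensor_name}_zp', TensorProto.INT8, [], [zero_point])
graph.initializer.extend([scale_init, zp_init])

# Insert a QuantizeLinear/DequantizeLinear pair after the tensor
q_node = helper.make_node(
    'QuantizeLinear',
    inputs=[tensor_name, f'{tensor_name}_scale', f'{tensor_name}_zp'],
    outputs=[f'{tensor_name}_q'])
dq_node = helper.make_node(
    'DequantizeLinear',
    inputs=[f'{tensor_name}_q', f'{tensor_name}_scale', f'{tensor_name}_zp'],
    outputs=[f'{tensor_name}_dq'])
graph.node.extend([q_node, dq_node])

# Remaining work (not shown): repoint every consumer of tensor_name to
# the _dq output, repeat for all encodings, then validate and save.
onnx.checker.check_model(model)
onnx.save(model, 'model_qdq.onnx')
```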
