Hi AIMET team,
I'd like to know: when I use AIMET to generate the encoding file and then pass it to the QNN converter via the --quantization_overrides flag, will the resulting NPU-backend model lose additional accuracy? For example, if the loss in AIMET simulation is 1.1, the QNN HTP backend may show 1.3~1.4.
I ask because I was using Qualcomm's GenAI notebook to run AIMET quantization on a model, and I found that the loss of the actually generated model was higher than what AIMET reported.
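For context, here is a hedged sketch of the workflow being described. The file names are illustrative, and the exact converter flags may differ across QNN SDK versions:

```shell
# Sketch only: file names are placeholders; consult your QNN SDK docs
# for the exact converter invocation.
#
# 1. In AIMET (Python), export the quantized model, which produces the
#    model file plus an encodings JSON (e.g. model.onnx, model.encodings).
# 2. Pass the AIMET encodings to the QNN converter so the HTP backend
#    reuses them instead of computing its own quantization parameters:
qnn-onnx-converter \
    --input_network model.onnx \
    --quantization_overrides model.encodings \
    --output_path model.cpp
```

The accuracy question above is whether, even with these overrides applied, the on-target HTP execution can still diverge from the AIMET simulation results.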