Hi. The function `model_save_quantized_weights` returns a dictionary of the quantized weights. Have you checked whether that dictionary actually contains correctly quantized weights?
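One way to check is to test whether a weight tensor lies on an int8 grid for a power-of-two scale. The helper below is a hypothetical sketch for that check, not part of the qkeras API:

```python
import numpy as np

# Hypothetical helper (not qkeras API): True if every value in the float
# array w equals k / 2**(bits-1) for some integer k with |k| <= 2**(bits-1).
def on_int8_grid(w, bits=8):
    scale = 2.0 ** (bits - 1)
    q = w * scale
    return bool(np.allclose(q, np.round(q)) and np.all(np.abs(np.round(q)) <= scale))

print(on_int8_grid(np.array([0.125, -0.5703125])))  # True: multiples of 1/128
print(on_int8_grid(np.array([0.1234])))             # False: off the grid
```

Running this on the arrays in the returned dictionary would tell you whether the values are quantized even though their dtype is float.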
I'm trying to quantize the LSTM network from your notebook: https://github.com/google/qkeras/blob/eb6e0dc86c43128c6708988d9cb54d1e106685a4/notebook/QRNNTutorial.ipynb.
After seeing this issue I've changed the config file to look like this:
I train this model and then apply the `model_save_quantized_weights` function. But when I print the weights, they are still in floating point:
The example of printed weights:
Could you please guide me on what I should do to get int8 weights?
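For reference, quantization-aware training typically keeps weights in float dtype while restricting them to a small grid of levels, so float-looking printouts can still be quantized. A minimal sketch of 8-bit fake quantization (illustrative only, not the qkeras implementation):

```python
import numpy as np

# Illustrative 8-bit fake quantization: round onto a grid of 2**8 levels,
# then divide back, so the result stays float but takes only 256 values.
def fake_quantize(w, bits=8):
    scale = 2 ** (bits - 1)                       # 128 for 8 bits
    return np.clip(np.round(w * scale), -scale, scale - 1) / scale

w = np.array([0.1234, -0.5678, 0.9999])
print(fake_quantize(w))  # [0.125, -0.5703125, 0.9921875] -- float dtype, int8 grid
```

Multiplying such weights by `2**(bits-1)` would recover the underlying integer codes.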