Using convert_weights_to_tf_lite.py does not produce the same results as the pretrained model #68
I used this project to retrain on the DNS-Challenge dataset. After training finished, I tried to convert the model using convert_weights_to_tf_lite.py, but the models I produced (_1.tflite, 372 KB; _2.tflite, 641 KB) did not match the pretrained ones (model_quant_1.tflite, 369 KB; model_quant_2.tflite, 635 KB). I then tried converting the pretrained Keras models (model.h5 and the other two .h5 models) to TFLite, and those did not match the pretrained .tflite files either. Is there any suggestion for me? I wonder why I cannot convert the pretrained model and get consistent results. I used TF 2.10.0, which should be the last version supported on Windows, and ran the conversion on CPU only.

Comments

I checked the model information of the pretrained quantized model and my converted quantized model, and they are completely different. Here is the converted quantized model:

Thanks for your help, I will try to fix this.
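As a first sanity check on the mismatch described above, it helps to compare the files at the byte level before digging into the converter. The following is a minimal stdlib-only sketch (the file paths and function names are hypothetical, not from the project):

```python
import hashlib
from pathlib import Path


def summarize_tflite(path):
    """Return the size and SHA-256 digest of a .tflite file for quick comparison."""
    data = Path(path).read_bytes()
    return {"size_bytes": len(data), "sha256": hashlib.sha256(data).hexdigest()}


def same_model_bytes(path_a, path_b):
    """True only if the two files are byte-identical."""
    return summarize_tflite(path_a)["sha256"] == summarize_tflite(path_b)["sha256"]
```

Note that byte-identical output across TensorFlow versions is not guaranteed: converter defaults and operator lowering change between releases, so a few-KB size difference alone does not prove a broken model. A stronger check is to run both `.tflite` files through `tf.lite.Interpreter` on the same input and compare the outputs numerically.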