Added support for tflite quantized maximum and minimum #6018
Conversation
Force-pushed from 04fc45e to 1c10c7f
@anijain2305 please follow up
@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
         # convert to tflite model
         converter = tf.lite.TFLiteConverter.from_session(
             sess, input_tensors, output_tensors)

+        converter.experimental_new_converter=experimental_new_converter
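For readers skimming the diff: the change threads one new keyword through the test helper and forwards it to the converter. Below is a minimal, hedged sketch of that plumbing; the real compare_tflite_with_tvm in TVM's TFLite frontend tests has a much larger signature and body, and the tf.compat.v1 spelling here is an assumption for newer TensorFlow versions (the diff itself uses the TF1-style tf.lite path).

```python
# Hedged sketch of the keyword plumbing; everything outside the converter
# call is abridged or assumed for illustration.
import tensorflow as tf

def compare_tflite_with_tvm(in_data, in_name, input_tensors, output_tensors,
                            experimental_new_converter=False):
    # Assumes the caller has already built the graph holding input_tensors
    # and output_tensors in the default graph, as the real helper does.
    with tf.compat.v1.Session() as sess:
        # convert to tflite model (TF1-style session API, as in the diff)
        converter = tf.compat.v1.lite.TFLiteConverter.from_session(
            sess, input_tensors, output_tensors)
        # Opt into the MLIR-based converter only when a test asks for it,
        # so existing TOCO-era tests keep their previous behaviour.
        converter.experimental_new_converter = experimental_new_converter
        tflite_model_buf = converter.convert()
        # ... feed tflite_model_buf to the TFLite interpreter and to TVM,
        # then compare the outputs (omitted here).
        return tflite_model_buf
```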
Why do we need the experimental converter?
The quantised versions of maximum and minimum cannot be converted using TOCO (TOCO is no longer supported).
In that case, I would suggest having the test but skipping it for now. When we upgrade TFLite to a newer version, we can enable the test.
The MLIR-based converter (experimental_new_converter=True) is already in use in TensorFlow and in TFLite. Why is there a need to postpone?
Doesn't the term "experimental" suggest that the feature is not mature yet? Typically, I have seen experimental features go through code churn; they can be deprecated, and the API may also change before it matures. This is the main reason I am suggesting not to use it.
I understand that it is no longer an experimental feature; only the name "experimental_new_converter" was preserved. I don't see any harm in using this feature and having this test, especially if we plan to migrate to a newer version of TFLite.
I am still not fully comfortable with this. https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter shows that the API is subject to change. What do @u99127 @siju-samuel think about this?
@anijain2305 - sorry, for some reason GitHub is refusing to send email notifications when I am tagged here :( which, in addition to the holidays, is the reason for my delay in responding.
AFAIUI there is no way of freezing a tflite model that contains quantized max or min using the TOCO converter, and thus we need to use the API in that form to get the testsuite coverage. While the API is "subject to change", from my pov it is only used in the testsuite; we aren't using it in the main code base, and thus using it is less risky.
Note also that the tflite converter in tensorflow now defaults to the MLIR-based converter as per the latest docs, so this use is still a conservative move forward: we are sticking to the default while gaining additional operator coverage.
Does that help?
Ramana
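To make the constraint concrete, here is an illustrative sketch of the kind of model the test needs to freeze: a fake-quantized MAXIMUM converted with the MLIR-based converter. This is not the PR's test code; the tensor names, shapes, quantization ranges, and input stats are assumptions chosen for the example. The point is only that converter.experimental_new_converter = True enables the quantized MAXIMUM/MINIMUM conversion that TOCO rejects.

```python
# Illustrative sketch (assumed names/ranges, not the PR's test code):
# freeze a fake-quantized MAXIMUM with the MLIR-based converter.
import tensorflow as tf

with tf.Graph().as_default(), tf.compat.v1.Session() as sess:
    a = tf.compat.v1.placeholder(tf.float32, shape=(1, 8), name="a")
    b = tf.compat.v1.placeholder(tf.float32, shape=(1, 8), name="b")
    aq = tf.quantization.fake_quant_with_min_max_args(a, min=-100, max=100, name="aq")
    bq = tf.quantization.fake_quant_with_min_max_args(b, min=-50, max=50, name="bq")
    out = tf.quantization.fake_quant_with_min_max_args(
        tf.math.maximum(aq, bq), min=-100, max=100, name="out")

    converter = tf.compat.v1.lite.TFLiteConverter.from_session(sess, [a, b], [out])
    converter.inference_type = tf.uint8                      # request a fully quantized model
    converter.quantized_input_stats = {"a": (128.0, 1.275),  # (mean, std_dev) -- illustrative
                                       "b": (128.0, 2.55)}   # values approximating the ranges above
    converter.experimental_new_converter = True              # TOCO cannot convert quantized MAXIMUM
    tflite_model = converter.convert()                       # bytes of the quantized .tflite model
```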
Ok, I understand. If a CI upgrade to a future TFLite version causes problems, we can remove the flag.
@siju-samuel @FrozenGene Please review when you get time
Bugfix in unit test. Unit test slightly simplified
Force-pushed from 1c10c7f to 5dde3b5
Thanks @d-smirnov @u99127. This is merged.
* Added support for tflite quantized maximum and minimum
* Unit test simplified. Bugfix in unit test. Unit test slightly simplified
* re-trigger CI
* renamed use_real_qnn to ignore_qnn_params
Added support for tflite quantized maximum and minimum