Added support for tflite quantized maximum and minimum #6018

Merged: 4 commits merged into apache:master on Aug 14, 2020

Conversation

d-smirnov (Contributor):

Added support for tflite quantized maximum and minimum

tqchen (Member) commented Jul 25, 2020:

@anijain2305 please follow up

@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
        # convert to tflite model
        converter = tf.lite.TFLiteConverter.from_session(
            sess, input_tensors, output_tensors)
        converter.experimental_new_converter = experimental_new_converter
Contributor:

Why do we need experimental?

Contributor (author):

The quantised versions of maximum and minimum cannot be converted using TOCO (TOCO is no longer supported).
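For context, here is a minimal sketch (not the PR's actual test code) of the conversion path under discussion: a quantized maximum built with TF 1.x-style fake-quant nodes and routed through the MLIR-based converter. The shapes, quantization ranges, and input stats below are illustrative assumptions.

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    with tf.Session() as sess:
        a = tf.placeholder(tf.float32, shape=(1, 4), name="a")
        b = tf.placeholder(tf.float32, shape=(1, 4), name="b")
        # Fake-quant nodes tell the converter how the inputs are quantized;
        # the [-3, 3] range is an illustrative assumption.
        qa = tf.quantization.fake_quant_with_min_max_args(a, min=-3.0, max=3.0)
        qb = tf.quantization.fake_quant_with_min_max_args(b, min=-3.0, max=3.0)
        out = tf.math.maximum(qa, qb)

        converter = tf.lite.TFLiteConverter.from_session(sess, [a, b], [out])
        # TOCO rejects quantized MAXIMUM/MINIMUM; route through the
        # MLIR-based converter instead.
        converter.experimental_new_converter = True
        converter.inference_type = tf.uint8
        # (mean, std_dev) per input, matching the fake-quant range above.
        converter.quantized_input_stats = {"a": (127.5, 42.5), "b": (127.5, 42.5)}
        tflite_model = converter.convert()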

Contributor:

In that case, I would suggest keeping the test but skipping it for now. When we upgrade TFLite to a newer version, we can enable the test.

Contributor (author):

The MLIR-based converter (experimental_new_converter=True) is already in use in TensorFlow and in TFLite. Why is there a need to postpone?

Contributor:

Doesn't the term "experimental" suggest that the feature is not yet mature? Typically, experimental features go through code churn, can be deprecated, and their APIs may change before they mature. This is the main reason I am suggesting not to use it.

Contributor (author):

My understanding is that it is no longer an experimental feature; only the name "experimental_new_converter" was preserved. I see no harm in using this feature and keeping this test, especially if we plan to migrate to a newer version of TFLite.

Contributor:

I am still not fully comfortable with this. https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter shows that the API is subject to change. What do @u99127 @siju-samuel think about this?

Contributor:

@anijain2305 - sorry, for some reason GitHub is refusing to send email notifications when I am tagged here :( which, in addition to holidays, is the reason for my delay in responding.

AFAIUI there is no way of freezing a tflite model that contains quantized max or min using the TOCO converter, so we need to use the API in this form to get test-suite coverage. While the API is "subject to change", from my point of view it is only used in the test suite; we are not using it in the main code base, so the risk is low.

Note also that the TFLite converter in TensorFlow now defaults to the MLIR-based converter per the latest docs, so this use is still a conservative step forward: we stick to the default while gaining additional operator coverage.

Does that help?

Ramana
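To illustrate the point about the default (a hedged check, not part of the PR; assumes TensorFlow 2.2 or later, where the MLIR-based converter became the default):

    import tensorflow as tf

    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def identity(x):
        # Trivial model, used only to obtain a converter instance.
        return x

    converter = tf.lite.TFLiteConverter.from_concrete_functions(
        [identity.get_concrete_function()])
    # The MLIR-based converter is the default on recent releases:
    print(converter.experimental_new_converter)  # expected: True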

Contributor:

OK, I understand. If a CI upgrade to a future TFLite version causes problems, we can remove the flag.

anijain2305 (Contributor):

@siju-samuel @FrozenGene Please review when you get time

anijain2305 merged commit aa0271e into apache:master on Aug 14, 2020
anijain2305 (Contributor):

Thanks @d-smirnov @u99127 This is merged

trevor-m pushed a commit referencing this pull request to trevor-m/tvm (Aug 26 and Sep 2, 2020) and to neo-ai/tvm (Sep 3, 2020):

* Added support for tflite quantized maximum and minimum

* Unit test simplified

  Bugfix in unit test; unit test slightly simplified.

* re-trigger CI

* renamed use_real_qnn to ignore_qnn_params