
TFLite failures resulted from TF latest version upgrade resolved #6774

Merged
2 commits merged into apache:main on Oct 29, 2020

Conversation

ANSHUMAN87
Contributor

This PR addresses the following 2 points:

  • TFLite frontend version support upgraded from 2.1 to 2.3

  • Resolved test failures caused by the TensorFlow version upgrade in CI from 2.1.0 to 2.3.1

cc @tqchen

@tqchen
Member

tqchen commented Oct 27, 2020

cc @leandron

@u99127 (Contributor) left a comment

Some small changes please.

tests/python/frontend/tflite/test_forward.py (2 review threads, outdated, resolved)
Comment on lines -2873 to +2900
-        if quantized:
-            if package_version.parse(tf.VERSION) < package_version.parse("2.1.0"):
-                pytest.skip("Testcase requires tflite version >= 2.1.0")
-            data_in = tf.keras.layers.Input(shape=data.shape[1:])
-            relu = tf.keras.layers.ReLU()(data_in)
-            keras_model = tf.keras.models.Model(inputs=data_in, outputs=relu)
-            input_name = data_in.name.split(":")[0]
-
-            # To create quantized values with dynamic range of activations, needs representative dataset
-            def representative_data_gen():
-                for i in range(1):
-                    yield [data]
-
-            tflite_model_quant = _quantize_keras_model(keras_model, representative_data_gen)
-
-            tflite_output = run_tflite_graph(tflite_model_quant, data)
-            tvm_output = run_tvm_graph(tflite_model_quant, data, input_name)
-            tvm.testing.assert_allclose(
-                np.squeeze(tvm_output[0]), np.squeeze(tflite_output[0]), rtol=1e-5, atol=1e-5
-            )
-        else:
-            with tf.Graph().as_default():
-                in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
+        with tf.Graph().as_default():
+            in_data = array_ops.placeholder(shape=data.shape, dtype="float32", name="in_0")
+
+            if quantized:
+                inq_data = tf.quantization.fake_quant_with_min_max_args(
+                    in_data, min=-10, max=10, name="inq_0"
+                )
+                input_range = {"inq_0": (-10, 10)}
+                out = nn_ops.relu(inq_data)
+                out = tf.quantization.fake_quant_with_min_max_args(out, min=0, max=6, name="out")
+                compare_tflite_with_tvm(
+                    data, "inq_0:0", [inq_data], [out], quantized=True, input_range=input_range
+                )
+            else:
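As an aside for readers unfamiliar with the fake-quant pattern used in the new code: `tf.quantization.fake_quant_with_min_max_args` simulates 8-bit quantization inside a float graph by snapping each value onto a uniform 255-step grid over `[min, max]`. A simplified pure-Python sketch of that behavior (TF additionally "nudges" the range so 0.0 is exactly representable; that detail is omitted here, so this is not a bit-exact reimplementation) might look like:

```python
# Simplified model of fake-quantization: clamp to [min_val, max_val], round
# onto a uniform 8-bit grid, and return the nearest representable float.

def fake_quant(x, min_val, max_val, num_bits=8):
    levels = 2 ** num_bits - 1                 # 255 steps for 8 bits
    scale = (max_val - min_val) / levels       # real-value width of one step
    x = max(min_val, min(max_val, x))          # clamp into the quantized range
    q = round((x - min_val) / scale)           # integer code in 0..levels
    return min_val + q * scale                 # back to the nearest grid value

# With the (-10, 10) range used in the test, out-of-range values clamp and
# in-range values move to the nearest of 256 representable points.
print(fake_quant(3.14159, -10, 10))  # a value near 3.1765
print(fake_quant(42.0, -10, 10))     # clamps to (approximately) 10.0
```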
Contributor

Could you give a hint as to the reason for these changes? Is there any reason why the existing mechanism doesn't work in your research?

Contributor

Yes, generally, we want to move towards Keras quantization as TF gets more mature. What issues do you see?

Contributor Author

The issue here was that the test case was not behaving as expected. When we need a quantized graph with quantized inference, we should provide quantized input (uint8 or int8, depending on the settings chosen). But here it was feeding float input to the graph, hence the failure. Also, the current change keeps the test steps uniform with those used for the other operators. Any specific reason to move towards Keras quantization rather than TF quant?

Please let me know if we still need to keep Keras quant. Thanks!
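The point about feeding float data into a quantized graph can be illustrated with a toy sketch (the helper names below are hypothetical, not TVM or TFLite API): a quantized uint8 tensor stores integer codes related to real values by `real = scale * (q - zero_point)`, so a raw float passed where a uint8 code is expected gets reinterpreted as a very different value.

```python
# Toy illustration (hypothetical helpers, not TVM/TFLite API): a quantized
# uint8 tensor stores codes q with real = scale * (q - zero_point).

def quantize(x, scale, zero_point):
    """Float value -> uint8 code expected by a quantized graph."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp into the uint8 range

def dequantize(q, scale, zero_point):
    """uint8 code -> the real value it represents."""
    return scale * (q - zero_point)

# For an input range of (-10, 10), as in the test above, one uint8 mapping is:
scale, zero_point = 20.0 / 255.0, 128

q = quantize(2.5, scale, zero_point)        # the code the graph expects (160)
x_back = dequantize(q, scale, zero_point)   # ~2.5, up to rounding error

# Feeding the raw float 2.5 as if it were already a code would be read as
# uint8 value 2, which the graph interprets as roughly -9.88, not 2.5;
# this is the kind of mismatch that made the quantized tests fail.
misread = dequantize(2, scale, zero_point)
```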

@anijain2305 anijain2305 self-requested a review October 27, 2020 21:14
@tqchen
Member

tqchen commented Oct 28, 2020

@leandron please confirm if it fixes your problem and we can proceed with a merge versus revert

@leandron
Contributor

@leandron please confirm if it fixes your problem and we can proceed with a merge versus revert

I re-ran the tests and it seems this PR fixes them all. Thanks @ANSHUMAN87.
I'll close the revert PR.

@ANSHUMAN87
Contributor Author


Thanks @leandron for confirmation.

@u99127
Contributor

u99127 commented Oct 29, 2020

Is there a committer around who might be able to merge this in now?

@tqchen tqchen merged commit c94623a into apache:main Oct 29, 2020
@ANSHUMAN87
Contributor Author

Thanks @u99127 , @tqchen , @leandron , @anijain2305 !

@ANSHUMAN87
Contributor Author

Issue Tracker: #6791
cc @u99127

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 2, 2020
…che#6774)

* TFLite failures resulted from TF latest version upgrade resolved

* [1] Review comments handled
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 4, 2020
…che#6774)

* TFLite failures resulted from TF latest version upgrade resolved

* [1] Review comments handled
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Dec 4, 2020
…che#6774)

* TFLite failures resulted from TF latest version upgrade resolved

* [1] Review comments handled