Tensorflow Lite Model Output #533
Hey! Maybe it is correlated to
Hi @InfiniteLife, it looks like that's because your app is expecting [1, 100, 4], while our model outputs [1, 100, 7]. You may want to take a look at #529 (comment). Notably, our model output format is [image_id, ymin, xmin, ymax, xmax, score, class] (see here).
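To make the [image_id, ymin, xmin, ymax, xmax, score, class] layout concrete, here is a small NumPy sketch that splits the [1, 100, 7] tensor into boxes, scores, and classes. The function name and score threshold are illustrative, not part of this repo:

```python
import numpy as np

def parse_detections(output_data, score_threshold=0.4):
    """Split a [1, 100, 7] EfficientDet TFLite output into boxes, scores, classes.

    Each row of the tensor is assumed to be
    [image_id, ymin, xmin, ymax, xmax, score, class].
    """
    dets = output_data[0]                 # drop the batch dim -> [100, 7]
    keep = dets[:, 5] >= score_threshold  # filter by confidence score
    boxes = dets[keep, 1:5]               # [ymin, xmin, ymax, xmax]
    scores = dets[keep, 5]
    classes = dets[keep, 6].astype(int)
    return boxes, scores, classes

# Tiny fabricated example: two detections, one below the threshold.
fake = np.zeros((1, 100, 7), dtype=np.float32)
fake[0, 0] = [0, 0.1, 0.2, 0.5, 0.6, 0.9, 3]
fake[0, 1] = [0, 0.0, 0.0, 0.3, 0.3, 0.1, 1]
boxes, scores, classes = parse_detections(fake)
print(len(scores))  # -> 1, only the high-confidence detection survives
```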
Yeah, I was able to run it on Android-cpu, currently trying to run it on Android-gpu.
Hi @InfiniteLife, could you point me to the changes required in the Android cpu/gpu app?
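Not the actual Android change (which would live in the app's Java/Kotlin post-processing), but as a rough host-side sketch: the stock TFLite detection demo expects four output tensors (boxes, classes, scores, count), so the single [1, 100, 7] output would need to be split along those lines. The function below is a hypothetical illustration in NumPy:

```python
import numpy as np

def to_android_demo_format(output_data):
    """Reshape EfficientDet's single [1, 100, 7] output into the four
    tensors the stock TFLite detection demo expects:
    boxes [1, N, 4], classes [1, N], scores [1, N], count [1]."""
    dets = output_data[0]                   # [100, 7]
    boxes = dets[np.newaxis, :, 1:5]        # [ymin, xmin, ymax, xmax]
    classes = dets[np.newaxis, :, 6]
    scores = dets[np.newaxis, :, 5]
    count = np.array([dets.shape[0]], dtype=np.float32)
    return boxes, classes, scores, count

# Example with a dummy output tensor of the model's shape.
fake = np.zeros((1, 100, 7), dtype=np.float32)
boxes, classes, scores, count = to_android_demo_format(fake)
print(boxes.shape, classes.shape, scores.shape, count.shape)
```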
I don't think this is a problem with the EfficientDet model itself. The structure of the model and of the inference results, converted using TensorFlow v2.2.0, is shown below:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="efficientdet_d0_512x512_weight_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print('input:', input_details)
print('')
print('output:', output_details)

# Run inference on random input data to check the actual output shape.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

output_data = interpreter.get_tensor(output_details[0]['index'])
print('output_data.shape:', output_data.shape)
```

It may not be helpful, but here's my collection of quantized tflites.
@PINTO0309, thanks for your comments. Perhaps it's an issue with Netron. I'll investigate a bit more on my side to confirm what you're seeing.
I'm using 2.3.0.dev20200617, and have run the recommended command to create a TF Lite model:

```shell
!python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
  --ckpt_path=efficientdet-d0 --saved_model_dir=savedmodeldir \
  --tensorrt=FP32 --tflite_path=efficientdet-d0.tflite
```
The .tflite file is created successfully. However, when viewing it in Netron, its output is 1x1x7, which isn't what I expected. Shouldn't there be a much larger tensor with classes, probabilities, and box coordinates?