Not able to instantiate Interpreter with converted model #74
Comments
Hello, I reproduced it with this:
Do you have the model before it was compiled?
I do and I can confirm that it works with the interpreter. Here it is.
@zye1996 hi, I discussed this issue with the team and ended up filing an internal bug to get this fixed. I'll keep you updated.
Did this ever end up being resolved? I'm having an identical issue myself.
@sheldoncoup apologies, this is still a WIP :(
@sheldoncoup @zye1996
I had a similar issue and was able to resolve it. My problem was that the representative dataset I used for post-training quantization referenced more images than I had provided in the images folder.

```python
def representative_data_gen():
    ...

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(keras_model)
# This ensures that if any ops can't be quantized, the converter throws an error
...
# These set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# And this sets the representative dataset so we can quantize the activations
converter.representative_dataset = representative_data_gen
```
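Building on the fix above, the robust pattern is to derive the generator from the files that actually exist on disk, so the representative dataset can never yield more samples than the image folder holds. A minimal sketch, where the directory layout, input shape (1, 240, 320, 3), and placeholder preprocessing are assumptions, not the original poster's exact code:

```python
import glob
import os

import numpy as np


def make_representative_data_gen(image_dir, num_samples):
    """Build a representative-dataset generator clamped to the images on disk.

    Requesting more samples than the folder actually contains was the
    cause of the conversion failure described above.
    """
    paths = sorted(glob.glob(os.path.join(image_dir, "*.jpg")))
    # Clamp to whichever is smaller: the requested count or the real file count.
    paths = paths[: min(num_samples, len(paths))]

    def representative_data_gen():
        for path in paths:
            # Placeholder preprocessing: a real pipeline would decode `path`
            # and resize/normalize it to the model's input shape.
            img = np.zeros((1, 240, 320, 3), dtype=np.float32)
            yield [img]

    return representative_data_gen
```

The returned callable can then be assigned to `converter.representative_dataset` as in the snippet above.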
@arshren Thanks for the report, I'm surprised that tflite conversion allows that to pass in the first place o_0 Anyhow, we found the issue and fixed it internally, although it didn't quite make the cut for the latest release. If you're having this issue, I can compile the model for you. @sheldoncoup @zye1996 here is your model + log:
@Namburger Glad to hear that the bug has been tracked down. I have a bunch of large (30MB+) models to be converted and have a lot of testing/reconverting to do in the near future, so it wouldn't be a great use of your time to do that for me.
Thank you so much! |
Hi, @Namburger I'm having the same issue. Could you please help me compile my tflite model as well? Here are the tflite models before and after compilation:
@vathsan97 I need the non edgetpu version before compilation, this one is already compiled |
@Namburger Please find the non-edgetpu version attached here
Hi @Namburger It would be great if I could get this model converted as well. Could I know when there will be a new release with this bug fixed?
@Sri-Butlr Sorry, just now saw this: https://drive.google.com/file/d/1iJ-sEhGRuu4Jnghl9WE3qP5vIhmnvNV_/view?usp=sharing |
@Namburger has a fix to this error been released? |
@BernardinD we are expecting a release in mid-Q4 which should include this fix!
Hi,
I compiled my tflite model with the Edge TPU compiler and then tried to instantiate an interpreter for inference, but it fails with:

```
ValueError: Found too many dimensions in the input array of operation 'reshape'.
```
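For context, the error is raised while constructing the interpreter, before any inference runs. A minimal sketch of the loading step, where the delegate library name `libedgetpu.so.1` and the model filename are assumptions based on the standard Coral setup on Linux, not the exact code used here:

```python
def load_edgetpu_interpreter(model_path, delegate_lib="libedgetpu.so.1"):
    """Load a compiled .tflite model with the Edge TPU delegate attached."""
    # Import inside the function so the sketch stays importable even on
    # machines without the Coral runtime installed.
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path=model_path,
        experimental_delegates=[tflite.load_delegate(delegate_lib)],
    )
    # With the model attached below, the ValueError above is raised during
    # interpreter construction / tensor allocation.
    interpreter.allocate_tensors()
    return interpreter
```

Calling `load_edgetpu_interpreter("retinaface_landmark_320_240_quant_edgetpu.tflite")` reproduces the failure with the attached model.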
Here is my compiled model and compile log:
```
Edge TPU Compiler version 2.0.291256449

Model compiled successfully in 161 ms.

Input model: retinaface_landmark_320_240_quant.tflite
Input size: 478.66KiB
Output model: retinaface_landmark_320_240_quant_edgetpu.tflite
Output size: 537.74KiB
On-chip memory available for caching model parameters: 7.69MiB
On-chip memory used for caching model parameters: 729.50KiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 90
Operation log: retinaface_landmark_320_240_quant_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.

Number of operations that will run on Edge TPU: 39
Number of operations that will run on CPU: 51

Operator           Count  Status
CONCATENATION      6      More than one subgraph is not supported
LEAKY_RELU         3      Operation is working on an unsupported data type
QUANTIZE           4      Operation is otherwise supported, but not mapped due to some unspecified limitation
QUANTIZE           3      Mapped to Edge TPU
QUANTIZE           8      More than one subgraph is not supported
PAD                5      Mapped to Edge TPU
RELU               3      More than one subgraph is not supported
CONV_2D            12     More than one subgraph is not supported
CONV_2D            19     Mapped to Edge TPU
DEPTHWISE_CONV_2D  12     Mapped to Edge TPU
RESHAPE            9      More than one subgraph is not supported
DEQUANTIZE         6      Operation is working on an unsupported data type
```
model.tflite.tar.gz