Cannot convert imglib.Image to InputImage: the resulting image is not what it should be #536
I just pushed the .tflite model to use for the repro (that way, it is not necessary to run the Python script to create the model): https://github.com/andynewman10/testrepo/blob/main/testnet.tflite
I did manage to read the Android code performing the inference, but I had to decompile the AAR; somehow it's very difficult to find the code on GitHub sometimes. Anyway, as @fbernaly mentioned (thank you!), the ML Kit code dealing with image conversion is worth reading. Right away I see that the only supported format for the Flutter package is nv21, so the code that I pasted above cannot work as written. Looking at the code I also discovered that the minimal image size is 32x32 (which is the size I am using, phew...). So I went ahead with an RGB to NV21 converter.
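The original Dart snippet was not preserved in this copy of the thread. As an illustration only, here is a minimal NumPy sketch of the same kind of RGB-to-NV21 conversion; the BT.601 full-range coefficients and 2x2 chroma subsampling are assumptions, matching what most converters found online use:

```python
import numpy as np

def rgb_to_nv21(rgb: np.ndarray) -> bytes:
    """Convert an HxWx3 uint8 RGB array to NV21 bytes (Y plane, then interleaved V/U)."""
    h, w, _ = rgb.shape
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)

    # BT.601 full-range luma/chroma (an assumption; other converters use studio range)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0

    y_plane = np.clip(y, 0, 255).astype(np.uint8)

    # Chroma is subsampled 2x2: one U/V sample per 2x2 block of pixels
    u_sub = np.clip(u[::2, ::2], 0, 255).astype(np.uint8)
    v_sub = np.clip(v[::2, ::2], 0, 255).astype(np.uint8)

    # NV21 stores the full Y plane followed by interleaved V,U byte pairs
    vu = np.empty((h // 2, w // 2, 2), dtype=np.uint8)
    vu[..., 0] = v_sub
    vu[..., 1] = u_sub
    return y_plane.tobytes() + vu.tobytes()
```

For a 32x32 image this yields 32*32 + 2*(16*16) = 1536 bytes, i.e. width * height * 3 / 2, which is the buffer length NV21 consumers expect.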
The converter code, which I found on the web, looks pretty good to me, respecting the NV21 layout. And it still doesn't work.
Following my previous message, things are now working as expected, so I am closing this issue.
Great! Actually, that is specified in the README: you need to use nv21 when using the camera plugin.
I made an interesting experiment in which I build an `InputImage` using `InputImage.fromBytes()`, or get one through `InputImage.fromFile()`, and verify that the image seen by ML Kit is exactly identical to the generated image. I do this by calling `ImageLabeler.processImage()` with a pass-through TF Lite model (see below for the model) and inspecting the `List<ImageLabel>` values (the output of the TF Lite model).

This test is interesting in that it allows developers to verify that a generated `InputImage` instance is valid. In other words, it makes it easy to study/debug Image-to-InputImage conversion routines.

My question is: how do I successfully create an `InputImage` from an imglib `Image`? I have been trying all bits of code found on the web for weeks, to no avail.
This test is Android only for now.
Steps to reproduce the behavior:
Add model metadata using "passthrough" parameters: 0-mean, 1-std, num_classes = 3072 (= 32x32x3, flattened). To add metadata, I use metadata_writer_for_image_classifier.py, provided by the TensorFlow team.
I want all logits to be passed through, so I set maxCount to 3072 (= 32x32x3, flattened) or any higher value (e.g. 1000000).
Similarly, `confidenceThreshold: 0` is meant to include all values. Run the test, break in the `then` handler in the code above, and inspect the values of `imageLabels`.

Expected behavior
Whether `readFromDisk` is true or false, I should get the same results. More specifically, I should get confidence `255.0` for the labels mapped to the red values, and 2048 labels (mapped to the green and blue values) with confidence = `0.0`.
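These expected numbers can be sanity-checked without running the app at all: with a pass-through model, the 3072 logits are just the flattened channel values of the input image, so a solid-red 32x32 image (which the expected values imply is being used) yields 1024 logits of 255 and 2048 logits of 0. A NumPy sketch of that arithmetic, mirroring what the pass-through model does:

```python
import numpy as np

# A pass-through model simply flattens the 32x32x3 input, so the "logits"
# returned by ImageLabeler are the raw channel values of the image.
def passthrough_logits(rgb: np.ndarray) -> np.ndarray:
    return rgb.reshape(-1).astype(np.float32)

# Solid red test image (an assumption; the expected results imply one)
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[..., 0] = 255

logits = passthrough_logits(img)
print(sorted(set(logits.tolist())))   # -> [0.0, 255.0]
print(int((logits == 255.0).sum()))   # -> 1024 (red values)
print(int((logits == 0.0).sum()))     # -> 2048 (green and blue values)
```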
Actual behavior

When `readFromDisk` is true, I get the expected results. When `readFromDisk` is false, I get `[242.0, 138.0, 0.0]`. That's wrong. This means that the `InputImage` I created is not what it should be.

Additional testing
I rewrote the convertImage function so that an `InputImage` with `yuv420` encoding is used: the results are also wrong. Logits (label confidence values) are in this case `[239.0, 198.0, 61.0, 15.0, 9.0, 0.0]`. Again, they should be `[255.0]` only (as in the `InputImage.fromFilePath` case, which shows that reading an image from disk works fine).
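A plausible reason the `yuv420` attempt also fails is the byte layout: planar yuv420 (I420) stores a full U plane followed by a full V plane, while NV21 stores a single interleaved V/U plane after Y. Feeding bytes in one layout while declaring the other scrambles the chroma. A tiny sketch of the two layouts for the same 4x4 frame (illustrative values only):

```python
import numpy as np

# Same Y/U/V samples, two different buffer layouts.
h, w = 4, 4
y = np.arange(h * w, dtype=np.uint8)                  # dummy luma
u = np.full((h // 2) * (w // 2), 100, dtype=np.uint8)  # dummy chroma U
v = np.full((h // 2) * (w // 2), 200, dtype=np.uint8)  # dummy chroma V

# I420 (planar yuv420): Y plane, then U plane, then V plane
i420 = np.concatenate([y, u, v])

# NV21: Y plane, then interleaved V,U byte pairs
nv21 = np.concatenate([y, np.ravel(np.column_stack([v, u]))])

print(i420[h * w:].tolist())  # -> [100, 100, 100, 100, 200, 200, 200, 200]
print(nv21[h * w:].tolist())  # -> [200, 100, 200, 100, 200, 100, 200, 100]
```

Reading the chroma tails side by side makes it clear why a decoder told the wrong format produces garbage colors, and hence garbage logits from the pass-through model.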