[mediapipe_task_genai] On Android llm engine failed to validate graph for gemma-2b-it-gpu-int8.bin model #56
Comments
same issue here
facing the same issue
same issue
idk why, but I guess this issue is with the Flutter version only; I tried it in native Android and it worked fine, and I'm using a Method Channel to implement it for now.
Does the CPU version work? Currently, it is known that the CPU version increases the number of devices on which Gemma can successfully run.
CPU, GPU, both quantised models tried; same issue.
Thanks for the report. I've shared this with the MediaPipe team, but both engineers who work on that half of the stack are currently OOO, so we'll all have to be patient for a little bit until they're able to unravel why the MediaPipe SDK is crashing.
This is potentially similar to google-ai-edge/mediapipe-samples#335. |
Same issue testing on 3 different Android devices (Samsung S8, S10, A25 5G). All crash in the example app, no matter which model gets loaded.
The model works in native Android though; check this out.
That very likely means there are issues with the version of the SDK compiled specifically for Flutter. I have filed this as an internal bug on the MediaPipe project. For Googlers: b/349870091
Hey, any update on this @craiglabenz? Do you have a time estimate?
Sadly, I don't have any updates yet. The relevant MediaPipe engineers are still on leave.
Got the following error log on a Samsung A31 when launching the mediapipe_task_genai example with GEMMA_4B_CPU_URI.
Note: The same model works fine when used with com.google.mediapipe.tasks.genai.llminference.LlmInference via an Android Method Channel.
Hey @jawad111, would you be okay sharing a code sample/repo of your implementation using a platform channel and native Android? I tried on my side but couldn't get it to work :/
Hi @tempo-riz. First, I would like to say that this repo is the better and more optimized solution. While the issues are being resolved, you can follow this official video by the Flutter team for a detailed reference: Observable Flutter #43: On-device LLMs with Gemma.
Hey @craiglabenz, any update on this?
We're attempting to finalize contracts with a great developer to finish work on this library, which should ultimately get and keep it in great shape. Unfortunately, my employer can be slow with paperwork. Still, I'm cautiously optimistic that renewed engineering work on this library will begin in Q1 2025 🤞
@craiglabenz is it possible to get an internship for this project? I am quite interested in it.
Faced the same issue. :(
Same here.
Same here too.
+1
Sorry for the late response. Please find below a working example of using com.google.mediapipe.tasks.genai.llminference.LlmInference with an Android Method Channel. This should work as a temporary workaround.

Example Project Link: https://github.com/jawad111/mediapipe_llminference_example.git

Tested on Devices:
Considerations for project setup
Model Setup Instructions

Project Overview
- Widget Layer
- Service Layer: LlmService
- Native Code: Kotlin

Code Snippets for Each Step
1. Widget Layer
2. Service Layer (LlmService)
3. Native Code (Kotlin)
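For readers who want the shape of the native side without opening the linked repo, here is a minimal sketch of the Kotlin layer of this workaround. The channel name, method name, and model path below are assumptions for illustration, not taken from the example project; the LlmInference API itself is MediaPipe's published com.google.mediapipe.tasks.genai.llminference interface.

```kotlin
package com.example.llm_example

import android.os.Handler
import android.os.Looper
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel
import java.util.concurrent.Executors

class MainActivity : FlutterActivity() {
    // Channel and method names are hypothetical; they must match the Dart side.
    private val channelName = "llm_inference"
    private val executor = Executors.newSingleThreadExecutor()
    private val mainHandler = Handler(Looper.getMainLooper())
    private var llm: LlmInference? = null

    override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        MethodChannel(flutterEngine.dartExecutor.binaryMessenger, channelName)
            .setMethodCallHandler { call, result ->
                when (call.method) {
                    "generateResponse" -> {
                        val prompt = call.argument<String>("prompt") ?: ""
                        // Run inference off the UI thread; reply on the main thread.
                        executor.execute {
                            try {
                                val engine = llm ?: LlmInference.createFromOptions(
                                    this,
                                    LlmInference.LlmInferenceOptions.builder()
                                        // Assumed path; match wherever you pushed the model.
                                        .setModelPath("/data/local/tmp/gemma-2b-it-cpu-int8.bin")
                                        .setMaxTokens(512)
                                        .build()
                                ).also { llm = it }
                                val response = engine.generateResponse(prompt)
                                mainHandler.post { result.success(response) }
                            } catch (e: Exception) {
                                mainHandler.post { result.error("LLM_ERROR", e.message, null) }
                            }
                        }
                    }
                    else -> result.notImplemented()
                }
            }
    }
}
```

The Dart service layer then only needs a MethodChannel with the same name, invoking "generateResponse" with a `prompt` argument and awaiting the returned string.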
Bug report
Describe the bug
LLM Engine failed in ValidatedGraphConfig Initialization step.
Steps to reproduce
Steps to reproduce the behavior:
1. adb push gemma-2b-it-gpu-int8.bin /data/local/tmp
2. cd into flutter-mediapipe/packages/mediapipe-task-genai/example and run flutter run -d 9TAUH6MRNZJ7KN6H --dart-define=GEMMA_8B_GPU_URI=/data/local/tmp/gemma-2b-it-gpu-int8.bin, in which 9TAUH6MRNZJ7KN6H is my Android ID.
3. Select Gemma 8b GPU, input Hello, world!, and click the send button.

Expected behavior
Run one or two conversations successfully.
Additional context
Device Info:
I downloaded the model twice and tested twice, so the model file should be fine.
Flutter doctor
Run flutter doctor and paste the output below:
Flutter dependencies
Run flutter pub deps -- --style=compact and paste the output below: