LLM video understanding #10720
Comments
I found two similar discussions that might be relevant to your question about LLM video understanding in Dify:
Regarding your specific questions about submitting a work order application for qwen mode and about which model was used to test qwen's video understanding, these discussions may not address them directly. You may need to check the official documentation or reach out to the maintainers for detailed guidance on submitting work orders and on the specific models used.
The doc is here.
What if the qwen-vl model is deployed locally via Xinference?
Not supported yet. It seems Xinference uses OpenAI's API, and OpenAI does not support video yet.
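For context: the OpenAI-compatible Chat Completions format only defines `text` and `image_url` content parts, which is why a video can't be passed through as-is. A common workaround is to sample frames from the video and send them as images. Below is a minimal sketch of that approach; the local Xinference endpoint and model name are assumptions for illustration, not values from this thread.

```python
# Minimal sketch: decode a video into frames and send them as image_url
# parts to an OpenAI-compatible endpoint. The endpoint URL and model name
# are assumptions (e.g. a local Xinference server serving a qwen-vl model).
import base64

import cv2  # pip install opencv-python
from openai import OpenAI


def sample_frames(path: str, every_n: int = 30) -> list[str]:
    """Return every n-th frame of the video as a base64-encoded JPEG."""
    frames, cap, i = [], cv2.VideoCapture(path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buf).decode())
        i += 1
    cap.release()
    return frames


client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

content = [{"type": "text", "text": "Describe what happens in this video."}]
for f in sample_frames("clip.mp4"):
    content.append(
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
    )

response = client.chat.completions.create(
    model="qwen-vl-chat",  # assumed name; use whatever model Xinference serves
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```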
@guchenhe @crazywoola @GarfieldDai ?
@hjlarry Thank you for pointing out the pull request. After reviewing it, I found that its implementation for handling video files in the LLM node is more appropriate than my own. I'm looking forward to seeing that pull request merged.
Oh, I just noticed that the PR has already been approved. In that case, I will submit a separate pull request for Gemini and VertexAI support.
Yes, I also noticed that Gemini supports video according to their docs, but when I tried it in an agent app it didn't work for me. Looking forward to your PR.
Are there currently any solutions for understanding videos through locally running models?
Looking forward to this option for Gemini as well. Currently the file seems not to be passed if it's a video (mp4); an empty file array is sent to the LLM endpoint. For images and other documents it seems to work fine.
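For reference, the Gemini API itself does accept video through its File API, so the empty file array looks like a gap on the Dify side rather than in the API. A minimal sketch with the `google-generativeai` SDK; the file path and model name are placeholders:

```python
# Minimal sketch: upload an mp4 via the Gemini File API, wait for
# server-side processing, then ask the model about it.
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the video; large files are processed asynchronously on the server.
video = genai.upload_file(path="clip.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)
if video.state.name == "FAILED":
    raise RuntimeError("video processing failed")

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
response = model.generate_content([video, "Summarize this video."])
print(response.text)
```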
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
Version 0.11.1 adds support for LLM video understanding. According to #9828, qwen mode requires users to submit a work order application and only supports URL-send mode. Please tell me how to do this and where to submit a work order.
In addition, which model was used to test qwen's video understanding?
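For anyone granted access later, here is a minimal sketch of what URL-send video input looks like with the DashScope SDK. It assumes the account has video access enabled (the work order mentioned above); the model name and video URL are placeholders, not confirmed values.

```python
# Minimal sketch of qwen's URL-based video input via the DashScope SDK.
# Assumes video access has been enabled for the account; "qwen-vl-max"
# and the video URL are placeholders.
from http import HTTPStatus

import dashscope

dashscope.api_key = "YOUR_DASHSCOPE_API_KEY"

messages = [
    {
        "role": "user",
        "content": [
            # The video must be a publicly reachable URL (URL-send mode),
            # not base64-encoded data.
            {"video": "https://example.com/clip.mp4"},
            {"text": "Describe what happens in this video."},
        ],
    }
]

response = dashscope.MultiModalConversation.call(
    model="qwen-vl-max",
    messages=messages,
)
if response.status_code == HTTPStatus.OK:
    # content is a list of parts, e.g. [{"text": "..."}]
    print(response.output.choices[0].message.content)
else:
    print(response.code, response.message)
```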
2. Additional context or comments
No response
3. Can you help us with this feature?