
LLM video understanding #10720

Closed
4 of 5 tasks
Dongnc1017 opened this issue Nov 15, 2024 · 11 comments · Fixed by #11497
Labels
💪 enhancement New feature or request

Comments

@Dongnc1017

Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

Version 0.11.1 adds support for LLM video understanding. According to #9828, the qwen model requires users to submit a work-order application and only supports URL-send mode. Could you tell me how to do this and where to submit the work order?
Also, which qwen model was used to test video understanding?

2. Additional context or comments

No response

3. Can you help us with this feature?

  • I am interested in contributing to this feature.

dosubot bot commented Nov 15, 2024

I found two similar discussions that might be relevant to your question about LLM video understanding in Dify:

  1. How to use/process video files in workflow? - This discussion is still open and might provide insights into processing video files within Dify workflows.

  2. Not support for mp4? - This is another open discussion that addresses issues related to video file support, specifically MP4 format.

Regarding your specific questions about submitting a work order application for qwen mode and the model used for testing qwen with video understanding, these discussions might not directly address them. You might need to check the official documentation or reach out to the maintainers for detailed guidance on submitting work orders and the specific models used.


@dosubot dosubot bot added the 💪 enhancement New feature or request label Nov 15, 2024
@hjlarry
Contributor

hjlarry commented Nov 15, 2024

the doc is here

@Copilotes

the doc is here

What if the qwen-vl model is deployed locally via Xinference?

@hjlarry
Copy link
Contributor

hjlarry commented Nov 16, 2024

What if the qwen-vl model is deployed locally via Xinference?

Not supported yet. Xinference seems to use OpenAI's API, and OpenAI doesn't support video yet.

@co3k

co3k commented Nov 18, 2024

@guchenhe @crazywoola @GarfieldDai ?
I have a patch to handle video files on the LLM node and process them as multi-modal inputs on the Gemini model. I'd like to submit this patch as a pull request linked to this issue. Could you please let me know if it's okay to open the pull request in this way?
Additionally, I am planning to create a similar patch for the Gemini model on Vertex AI. Would it be alright to submit that as a pull request in the same manner?
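For context on what such a multi-modal input looks like: the Gemini generateContent API accepts video as an `inline_data` part (base64 payload plus MIME type) alongside the text prompt. A minimal sketch of building such a request body — the function name and defaults here are illustrative, not code from the patch or from Dify:

```python
import base64


def build_gemini_video_request(prompt: str, video_bytes: bytes,
                               mime_type: str = "video/mp4") -> dict:
    """Build a generateContent-style request body with an inline video part.

    Follows the Gemini REST API shape (contents -> parts -> inline_data).
    This is a hypothetical helper for illustration only.
    """
    return {
        "contents": [{
            "role": "user",
            "parts": [
                # Video goes in as base64-encoded inline_data with its MIME type.
                {"inline_data": {
                    "mime_type": mime_type,
                    "data": base64.b64encode(video_bytes).decode("ascii"),
                }},
                # The text prompt is a sibling part in the same user turn.
                {"text": prompt},
            ],
        }]
    }
```

Large videos would instead go through the Files API and a `file_data` part, but the inline form is the simplest way to show the request structure.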

@hjlarry
Contributor

hjlarry commented Nov 19, 2024

@co3k It seems PR #10679 will process files in the LLM node.

@co3k

co3k commented Nov 19, 2024

@hjlarry Thank you for pointing out the pull request. After reviewing it, I found that the implementation for handling video files in the LLM node is more appropriate than my own. I'm looking forward to seeing that pull request merged.
However, regarding the support for Gemini and VertexAI, I believe the patch I have on hand could still be useful. I'd like to discuss this further through a pull request.

@co3k

co3k commented Nov 19, 2024

Oh, I just noticed that the PR has already been approved. In that case, I will submit a separate pull request for the Gemini and VertexAI support.

@hjlarry
Contributor

hjlarry commented Nov 19, 2024

Oh, I just noticed that the PR has already been approved. In that case, I will submit a separate pull request for the Gemini and VertexAI support.

Yes, I also noticed that Gemini supports video in its docs, but when I tried it in an agent app it didn't work for me. Looking forward to your PR.

@Dongnc1017
Author

Are there currently any solutions for understanding videos through locally running models?

@pvoo
Contributor

pvoo commented Dec 3, 2024

Looking forward to this option for Gemini as well. Currently the file doesn't seem to be passed if it's a video (mp4): an empty file array is sent to the LLM endpoint. Images and other documents seem to work fine.
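That symptom is consistent with a MIME-type whitelist in the file-forwarding step: anything not matching an allowed prefix is silently dropped before the request is built. A hypothetical sketch of that behavior — these names and the whitelist contents are invented for illustration, not actual Dify code:

```python
def partition_files(files, allowed_prefixes=("image/", "application/pdf")):
    """Split file dicts into (forwarded, dropped) by MIME-type prefix.

    Hypothetical illustration: if the whitelist covers images and
    documents but not video/*, an mp4 is dropped without error and the
    LLM endpoint receives an empty file array.
    """
    # str.startswith accepts a tuple of prefixes, so one pass per list suffices.
    forwarded = [f for f in files if f["mime_type"].startswith(allowed_prefixes)]
    dropped = [f for f in files if not f["mime_type"].startswith(allowed_prefixes)]
    return forwarded, dropped
```

With a `video/mp4` entry in the input, `forwarded` comes back empty, matching the empty array observed at the endpoint; adding `"video/"` to the whitelist (plus model-side support) would be the corresponding fix.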
