[Model] Initialize deepseek-vl support #5817
base: main
Conversation
Contributed by enflame-tech
Thanks for the contribution! I have a few initial comments. Apart from that, can you add a test case (similar to test_llava.py) to test the correctness of the model in CI?
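For reference, a minimal sketch of what such a test might look like, assuming the `hf_runner`/`vllm_runner` fixtures from vLLM's test conftest and the greedy-output comparison used in test_llava.py; the model list and prompt handling here are illustrative assumptions, not the final test:

```python
# Hypothetical tests/models/test_deepseek_vl.py, modeled on test_llava.py.
# The hf_runner/vllm_runner/example_prompts fixtures come from vLLM's
# tests/conftest.py; image handling for deepseek-vl is omitted here and
# would need to follow the model's actual input format.
import pytest

MODELS = [
    "deepseek-ai/deepseek-vl-1.3b-chat",
    "deepseek-ai/deepseek-vl-7b-chat",
]


@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("dtype", ["half"])
@pytest.mark.parametrize("max_tokens", [128])
def test_models(hf_runner, vllm_runner, example_prompts,
                model: str, dtype: str, max_tokens: int) -> None:
    # Generate greedily with the reference HuggingFace implementation.
    hf_model = hf_runner(model, dtype=dtype)
    hf_outputs = hf_model.generate_greedy(example_prompts, max_tokens)
    del hf_model

    # Generate greedily with the vLLM implementation under test.
    vllm_model = vllm_runner(model, dtype=dtype)
    vllm_outputs = vllm_model.generate_greedy(example_prompts, max_tokens)
    del vllm_model

    # Under greedy sampling the two implementations should agree.
    for (hf_ids, hf_str), (vllm_ids, vllm_str) in zip(hf_outputs,
                                                      vllm_outputs):
        assert hf_str == vllm_str, f"hf={hf_str!r} vs vllm={vllm_str!r}"
```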
Now supports deepseek-ai/deepseek-vl-7b-chat and deepseek-ai/deepseek-vl-1.3b-chat.
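For anyone who wants to try it, a minimal offline-inference sketch along the lines of examples/deepseek_vl_example.py; the `<image_placeholder>` token follows deepseek-vl's chat convention, but the dict-based `multi_modal_data` form shown here is an assumption (the multimodal input API has changed across vLLM versions, so see the example file in this PR for the authoritative version):

```python
# Minimal offline-inference sketch; API details are assumptions.
from PIL import Image

from vllm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/deepseek-vl-7b-chat", trust_remote_code=True)

# deepseek-vl marks the image position with an <image_placeholder> token.
prompt = ("You are a helpful language and vision assistant.\n\n"
          "User: <image_placeholder>Describe this image.\n\nAssistant:")
image = Image.open("example.jpg")  # hypothetical local image

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```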
This model depends on timm>=0.9.16, which in turn depends on torch; that conflicts with the dependencies of other third-party components and causes the pipeline to fail. Running this model therefore requires installing timm separately, and I'm not sure whether that is appropriate. It also depends on many timm modules, which are difficult to remove.
Can you implement the individual timm modules inside vLLM? (Where possible, you should use vLLM-specific layers to improve performance anyway.)
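To illustrate the kind of port being suggested, here is a sketch only, assuming the vision tower uses a standard timm-style MLP block; the class name and activation choice are illustrative, and a real port would mirror the exact timm module the model uses:

```python
# Illustrative replacement of a timm Mlp block with vLLM's tensor-parallel
# linear layers. Names here are hypothetical, not from this PR.
import torch
from torch import nn

from vllm.model_executor.layers.linear import (ColumnParallelLinear,
                                               RowParallelLinear)


class VisionMLP(nn.Module):
    """Tensor-parallel stand-in for a timm-style MLP block."""

    def __init__(self, hidden_size: int, intermediate_size: int) -> None:
        super().__init__()
        self.fc1 = ColumnParallelLinear(hidden_size, intermediate_size)
        self.fc2 = RowParallelLinear(intermediate_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # vLLM's parallel linear layers return (output, output_bias).
        x, _ = self.fc1(x)
        x = self.act(x)
        x, _ = self.fc2(x)
        return x
```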
OK, I will try to do this; I think it will take some time.
You can make use of our implementation of
This test case (tests/models/test_deepseek_vl.py) depends on the project https://github.com/deepseek-ai/DeepSeek-VL, and pip installation of it seems to fail when building the Docker image. I think we could leave this test case out; examples/deepseek_vl_example.py runs successfully.
In this case it won't function for users of vLLM either, since they can't install it (so you should still keep the tests). Can you figure out which dependency is causing the issue?
You can manually register the model with HuggingFace inside the test case.
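For instance, a sketch using transformers' Auto-class registration; the `DeepSeekVL*` names are placeholders for whatever config/model classes the test vendors in:

```python
# Register a custom model type with the HuggingFace Auto classes so the
# test can load it without installing the DeepSeek-VL package. The
# DeepSeekVLConfig/DeepSeekVLForCausalLM names are hypothetical.
from transformers import AutoConfig, AutoModelForCausalLM, PretrainedConfig


class DeepSeekVLConfig(PretrainedConfig):
    model_type = "deepseek_vl"


AutoConfig.register("deepseek_vl", DeepSeekVLConfig)
# Once the vendored model class is available in the test, register it too:
# AutoModelForCausalLM.register(DeepSeekVLConfig, DeepSeekVLForCausalLM)
```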
OK, I'll try it.
buildkite/fastcheck/pr/tensorizer-metrics-tracing-test — Failed (exit status 1)
It's unrelated to this PR.
…ech/vllm into deepseek-vl-7b-chat
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
Will this PR get merged?
Or rather, where could I help? What is missing or needs work? Only the
Sorry for forgetting about this! I think we need to fix the merge conflicts and update the tests (you might need to fork this branch and create a new PR). Afterwards we can work on removing redundant code in the model file.
No problem at all! I am sorry to have commented here prematurely, as Qwen2 VL architecturally fits my use case much better.
Tested on NVIDIA L40S
FIX #3356
FIX #4982