
[VLM] Merged multi-modal processor and V1 support for Qwen-VL #12504

Merged
7 commits merged into vllm-project:main from the qwenvl-v1 branch on Jan 28, 2025

Conversation

DarkLight1337 (Member) commented on Jan 28, 2025

(Not to be confused with Qwen2-VL, which is already supported, or Qwen2.5-VL, which is not yet supported.)


I have tested V1 locally and the model output is essentially the same, though I'm not entirely sure why an extra EOS token is emitted.

V0 output:

# python examples/offline_inference/vision_language.py -m qwen_vl
The Tokyo Skytree tower is seen through cherry blossoms.
The Tokyo Skytree in the background with cherry blossoms in the foreground
The Tokyo Skytree is a must-see during cherry blossom season in Tokyo.
The Tokyo Skytree is seen through cherry blossoms in Tokyo, Japan.

V1 output (used TP=4 to avoid OOM locally):

# VLLM_USE_V1=1 python examples/offline_inference/vision_language.py -m qwen_vl
The Tokyo Skytree tower is seen through cherry blossoms.<|endoftext|>
The Tokyo Skytree is a must-see during cherry blossom season in Tokyo<|endoftext|>
The Tokyo Skytree is a must-see in Tokyo in spring<|endoftext|>
The Tokyo Skytree is seen through cherry blossoms in Tokyo, Japan.<|endoftext|>
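For readers who want to reproduce this outside the example script, the snippet below is a minimal, illustrative sketch of roughly what `examples/offline_inference/vision_language.py` does for Qwen-VL via vLLM's offline API. The model name, image path, and prompt template are assumptions on my part; check the example script for the exact values it uses.

```python
# Illustrative sketch only (not part of this PR's diff).
import os
# os.environ["VLLM_USE_V1"] = "1"  # set before importing vllm to exercise the V1 engine

from PIL import Image
from vllm import LLM, SamplingParams

# Qwen-VL ships custom modeling code on the HF Hub, hence trust_remote_code.
llm = LLM(model="Qwen/Qwen-VL", trust_remote_code=True)

image = Image.open("cherry_blossom.jpg")  # placeholder image path
# Assumed Qwen-VL prompt style with an inline image placeholder tag.
prompt = "Picture 1: <img></img>\nWhat is the content of this image?"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```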

I'll work on updating the model development guide with an example of a custom HF processor in another PR.
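(For context: a "custom HF processor" here refers to a Hugging Face-style processor defined on the vLLM side for models, like Qwen-VL, whose HF repos do not ship one. The sketch below is purely illustrative of that general shape; the class name and structure are hypothetical and are not taken from this PR or from vLLM's internals.)

```python
# Hypothetical, minimal HF-style processor: combines a tokenizer and an image
# preprocessor behind the usual processor __call__ interface. Illustrative only.
from transformers import BatchFeature

class QwenVLLikeProcessor:
    def __init__(self, image_processor, tokenizer):
        self.image_processor = image_processor
        self.tokenizer = tokenizer

    def __call__(self, text=None, images=None, return_tensors="pt", **kwargs):
        data = {}
        if text is not None:
            # Tokenize the prompt text (which contains the image placeholder tags).
            data.update(self.tokenizer(text, return_tensors=return_tensors, **kwargs))
        if images is not None:
            # Preprocess images into pixel tensors (e.g. "pixel_values").
            data.update(self.image_processor(images, return_tensors=return_tensors))
        return BatchFeature(data=data)
```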


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small but essential subset of tests to quickly catch errors. You can run the remaining CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the documentation label (Improvements or additions to documentation) on Jan 28, 2025
@Isotr0py Isotr0py enabled auto-merge (squash) January 28, 2025 09:48
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Jan 28, 2025
@Isotr0py Isotr0py merged commit 8f58a51 into vllm-project:main Jan 28, 2025
50 of 51 checks passed
@DarkLight1337 DarkLight1337 deleted the qwenvl-v1 branch January 28, 2025 16:25
rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025
NickLucche pushed a commit to NickLucche/vllm that referenced this pull request Feb 7, 2025