[Bugfix] fix qwen2.5-vl-72b shape ERROR during the _prepare_inputs phase under high concurrency. #4553
base: v0.11.0-dev
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request aims to fix a shape error for qwen2.5-vl-72b under high concurrency by updating how multimodal embeddings are prepared. The changes involve modifying get_input_embeddings to accept an is_multimodal mask and updating the NPUModelRunner to construct and pass this mask. This aligns the codebase with newer vLLM practices for handling multimodal inputs.
My review identifies a critical issue where a necessary argument is missing in the call to get_input_embeddings, which likely leaves the original bug unfixed. The proposed change completes the fix by correctly enabling the handling of out-of-vocabulary multimodal tokens.
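To illustrate the mask-based approach the review describes, here is a minimal sketch of how a boolean `is_multimodal` mask can merge multimodal embeddings into the text embeddings. This is not vLLM's actual implementation; the helper name, tensor shapes, and the boolean-index scatter strategy are assumptions for illustration only:

```python
import torch

def merge_multimodal_embeddings(text_embeds, mm_embeds, is_multimodal):
    # Hypothetical helper: scatter the multimodal embedding rows into
    # the token positions flagged True by the boolean mask. Requires
    # mm_embeds to have exactly is_multimodal.sum() rows.
    merged = text_embeds.clone()
    merged[is_multimodal] = mm_embeds
    return merged

text = torch.zeros(5, 4)                 # 5 tokens, hidden size 4
mm = torch.ones(2, 4)                    # 2 multimodal placeholder embeddings
mask = torch.tensor([False, True, False, True, False])
out = merge_multimodal_embeddings(text, mm, mask)
print(out[1].tolist())  # rows 1 and 3 come from mm_embeds
```

Under high concurrency the mask and `mm_embeds` must agree in length per batch, which is why the runner has to construct the mask alongside the scheduled multimodal inputs.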
```python
inputs_embeds = self.model.get_input_embeddings(
    input_ids,
    multimodal_embeddings=mm_embeds,
    is_multimodal=is_mm_embed,
)
```
The call to get_input_embeddings is missing the handle_oov_mm_token argument. Without it, the default value False will be used, and the logic to handle out-of-vocabulary (OOV) multimodal tokens will not be triggered. This can lead to errors when the input contains multimodal tokens with IDs outside the language model's vocabulary range, which is likely the root cause of the issue this PR aims to fix. You should pass handle_oov_mm_token=self.supports_mm_inputs to correctly handle these cases.
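The OOV concern can be sketched as follows. This is a hypothetical illustration, not vLLM's actual `get_input_embeddings`: when multimodal placeholder token ids exceed the text vocabulary size, an unguarded `nn.Embedding` lookup indexes out of range, so the sketch masks those ids to a safe value before the lookup (their rows would later be overwritten by the multimodal embeddings anyway):

```python
import torch

def embed_with_oov_handling(input_ids, embedding, is_multimodal,
                            handle_oov_mm_token):
    # Hypothetical helper: if multimodal placeholder ids can be outside
    # the text vocab, replace them with id 0 before the lookup so
    # nn.Embedding does not raise an out-of-range IndexError.
    if handle_oov_mm_token:
        safe_ids = torch.where(is_multimodal,
                               torch.zeros_like(input_ids),
                               input_ids)
    else:
        safe_ids = input_ids
    return embedding(safe_ids)

vocab = torch.nn.Embedding(10, 4)         # text vocab of 10 tokens
ids = torch.tensor([1, 12, 3])            # 12 is an OOV mm placeholder
mask = torch.tensor([False, True, False])
embeds = embed_with_oov_handling(ids, vocab, mask, handle_oov_mm_token=True)
print(embeds.shape)  # torch.Size([3, 4])
```

With `handle_oov_mm_token=False` the same call would raise an `IndexError` on id 12, which matches the failure mode the reviewer describes.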
Suggested change:

```python
inputs_embeds = self.model.get_input_embeddings(
    input_ids,
    multimodal_embeddings=mm_embeds,
    is_multimodal=is_mm_embed,
    handle_oov_mm_token=self.supports_mm_inputs,
)
```
…uts phase under high concurrency. Signed-off-by: Levi-JQ <yujinqi2@huawei.com>
(force-pushed from 0cf6a2c to 291c6d6)
What this PR does / why we need it?
qwen2.5-vl-72b reports a shape ERROR during the _prepare_inputs phase under high concurrency (issue #4430). This PR fixes it.
The related PR in the main branch: #3612
The related commit in vllm: https://github.com/vllm-project/vllm/blob/17c540a993af88204ad1b78345c8a865cf58ce44/vllm/model_executor/models/interfaces.py
(The _get_text_embeddings function has been refactored into interfaces.py in vLLM.)
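For context on the mask that the model runner would construct before calling `get_input_embeddings`, here is a hypothetical sketch. The helper name `build_is_mm_embed` and the `(start, length)` range format are assumptions for illustration, not vLLM's API; the point is that each flattened token position scheduled as a multimodal placeholder is flagged True:

```python
import torch

def build_is_mm_embed(num_tokens, mm_ranges):
    # Hypothetical helper: given (start, length) ranges of multimodal
    # placeholder tokens within the flattened batch, build the boolean
    # mask passed as is_multimodal / is_mm_embed.
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    for start, length in mm_ranges:
        mask[start:start + length] = True
    return mask

# 8 scheduled tokens; one image occupying positions 2..4
mask = build_is_mm_embed(8, [(2, 3)])
print(mask.tolist())
```

Keeping this mask consistent with the actual number of scheduled multimodal embeddings per step is exactly the kind of invariant that can break under high concurrency, producing the shape error described above.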