
Conversation

Contributor

@Levi-JQ Levi-JQ commented Nov 28, 2025

What this PR does / why we need it?

qwen2.5-vl-72b reports a shape ERROR during the _prepare_inputs phase under high concurrency (issue #4430).

This PR fixes it.

The related PR in the main branch: #3612

The related commit in vllm: https://github.com/vllm-project/vllm/blob/17c540a993af88204ad1b78345c8a865cf58ce44/vllm/model_executor/models/interfaces.py

(The _get_text_embeddings function has been refactored into interfaces.py in vllm.)
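For context, the refactored interface prepares input embeddings by merging the text embeddings with precomputed multimodal embeddings at positions flagged by a boolean is_multimodal mask. The sketch below is a minimal illustration of that merge, assuming hypothetical names (merge_multimodal_embeddings is not the actual vLLM function); the real implementation lives in vllm/model_executor/models/interfaces.py.

```python
import torch

def merge_multimodal_embeddings(text_embeds: torch.Tensor,
                                mm_embeds: torch.Tensor,
                                is_multimodal: torch.Tensor) -> torch.Tensor:
    """Scatter multimodal embedding rows into the text embedding sequence
    at positions where the boolean is_multimodal mask is True.

    The number of True entries in the mask must equal mm_embeds.shape[0];
    a mismatch here is exactly the kind of shape error this PR addresses.
    """
    merged = text_embeds.clone()
    merged[is_multimodal] = mm_embeds
    return merged

# Toy example: 5 token positions, hidden size 4; positions 1 and 3 hold
# image placeholder tokens whose embeddings come from the vision encoder.
text = torch.zeros(5, 4)
mm = torch.ones(2, 4)
mask = torch.tensor([False, True, False, True, False])
out = merge_multimodal_embeddings(text, mm, mask)
```

Under high concurrency, batches mixing multimodal and text-only requests make it easy for the mask and the stacked mm_embeds to fall out of sync, which is why the mask is now built explicitly in the model runner.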

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description when writing the commit message to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@Levi-JQ Levi-JQ changed the title [Bugfix] qwen2.5-vl-72b reports a shape ERROR during the _prepare_inputs phase under high concurrency. [Bugfix] fix qwen2.5-vl-72b shape ERROR during the _prepare_inputs phase under high concurrency. Nov 28, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a shape error for qwen2.5-vl-72b under high concurrency by updating how multimodal embeddings are prepared. The changes involve modifying get_input_embeddings to accept an is_multimodal mask and updating the NPUModelRunner to construct and pass this mask. This aligns the codebase with newer vLLM practices for handling multimodal inputs.

My review identifies a critical issue where a necessary argument is missing in the call to get_input_embeddings, which likely leaves the original bug unfixed. The proposed change completes the fix by correctly enabling the handling of out-of-vocabulary multimodal tokens.

Comment on lines +1396 to +1401
inputs_embeds = self.model.get_input_embeddings(
input_ids,
multimodal_embeddings=mm_embeds,
is_multimodal=is_mm_embed,
)

critical

The call to get_input_embeddings is missing the handle_oov_mm_token argument. Without it, the default value False will be used, and the logic to handle out-of-vocabulary (OOV) multimodal tokens will not be triggered. This can lead to errors when the input contains multimodal tokens with IDs outside the language model's vocabulary range, which is likely the root cause of the issue this PR aims to fix. You should pass handle_oov_mm_token=self.supports_mm_inputs to correctly handle these cases.

Suggested change
inputs_embeds = self.model.get_input_embeddings(
input_ids,
multimodal_embeddings=mm_embeds,
is_multimodal=is_mm_embed,
)
inputs_embeds = self.model.get_input_embeddings(
input_ids,
multimodal_embeddings=mm_embeds,
is_multimodal=is_mm_embed,
handle_oov_mm_token=self.supports_mm_inputs,
)
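The reasoning behind the suggested handle_oov_mm_token flag can be illustrated with a small sketch. Multimodal placeholder token IDs can lie outside the language model's vocabulary, so the embedding lookup must be guarded before the multimodal rows are scattered in. The names below (get_input_embeddings_with_oov, embed_tokens) are hypothetical stand-ins for the vLLM interface, not its actual signatures.

```python
import torch

def get_input_embeddings_with_oov(embed_tokens: torch.nn.Embedding,
                                  input_ids: torch.Tensor,
                                  mm_embeds: torch.Tensor,
                                  is_multimodal: torch.Tensor,
                                  handle_oov_mm_token: bool = False) -> torch.Tensor:
    if handle_oov_mm_token:
        # Replace multimodal placeholder IDs with a safe in-vocabulary index
        # so the text embedding lookup never sees an out-of-vocabulary ID.
        safe_ids = input_ids.masked_fill(is_multimodal, 0)
    else:
        # Without the flag, an OOV placeholder ID reaches the lookup and
        # raises an index error -- the failure mode the review points at.
        safe_ids = input_ids
    embeds = embed_tokens(safe_ids)
    # Overwrite the placeholder positions with the real multimodal embeddings.
    embeds[is_multimodal] = mm_embeds
    return embeds

vocab, hidden = 10, 4
embed = torch.nn.Embedding(vocab, hidden)
ids = torch.tensor([1, 12, 3])   # 12 is outside the vocabulary (a placeholder ID)
mask = torch.tensor([False, True, False])
mm = torch.ones(1, hidden)
with torch.no_grad():
    out = get_input_embeddings_with_oov(embed, ids, mm, mask,
                                        handle_oov_mm_token=True)
```

With handle_oov_mm_token=False, the same call would fail inside the embedding lookup, which matches the reviewer's claim that omitting the argument likely leaves the original bug unfixed.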

…uts phase under high concurrency.

Signed-off-by: Levi-JQ <yujinqi2@huawei.com>
@Levi-JQ Levi-JQ force-pushed the official-qwen2.5-carsh branch from 0cf6a2c to 291c6d6 on November 28, 2025 13:16