
Conversation

DarkLight1337 (Member) commented Sep 25, 2025

Purpose

  • Initialize multimodal processor only once
  • Remove all async methods because they aren't being used in V1

FIX #25671

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request streamlines the InputPreprocessor by caching the multimodal processor and removing unused async methods. The change to initialize the processor only once is a good optimization. However, I've identified a potential race condition in the lazy initialization logic that could occur in a multi-threaded environment. My review comment provides details on this issue and suggests a fix to ensure thread safety. The removal of async methods simplifies the codebase as intended.

@DarkLight1337 DarkLight1337 added this to the v0.11.0 milestone Sep 25, 2025
ProExpertProg (Collaborator) left a comment


Nice cleanup. I'm not familiar enough to give a proper approve but I am very happy we're not hacking around on the ModelConfig hash!

ywang96 (Member) left a comment


🚀

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) September 25, 2025 19:16
@DarkLight1337 DarkLight1337 merged commit 3d54bdc into vllm-project:main Sep 25, 2025
44 of 45 checks passed
@DarkLight1337 DarkLight1337 deleted the remove-input-preprocessor-async branch September 25, 2025 21:06
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>

Labels

ready ONLY add when PR is ready to merge/full CI is needed


Development

Successfully merging this pull request may close these issues.

[Performance] model_config.compute_hash is computed every time and introduce overhead in each new multi-modal req

3 participants