
Conversation

@DarkLight1337 (Member) commented Aug 7, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Currently, we have two separate options to control MM processor cache usage:

  • disable_mm_preprocessor_cache engine argument
  • VLLM_MM_INPUT_CACHE_GIB environment variable

This PR consolidates these into a single engine argument, mm_processor_cache_gb, to simplify the user setup (see the migration sketch after this list). Accordingly, the following options have been deprecated and will be removed in v0.13:

  • disable_mm_preprocessor_cache=True is equivalent to mm_processor_cache_gb=0
  • VLLM_MM_INPUT_CACHE_GIB=XXX is equivalent to mm_processor_cache_gb=XXX
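
For illustration, a minimal sketch of the migration from the user's side, assuming the standard vllm.LLM entrypoint; the model name is only an example:

```python
from vllm import LLM

# Before this PR (deprecated, to be removed in v0.13):
#   VLLM_MM_INPUT_CACHE_GIB=8 ...                        # env var for the size
#   LLM(model=..., disable_mm_preprocessor_cache=True)   # arg to disable it

# After this PR: one engine argument controls both the size and whether the
# cache is used at all.
llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",  # example multi-modal model
    mm_processor_cache_gb=8,           # cache size in GiB
)

# Disabling the cache is now just a size of zero.
llm_no_cache = LLM(
    model="llava-hf/llava-1.5-7b-hf",
    mm_processor_cache_gb=0,
)
```

On the command line the same setting is exposed as --mm-processor-cache-gb (see the argument-parser snippet further down in the review).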

Test Plan

Test Result

(Optional) Documentation Update

Updated docs accordingly.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

github-actions bot commented Aug 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation, frontend, llama, multi-modality, and v1 labels on Aug 7, 2025
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request effectively consolidates the multi-modal cache size configuration into a single mm_processor_cache_gb engine argument, deprecating the older disable_mm_preprocessor_cache and VLLM_MM_INPUT_CACHE_GIB settings. The changes are well-propagated through documentation, examples, and tests. I've identified one correctness issue in the implementation that should be addressed.

@DarkLight1337 DarkLight1337 moved this to In Progress in Multi-modality Core Aug 7, 2025
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Member


Would it be possible to deduplicate this config so it doesn't have to live in ModelConfig and MultiModalConfig in the same way that disable_mm_preprocessor_cache did?

Member Author

@DarkLight1337 Aug 7, 2025


Which config are you referring to here? Since disable-mm-preprocessor-cache will be removed anyway, there's no point in refactoring it.

Member


mm_processor_cache_gb is a field of both ModelConfig and MultiModalConfig.

Can it just be a field of MultiModalConfig, with every occurrence of model_config.mm_processor_cache_gb changed to model_config.multimodal_config.mm_processor_cache_gb?

Member Author

@DarkLight1337 Aug 7, 2025


We currently initialize MultiModalConfig inside ModelConfig, so ModelConfig needs to accept all of the arguments that are forwarded to MultiModalConfig.

Member


We should use https://docs.python.org/3/library/dataclasses.html#dataclasses.InitVar so that mm_processor_cache_gb can be accessed in __post_init__ without making it a field of ModelConfig (see the sketch below).
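
For context, a minimal sketch of the InitVar pattern being suggested here; the classes are simplified stand-ins for vLLM's actual configs, so the defaults and surrounding logic are illustrative only:

```python
from dataclasses import dataclass, InitVar
from typing import Optional


@dataclass
class MultiModalConfig:
    mm_processor_cache_gb: float = 4


@dataclass
class ModelConfig:
    # InitVar: accepted by the generated __init__ and forwarded to
    # __post_init__, but never stored as a field on ModelConfig itself.
    mm_processor_cache_gb: InitVar[Optional[float]] = None
    multimodal_config: Optional[MultiModalConfig] = None

    def __post_init__(self, mm_processor_cache_gb: Optional[float]) -> None:
        if self.multimodal_config is None:
            cache_gb = 4 if mm_processor_cache_gb is None else mm_processor_cache_gb
            self.multimodal_config = MultiModalConfig(
                mm_processor_cache_gb=cache_gb)


config = ModelConfig(mm_processor_cache_gb=8)
print(config.multimodal_config.mm_processor_cache_gb)  # 8
print("mm_processor_cache_gb" in vars(config))         # False: not an instance field
```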

Member Author

@DarkLight1337 Aug 7, 2025


Hmm, let's do this in a separate PR since it would also apply to other fields of MultiModalConfig.

Collaborator


good idea

"--mm-processor-cache-gb",
**multimodal_kwargs["mm_processor_cache_gb"])
multimodal_group.add_argument("--disable-mm-preprocessor-cache",
type=bool,
Member


This way it will default to None, and you can check self.disable_mm_preprocessor_cache is not None below, which also lets you warn when the user manually sets it to False (see the sketch after the suggestion).

Suggested change
- type=bool,
+ type=argparse.BooleanOptionalAction,
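
To illustrate the behavior being suggested, here is a standalone argparse sketch (not vLLM's actual parser plumbing; plain argparse takes BooleanOptionalAction via action= rather than type=):

```python
import argparse

parser = argparse.ArgumentParser()

# With type=bool, argparse just calls bool() on the string it receives, so
# "--disable-mm-preprocessor-cache False" still parses as True, and there is
# no way to tell "unset" apart from "explicitly set".
#
# BooleanOptionalAction (Python 3.9+) generates both --flag and --no-flag
# forms, and with default=None the attribute stays None when the user says
# nothing, so an explicit True or False can each trigger a deprecation warning.
parser.add_argument("--disable-mm-preprocessor-cache",
                    action=argparse.BooleanOptionalAction,
                    default=None)

args = parser.parse_args([])
assert args.disable_mm_preprocessor_cache is None    # user did not set it

args = parser.parse_args(["--no-disable-mm-preprocessor-cache"])
assert args.disable_mm_preprocessor_cache is False   # explicit False

args = parser.parse_args(["--disable-mm-preprocessor-cache"])
assert args.disable_mm_preprocessor_cache is True    # explicit True
```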

Collaborator


good idea

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@vllm-bot vllm-bot merged commit 139d155 into vllm-project:main Aug 7, 2025
36 of 45 checks passed
@github-project-automation github-project-automation bot moved this from In Progress to Done in Multi-modality Core Aug 7, 2025
@DarkLight1337 DarkLight1337 deleted the mm-cache-size-gb branch August 7, 2025 16:47
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Noam Gat <noamgat@gmail.com>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Paul Pak <paulpak58@gmail.com>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Diego-Castan <diego.castan@ibm.com>
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Xiao Yu <xiao.yu@amd.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
…#22441)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

Labels

  • documentation: Improvements or additions to documentation
  • frontend
  • llama: Related to Llama models
  • multi-modality: Related to multi-modality (#4194)
  • ready: ONLY add when PR is ready to merge / full CI is needed
  • v1

Projects

Status: Done


5 participants