Conversation

@BloodAxe (Contributor) commented Aug 15, 2025

Purpose

Enable use of Efficient Video Sampling (EVS) for redundant video tokens pruning:

llm = LLM(
    "nvidia/Cosmos-Reason1-7B",
    video_pruning_rate=0.75,  # Prune 75% of video tokens, reducing TTFT by ~4x
    limit_mm_per_prompt={"image": 10, "video": 10},
)

EVS reduces time to first token (TTFT) and inter-token latency (ITL) by pruning less-important vision tokens before they are passed to the LLM:

[Charts: time to first token (TTFT) and token throughput, before vs. after EVS]
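To illustrate the idea (this is a hedged sketch, not vLLM's actual implementation): an EVS-style pruner can score each video token by how much it differs from the same patch in the previous frame, keep the first frame in full, and drop the lowest-scoring fraction of the rest. The function name and data shapes below are hypothetical.

```python
import math

def evs_prune(video_tokens, pruning_rate):
    """Illustrative sketch of EVS-style pruning (hypothetical, not vLLM's code).

    video_tokens: list of frames, each a list of patch embeddings
    (lists of floats). Keeps roughly (1 - pruning_rate) of all tokens,
    preferring tokens whose embeddings change the most versus the same
    patch in the previous frame. The first frame is always kept in full.
    """
    num_frames = len(video_tokens)
    patches = len(video_tokens[0])
    total = num_frames * patches
    num_keep_total = total - int(pruning_rate * total)

    # Importance of a token = L2 distance to the same patch in the previous
    # frame; near-duplicate (temporally redundant) tokens score low.
    scored = []
    for t in range(1, num_frames):
        for p in range(patches):
            prev, cur = video_tokens[t - 1][p], video_tokens[t][p]
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, cur)))
            scored.append((dist, t, p))

    # First frame already accounts for `patches` kept tokens.
    num_keep_later = max(num_keep_total - patches, 0)
    scored.sort(reverse=True)
    kept = {(t, p) for _, t, p in scored[:num_keep_later]}

    # Preserve the original temporal order of the surviving tokens.
    out = list(video_tokens[0])
    for t in range(1, num_frames):
        for p in range(patches):
            if (t, p) in kept:
                out.append(video_tokens[t][p])
    return out
```

With `pruning_rate=0.75`, a clip of 4 frames x 4 patches (16 tokens) is reduced to 4 tokens, matching the 4x TTFT intuition in the snippet above.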

Test Plan

  • Added tests to verify inference with EVS on/off works as expected

Test Result

(Optional) Documentation Update




mergify bot commented Aug 15, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @BloodAxe.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@gemini-code-assist bot (Contributor) left a comment
Code Review

This pull request adds support for Efficient Video Sampling (EVS) by introducing a new interface for models to return custom embeddings and positions, which enables video token pruning. While the overall direction is good, there are several critical issues that need to be addressed. The new interface signature in interfaces.py is inconsistent with its usage in gpu_model_runner.py. More importantly, the logic for updating request states with the pruned positions appears to be incorrect, as it applies the same update to all requests in a batch. Additionally, there are several leftover debugging statements and commented-out code that should be cleaned up before merging.
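To illustrate the per-request concern the review raises, here is a minimal sketch; the names and data shapes are hypothetical stand-ins, not the actual structures in gpu_model_runner.py.

```python
def update_request_positions(requests, pruned_positions):
    """Hypothetical sketch of a per-request state update.

    `requests` is a list of per-request state dicts and `pruned_positions`
    a parallel list of pruned position lists (illustrative shapes only).
    """
    assert len(requests) == len(pruned_positions)
    for req, positions in zip(requests, pruned_positions):
        # Each request gets its own pruned positions. Applying one
        # request's update to every request in the batch (the pattern the
        # review flags as incorrect) would corrupt the other requests.
        req["positions"] = list(positions)
    return requests
```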

@github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests from your fastcheck build on the Buildkite UI (linked in the PR checks section) by unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added multi-modality Related to multi-modality (#4194) qwen Related to Qwen models labels Aug 19, 2025
@mergify mergify bot removed the needs-rebase label Aug 21, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
@BloodAxe BloodAxe force-pushed the feature/evs-support branch from 3ad3321 to 5e784b0 Compare August 26, 2025 09:06
BloodAxe and others added 3 commits August 26, 2025 12:07
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>

mergify bot commented Sep 25, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @BloodAxe.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Sep 25, 2025
@DarkLight1337 DarkLight1337 added this to the v0.11.0 milestone Sep 25, 2025
@DarkLight1337 DarkLight1337 moved this to In Progress in Multi-modality Core Sep 25, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>

# Conflicts:
#	vllm/v1/worker/gpu_model_runner.py
@mergify mergify bot removed the needs-rebase label Sep 25, 2025
BloodAxe and others added 6 commits September 25, 2025 21:00
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
@DarkLight1337 (Member) left a comment

Sorry for the delay, let's get this merged

@DarkLight1337 DarkLight1337 merged commit 392edee into vllm-project:main Sep 26, 2025
48 checks passed
@github-project-automation github-project-automation bot moved this from In Progress to Done in Multi-modality Core Sep 26, 2025
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025
Signed-off-by: Eugene Khvedchenia <ekhvedchenia@nvidia.com>
Signed-off-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Roger Wang <hey@rogerw.io>

Labels

multi-modality Related to multi-modality (#4194) qwen Related to Qwen models ready ONLY add when PR is ready to merge/full CI is needed v1

Projects

Status: Done

4 participants