
Conversation

@yyzxw (Contributor) commented Sep 29, 2025

Purpose

fix #25840

Test Plan

Test Result



@gemini-code-assist (bot) left a comment

Code Review

This pull request correctly addresses a `ValueError` that occurs when `_dummy_pooler_run` is called for a model that does not support any pooling tasks. The fix introduces a check at the beginning of the function to raise a `RuntimeError` with a clear message, which is a good improvement. I have a couple of suggestions to further refine the code by removing a redundant function call and a now-unnecessary assertion.
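
As a standalone illustration of that guard (a minimal sketch; the function name, argument, and message are assumptions, not the exact vLLM code):

```python
# Minimal sketch of the early guard described above; `check_pooling_supported`
# and its message are illustrative assumptions, not the actual vLLM code.
def check_pooling_supported(supported_pooling_tasks: list[str]) -> None:
    """Fail fast with a clear error instead of a ValueError deep in the dummy run."""
    if not supported_pooling_tasks:
        raise RuntimeError(
            "Model does not support any pooling tasks; "
            "it cannot be run with the pooling runner."
        )

check_pooling_supported(["encode"])  # OK: at least one task is supported
# check_pooling_supported([])        # would raise the RuntimeError above
```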

@yyzxw force-pushed the fix/model-not-support-pool-runner branch 3 times, most recently from 15cfb10 to 25198aa on September 29, 2025 05:44
@noooop (Collaborator) commented Sep 29, 2025

Sorry for overlooking `as_reward_model` in #20930.

Try adding `@default_pooling_type("ALL")` before L390:

```python
# Lazy import
from vllm.model_executor.layers.pooler import DispatchPooler, Pooler

class ModelForReward(_create_pooling_model_cls(cls)):
    def _init_pooler(self, vllm_config: "VllmConfig", prefix: str = ""):
```

And add a comment: the reward model uses ALL pooling by default.

Refer to:

```python
@default_pooling_type("ALL")
class Qwen2ForRewardModel(Qwen2RewardBaseModel):
    def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
```
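
Putting the two snippets together, the suggested change is roughly the following (a sketch of the adapters.py context, abbreviated; the decorator import and method body are omitted):

```python
# Sketch only: `cls` and `_create_pooling_model_cls` come from the enclosing
# as_reward_model implementation in adapters.py, which is abbreviated here.

# Lazy import
from vllm.model_executor.layers.pooler import DispatchPooler, Pooler

# The reward model uses ALL pooling by default
@default_pooling_type("ALL")
class ModelForReward(_create_pooling_model_cls(cls)):
    def _init_pooler(self, vllm_config: "VllmConfig", prefix: str = ""):
        ...
```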

@yyzxw (Contributor, Author) commented Sep 29, 2025

@noooop Why should I add `@default_pooling_type("ALL")` to /vllm/model_executor/models/adapters.py#L390? I tried adding it, but nothing changed; I still get the error.

@yyzxw force-pushed the fix/model-not-support-pool-runner branch from 25198aa to 3bc918f on September 29, 2025 09:46
@noooop (Collaborator) commented Sep 29, 2025

> @noooop Why should I add `@default_pooling_type("ALL")` to /vllm/model_executor/models/adapters.py#L390? I tried adding it, but nothing changed; I still get the error.

```python
if self.model_config:
    if self.model_config.pooler_config:
        pooling_type = self.model_config.pooler_config.pooling_type
        if pooling_type is None or pooling_type.lower() != "last":
            disable_chunked_prefill_reasons.append(
                "Only \"last\" pooling supports chunked "
                "prefill and prefix caching; disabling both.")
        if not getattr(self.model_config.hf_config, "is_causal", True):
            disable_chunked_prefill_reasons.append(
                "Only models using causal attention supports chunked "
                "prefill and prefix caching; disabling both.")
    elif self.model_config.is_encoder_decoder:
```

The check at L410 catches any pooling type other than "last" anyway, so "ALL" pooling will automatically turn off chunked prefill.
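
As a standalone illustration (the helper below is hypothetical, not vLLM code), any pooling type other than "last", including "ALL", trips that condition and disables chunked prefill:

```python
# Hypothetical helper mirroring the condition above; not part of vLLM.
def chunked_prefill_allowed(pooling_type: str | None) -> bool:
    # Only "last" pooling supports chunked prefill and prefix caching.
    return pooling_type is not None and pooling_type.lower() == "last"

assert chunked_prefill_allowed("last")
assert not chunked_prefill_allowed("ALL")  # reward models pool over all tokens
assert not chunked_prefill_allowed(None)
```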

Please find a way to make setting `@default_pooling_type("ALL")` work.

@yyzxw force-pushed the fix/model-not-support-pool-runner branch from 3bc918f to 38d8700 on October 9, 2025 02:23
@yyzxw force-pushed the fix/model-not-support-pool-runner branch from 38d8700 to ea0edbb on October 9, 2025 02:28
@DarkLight1337 added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Oct 9, 2025
@noooop (Collaborator) commented Oct 13, 2025

@yyzxw Thanks for your contribution!

The language-models-test-extended-pooling failure is unrelated to this PR; it will be fixed in #25817.

@DarkLight1337 Is there anything else that needs to be modified in this PR?

@DarkLight1337 (Member) commented:

No

@vllm-bot merged commit 46ad739 into vllm-project:main on Oct 13, 2025
53 of 55 checks passed
This pull request (vllm-project#25840, vllm-project#25855) was subsequently referenced by cherry-picked commits in downstream forks: bbeckca/vllm (Oct 13, 2025), 1994/vllm (Oct 14), Dhruvilbhatt/vllm (Oct 14), bbartels/vllm (Oct 16), lywa1998/vllm (Oct 20), alhridoy/vllm (Oct 24), xuebwang-amd/vllm (Oct 24, twice), and 0xrushi/vllm (Oct 26, twice). Each commit carries Signed-off-by: zxw <1020938856@qq.com> and Co-authored-by: wang.yuqi <noooop@126.com>.

Labels

ready (ONLY add when PR is ready to merge/full CI is needed), v1

Development

Successfully merging this pull request may close these issues:

[Bug]: Error in V1 engine when run with --convert reward (#25840)

4 participants