
Conversation

weijinqian0 (Collaborator) commented Aug 30, 2025

vllm-project/vllm@69f4635 changed the vl input usage; this PR fixes the related UT failure.

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request fixes a unit test error by adapting the creation of Request objects to handle API changes across vllm versions. The changes introduce conditional logic based on the vllm version. My review focuses on improving the maintainability of this new logic by refactoring duplicated code and making the version checks more extensible; both changes suffer from significant code duplication that can be addressed by extracting the common parameters.

Comment on lines +57 to +75
    if vllm_version_is("0.10.1.1") or vllm_version_is("0.10.1"):
        request = Request(request_id=f"{i}",
                          prompt_token_ids=[i] * num_tokens,
                          sampling_params=sampling_params,
                          multi_modal_kwargs=None,
                          multi_modal_placeholders=None,
                          multi_modal_hashes=None,
                          eos_token_id=EOS_TOKEN_ID,
                          pooling_params=None,
                          block_hasher=get_request_block_hasher(
                              block_size, hash_fn))
    else:
        request = Request(request_id=f"{i}",
                          prompt_token_ids=[i] * num_tokens,
                          sampling_params=sampling_params,
                          eos_token_id=EOS_TOKEN_ID,
                          pooling_params=None,
                          block_hasher=get_request_block_hasher(
                              block_size, hash_fn))

high

There is significant code duplication between the if and else blocks when creating the Request object. This can make the code harder to maintain and prone to errors if one branch is updated but the other is not. You can refactor this by extracting the common arguments into a dictionary to reduce redundancy. The version check can also be made more concise and extensible.

        request_kwargs = {
            "request_id": f"{i}",
            "prompt_token_ids": [i] * num_tokens,
            "sampling_params": sampling_params,
            "eos_token_id": EOS_TOKEN_ID,
            "pooling_params": None,
            "block_hasher": get_request_block_hasher(block_size, hash_fn),
        }
        if any(vllm_version_is(v) for v in ["0.10.1.1", "0.10.1"]):
            request_kwargs.update({
                "multi_modal_kwargs": None,
                "multi_modal_placeholders": None,
                "multi_modal_hashes": None,
            })
        request = Request(**request_kwargs)

Comment on lines +163 to +183
    if vllm_version_is("0.10.1.1") or vllm_version_is("0.10.1"):
        req = Request(
            request_id=f"id-{request_id}",
            prompt_token_ids=prompt_token_ids,
            sampling_params=sampling_params,
            multi_modal_kwargs=None,
            multi_modal_placeholders=None,
            multi_modal_hashes=None,
            pooling_params=[],
            eos_token_id=EOS_TOKEN_ID,
            block_hasher=block_hasher,
        )
    else:
        req = Request(
            request_id=f"id-{request_id}",
            prompt_token_ids=prompt_token_ids,
            sampling_params=sampling_params,
            pooling_params=[],
            eos_token_id=EOS_TOKEN_ID,
            block_hasher=block_hasher,
        )

high

Similar to the other change in this PR, there is significant code duplication here when creating the Request object. This can be refactored to improve maintainability by extracting common arguments into a dictionary. The version check can also be made more concise and extensible.

    request_kwargs = {
        "request_id": f"id-{request_id}",
        "prompt_token_ids": prompt_token_ids,
        "sampling_params": sampling_params,
        "pooling_params": [],
        "eos_token_id": EOS_TOKEN_ID,
        "block_hasher": block_hasher,
    }
    if any(vllm_version_is(v) for v in ["0.10.1.1", "0.10.1"]):
        request_kwargs.update({
            "multi_modal_kwargs": None,
            "multi_modal_placeholders": None,
            "multi_modal_hashes": None,
        })
    req = Request(**request_kwargs)
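Taking the suggestion one step further, the version gating could live in a single shared test helper so both call sites stay in sync and a future version bump only touches one place. A minimal sketch, assuming the import paths below match this repo's layout; the helper name make_test_request is hypothetical:

    # Hypothetical shared helper; the import paths are assumptions based on how
    # the existing tests refer to Request and vllm_version_is.
    from vllm.v1.request import Request
    from vllm_ascend.utils import vllm_version_is

    # Versions whose Request constructor still expects the multi_modal_* kwargs.
    _LEGACY_MM_VERSIONS = ("0.10.1.1", "0.10.1")


    def make_test_request(**kwargs) -> Request:
        """Build a Request, adding legacy-only multi-modal kwargs when needed."""
        if any(vllm_version_is(v) for v in _LEGACY_MM_VERSIONS):
            kwargs.setdefault("multi_modal_kwargs", None)
            kwargs.setdefault("multi_modal_placeholders", None)
            kwargs.setdefault("multi_modal_hashes", None)
        return Request(**kwargs)

With such a helper, each test site reduces to a single call, for example:

    req = make_test_request(
        request_id=f"id-{request_id}",
        prompt_token_ids=prompt_token_ids,
        sampling_params=sampling_params,
        pooling_params=[],
        eos_token_id=EOS_TOKEN_ID,
        block_hasher=block_hasher,
    )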

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>

codecov bot commented Aug 30, 2025

Codecov Report

❌ Patch coverage is 75.00000% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.32%. Comparing base (600b08f) to head (f6c8351).
⚠️ Report is 6 commits behind head on main.

Files with missing lines          Patch %   Lines
tests/ut/core/test_scheduler.py   66.66%    1 Missing ⚠️
tests/ut/kv_connector/utils.py    66.66%    1 Missing ⚠️

❌ Your patch status has failed because the patch coverage (75.00%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2644      +/-   ##
==========================================
- Coverage   72.61%   72.32%   -0.29%     
==========================================
  Files         147      147              
  Lines       21805    21870      +65     
==========================================
- Hits        15833    15818      -15     
- Misses       5972     6052      +80     
Flag        Coverage Δ
unittests   72.32% <75.00%> (-0.29%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

wangxiyuan merged commit 6f1047d into vllm-project:main Aug 30, 2025
15 of 16 checks passed
845473182 pushed a commit to raindaywhu/vllm-ascend that referenced this pull request Sep 1, 2025
…into main_829

* 'main_829' of https://github.com/raindaywhu/vllm-ascend:
  [torchair]remove aicpu op (vllm-project#2640)
  bugfix for torchair graph (vllm-project#2639)
  [CI] fix UT error. (vllm-project#2644)
  [3/N][Feat][Graph] Support `all-to-all` and quantized models with ACL Graph (vllm-project#2614)
  [Bugfix] Fix mc2 operator error in aclgraph + ep<16 scenario (vllm-project#2609)
wenba0 pushed a commit to wenba0/vllm-ascend that referenced this pull request Sep 5, 2025
vllm-project/vllm@69f4635 changed the vl input usage; this PR fixes the related UT failure.

- vLLM version: v0.10.1.1
- vLLM main:
vllm-project/vllm@d660c98

---------

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
Signed-off-by: lijiaojiao <lijiaojiao990304@163.com>
weijinqian0 deleted the main_fix_ut branch September 8, 2025 01:11
wangxiaoteng888 pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Sep 25, 2025
chopper0126 pushed a commit to chopper0126/vllm-ascend that referenced this pull request Sep 26, 2025
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025