Conversation

DarkLight1337 (Member) commented Sep 20, 2025

Purpose

Use set_default_torch_num_threads to avoid the hanging issue in the AWQ test on V1

Test Plan

Test Result


DarkLight1337 added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Sep 20, 2025
gemini-code-assist bot (Contributor) left a comment:

Code Review

This pull request addresses a hanging issue in the AWQ test on vLLM's V1 engine by limiting the number of PyTorch threads to one during model initialization. The approach is sound and correctly resolves the likely cause of the hang in a multiprocessing context. The changes also correctly re-enable the test for V1 by removing the environment variable override. I've added one comment to refactor the duplicated code blocks for running the source and quantized models into a helper function. This will improve the code's maintainability and readability.
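
For context, the helper is a small context manager around PyTorch's thread controls. A minimal sketch of the idea, using the standard `torch.get_num_threads`/`torch.set_num_threads` APIs (not necessarily vLLM's exact implementation):

```python
import contextlib

import torch


@contextlib.contextmanager
def set_default_torch_num_threads(num_threads: int):
    """Temporarily cap PyTorch's intra-op thread count."""
    old_num_threads = torch.get_num_threads()
    torch.set_num_threads(num_threads)
    try:
        yield
    finally:
        torch.set_num_threads(old_num_threads)
```

Entering `set_default_torch_num_threads(1)` before engine construction means any worker processes are forked while the parent holds only a single-threaded pool, which is what sidesteps the hang.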

DarkLight1337 (Member, Author) commented:

Actually let me edit this PR to migrate more tests

Isotr0py (Member) commented:

> Actually let me edit this PR to migrate more tests

So the hanging issue also appears on tests that were just migrated to V1 recently? Perhaps there's an issue with process forking...
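
The usual suspect here is the fork-plus-OpenMP interaction: a child forked from a parent holding a warm multi-threaded intra-op pool can block on its first tensor op. A minimal sketch of the mitigation pattern (illustrative only, not code from this PR):

```python
import multiprocessing as mp

import torch


def child():
    # The first tensor op in the child exercises the thread pool state
    # inherited across fork; this is where hangs tend to show up.
    torch.ones(512, 512).sum()


if __name__ == "__main__":
    # Keep PyTorch single-threaded while forking, so the child never
    # depends on OpenMP worker threads that do not survive the fork.
    torch.set_num_threads(1)
    proc = mp.get_context("fork").Process(target=child)
    proc.start()
    proc.join()
```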

DarkLight1337 (Member, Author) commented:

Yes, it seems this context manager is only necessary for V1.

DarkLight1337 changed the title from "[CI/Build] Avoid hanging AWQ test on V1" to "[CI/Build] Enable more multimodal tests in V1" on Sep 20, 2025
mergify bot added the multi-modality (Related to multi-modality (#4194)) and qwen (Related to Qwen models) labels on Sep 20, 2025
DarkLight1337 (Member, Author) commented Sep 20, 2025

cc @WoosukKwon: after this PR, multimodal model tests should be fully on V1 now

DarkLight1337 changed the title from "[CI/Build] Enable more multimodal tests in V1" to "[CI/Build] Enable the remaining multimodal tests in V1" on Sep 20, 2025
DarkLight1337 enabled auto-merge (squash) on September 20, 2025 07:33
DarkLight1337 changed the title from "[CI/Build] Enable the remaining multimodal tests in V1" to "[V0 Deprecation] Enable the remaining multimodal tests in V1" on Sep 20, 2025
DarkLight1337 enabled auto-merge (squash) on September 20, 2025 07:35
DarkLight1337 moved this to In Progress in V0 Deprecation on Sep 20, 2025
WoosukKwon (Collaborator) left a comment:

Thanks for doing this!

DarkLight1337 (Member, Author) commented Sep 20, 2025

The llava-onevision-transformers test still hangs even after I expanded the scope of set_default_torch_num_threads to cover inference as well; maybe we just have to disable it for now...
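
For reference, the expanded scope looks roughly like the following; the import path and model name are assumptions for illustration, not taken from the diff:

```python
from vllm import LLM, SamplingParams
from vllm.utils import set_default_torch_num_threads

with set_default_torch_num_threads(1):
    # Engine construction (where worker processes are forked) is covered...
    llm = LLM(model="Qwen/Qwen2-VL-2B-Instruct", max_model_len=4096)
    # ...and with the expanded scope, inference now runs inside the
    # single-threaded region as well.
    outputs = llm.generate(["Hello"], SamplingParams(max_tokens=8))
```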

DarkLight1337 merged commit bef180f into vllm-project:main on Sep 20, 2025
20 checks passed
DarkLight1337 deleted the awq-test-v1 branch on September 20, 2025 17:50
github-project-automation bot moved this from In Progress to Done in V0 Deprecation on Sep 20, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025