
Conversation

@dvrogozh (Contributor) commented on Jul 9, 2025

Purpose

The spawn start method is required when running the XPU backend with multiprocessing. There are two places in vLLM where this needs to be fixed:

  • One in vllm/utils
  • Another in tests/utils

The test-side fix adjusts the create_new_process_for_each_test decorator, which then has to be applied to the affected tests. Some tests are already marked with it thanks to earlier work for ROCm; in other cases the decorator is still missing, or fork_new_process_for_each_test is used instead.
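To make the requirement concrete, here is a minimal, self-contained sketch (illustrative only, not vLLM code; the worker body is an assumption) of selecting the spawn start method for a child process that touches the XPU backend:

```python
# Minimal illustration of why the start method matters (not vLLM code).
import multiprocessing as mp


def worker() -> None:
    import torch

    # In a child created with 'fork', touching the XPU backend after the parent
    # has initialized it can raise "Cannot re-initialize XPU in forked
    # subprocess"; with 'spawn' the child starts from a fresh interpreter and
    # initializes XPU itself.
    print(torch.xpu.is_available() if hasattr(torch, "xpu") else False)


if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # instead of the default 'fork' on Linux
    p = ctx.Process(target=worker)
    p.start()
    p.join()
```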

Test Plan

This commit unlocks running a number of tests on XPU and makes it possible to look into the actual runtime issues. The behavior of the commit can be tried on these tests (see the sketch after this list):

  • tests/v1/engine/test_llm_engine.py::test_engine_metrics
  • tests/v1/e2e/test_cascade_attention.py
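
For example, the two tests above could be driven from Python roughly as follows (illustrative only; assumes pytest and an XPU-enabled vLLM build are installed):

```python
# Illustrative: run the two tests mentioned above via pytest's Python API.
import pytest

pytest.main([
    "-s",
    "tests/v1/engine/test_llm_engine.py::test_engine_metrics",
    "tests/v1/e2e/test_cascade_attention.py",
])
```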

Test Result

Error happening before the fix:

RuntimeError: Cannot re-initialize XPU in forked subprocess. To use XPU with multiprocessing, you must use the 'spawn' start method

After the fix:

The vLLM core gets initialized and the tests start to run. Some pass and some fail; the commit essentially unlocks testing on XPU, revealing the actual issues.

CC: @Liangliang-Ma

github-actions bot commented on Jul 9, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

gemini-code-assist bot left a comment

Summary of Changes

Hello @dvrogozh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements necessary changes to enable the use of the XPU backend with multiprocessing in vLLM. The core issue addressed is the requirement to use the 'spawn' multiprocessing start method with XPU, as opposed to 'fork', to prevent re-initialization errors. By adjusting process creation logic in both testing utilities and core vLLM utilities, this PR unblocks further testing and development on XPU platforms.

Highlights

  • XPU Multiprocessing Support: Modified the create_new_process_for_each_test decorator in tests/utils.py to explicitly include XPU platforms when determining whether to default to the 'spawn' multiprocessing method. This aligns XPU behavior with ROCm platforms, which also require 'spawn'.
  • Test Decorator Update: Updated tests/v1/e2e/test_cascade_attention.py to use the create_new_process_for_each_test() decorator instead of the deprecated fork_new_process_for_each_test. This ensures that tests requiring a new process are correctly initialized with the appropriate 'spawn' method for XPU.
  • XPU Initialization Check: Introduced a new utility function xpu_is_initialized() in vllm/utils/__init__.py. This function checks if the torch.xpu backend is compiled and currently initialized, providing a robust way to detect XPU activity.
  • Forced Spawn for XPU: Enhanced the _maybe_force_spawn() function in vllm/utils/__init__.py to force the 'spawn' multiprocessing start method if the XPU backend is detected as initialized. This directly addresses the RuntimeError encountered when XPU is re-initialized in forked subprocesses (a simplified sketch of both utilities follows this list).
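
A simplified sketch of what these two helpers could look like is shown below. This is not the exact code in vllm/utils/__init__.py: the hasattr guard and the call to multiprocessing.set_start_method are illustrative assumptions; only the function names and the use of torch.xpu initialization state come from the description above.

```python
# Simplified, illustrative sketch -- not the exact vllm/utils/__init__.py code.
import multiprocessing

import torch


def xpu_is_initialized() -> bool:
    """Best-effort check that the torch XPU backend exists and is initialized."""
    if not hasattr(torch, "xpu"):
        return False
    return torch.xpu.is_initialized()


def _maybe_force_spawn() -> None:
    """Switch the default start method to 'spawn' if forking would break XPU."""
    if xpu_is_initialized() and multiprocessing.get_start_method(allow_none=True) != "spawn":
        # A forked child cannot re-initialize XPU, so force 'spawn' before any
        # worker processes are created.
        multiprocessing.set_start_method("spawn", force=True)
```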

mergify bot added the v1 label on Jul 9, 2025
gemini-code-assist bot left a comment

Code Review

This pull request correctly enables the spawn multiprocessing start method for the XPU backend, which is a necessary fix to avoid runtime errors. The changes are well-targeted to vllm/utils and tests/utils, and the update to test_cascade_attention.py demonstrates the correct usage. The overall logic is sound. I've included one suggestion to improve code readability in tests/utils.py.
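
For context, the platform check being discussed could look roughly like the sketch below. This is an assumption-laden illustration rather than the actual tests/utils.py code; it assumes vllm.platforms.current_platform exposes is_rocm() and is_xpu() helpers.

```python
# Rough illustration of a platform check a test decorator might use when
# deciding to default to the 'spawn' start method. Not the actual
# tests/utils.py implementation.
from vllm.platforms import current_platform


def _default_to_spawn() -> bool:
    # ROCm and XPU backends cannot be safely re-initialized in a forked child,
    # so tests on those platforms run each case in a spawned process.
    return current_platform.is_rocm() or current_platform.is_xpu()
```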

@Liangliang-Ma (Contributor) commented:

Thanks for fixing that. This one is very clear and reasonable.

@DarkLight1337 enabled auto-merge (squash) on Jul 9, 2025, 06:11
github-actions bot added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Jul 9, 2025
@DarkLight1337 disabled auto-merge on Jul 9, 2025, 07:34
vllm-bot merged commit e760fce into vllm-project:main on Jul 9, 2025
67 of 73 checks passed
ant-yy pushed a commit to ant-yy/vllm that referenced this pull request Jul 9, 2025
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 27, 2025
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
