Conversation

@markmc markmc (Member) commented Mar 10, 2025

See #14512 (comment)

Enable `tests/v1/entrypoints/llm/test_struct_output_generate.py` in CI.

WIP while we debug why this sometimes fails with:

```
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```

The failure can be reproduced in some environments with:

```
VLLM_USE_V1=1 pytest -s -v 'tests/v1/entrypoints/llm/test_struct_output_generate.py::test_guided_grammar_ebnf[xgrammar]' 'tests/v1/entrypoints/llm/test_struct_output_generate.py::test_guided_grammar_lark[xgrammar]'
```
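
For reference, the error comes from PyTorch's rule that a CUDA context initialized in a parent process cannot be reused in a child created with the `fork` start method; a child created with `spawn` starts a fresh interpreter and initializes CUDA itself. The sketch below is illustrative only (not vLLM code and not the fix proposed in this PR); it assumes a CUDA-capable machine with `torch` installed, and the `child` function name is made up for the example:

```python
# Minimal sketch of the fork vs. spawn behavior behind the error above.
# Assumptions: torch is installed and a CUDA device is available.
import multiprocessing as mp

import torch


def child() -> None:
    # With a "fork" context this raises:
    #   RuntimeError: Cannot re-initialize CUDA in forked subprocess. ...
    # With a "spawn" context the child initializes CUDA from scratch and succeeds.
    torch.cuda.init()
    print(torch.cuda.get_device_name(0))


if __name__ == "__main__":
    # Parent touches CUDA first, roughly as the pytest process does before
    # any engine subprocess is created.
    torch.cuda.init()

    ctx = mp.get_context("spawn")  # changing this to "fork" reproduces the failure
    p = ctx.Process(target=child)
    p.start()
    p.join()
```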

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the ci/build label Mar 10, 2025
@markmc markmc added the ready label (ONLY add when PR is ready to merge/full CI is needed) Mar 10, 2025
@markmc markmc (Member Author) commented Mar 11, 2025

Sure enough, v1-test failed with:

```
[2025-03-10T22:48:18Z] ERROR 03-10 22:48:18 [core.py:324]   File "/opt/venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
[2025-03-10T22:48:18Z] ERROR 03-10 22:48:18 [core.py:324]     raise RuntimeError(
[2025-03-10T22:48:18Z] ERROR 03-10 22:48:18 [core.py:324] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
[2025-03-10T22:48:18Z] ERROR 03-10 22:48:18 [core.py:324]
```

@markmc markmc (Member Author) commented Mar 11, 2025

Closing in favor of #14619

@markmc markmc closed this Mar 11, 2025