[Doc] Update pip install instruction for testing dependencies #17963
Conversation
Signed-off-by: zt2370 <ztang2370@gmail.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a partial run is triggered automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
```diff
-pip install -r requirements/dev.txt
+pip install -r requirements/dev.txt --extra-index-url https://download.pytorch.org/whl/cu128
```
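For background on why the extra index is needed: the `+cu128` suffix is a PEP 440 local version identifier, and wheels carrying it are published on the PyTorch wheel index rather than PyPI. A stdlib-only sketch of how such a pin decomposes:

```python
# Split a pinned requirement like "torch==2.7.0+cu128" into its package
# name, base version, and local version tag (the part after "+"), which
# here marks the CUDA-12.8-specific build.
spec = "torch==2.7.0+cu128"
name, _, version = spec.partition("==")
base, _, local = version.partition("+")
print(name, base, local)  # torch 2.7.0 cu128
```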
This only applies for developers using CUDA though. How about other backends?
Thanks for pointing this out! However, since `torch==2.7.0+cu128` is now specified in `requirements/test.txt` (updated via #17576 last week), installing it requires access to the PyTorch CUDA 12.8 wheels via `pip install -r requirements/dev.txt --extra-index-url https://download.pytorch.org/whl/cu128`. I can think of two options:
- Make `requirements/test.txt` backend-agnostic: change `requirements/test.txt` back to use a backend-neutral spec, `torch==2.7.0`. Developers using CUDA can then install their desired CUDA variant manually, e.g. `pip install torch==2.7.0+cu128 --extra-index-url https://download.pytorch.org/whl/cu128`, and we can add a note in the README or contributing guide like:
  > Note: If you're using CUDA and install `torch==2.7.0` without a specific build, it will default to the CUDA 12.6 variant. However, CUDA 12.6 builds currently have known issues. It's recommended to install the CUDA 12.8 build explicitly. See the PyTorch installation guide.
- Keep `torch==2.7.0+cu128`, but document it clearly: if we keep the CUDA-specific spec in `requirements/test.txt`, we should update the doc to say `pip install -r requirements/dev.txt --extra-index-url https://download.pytorch.org/whl/cu128` and add a note:
  > Note: This assumes you're using CUDA 12.8. For other environments (CPU-only, different CUDA versions, ROCm), you may need to adjust the torch installation manually.
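As a small illustration of option 2's note, the "adjust the torch installation" step amounts to pointing pip at a different wheel index per backend. A hedged Python sketch (only `cu128` appears in this PR; other tags such as `cpu` are assumptions based on PyTorch's published wheel-index layout):

```python
# Sketch: build the --extra-index-url value for a given backend tag.
# Only "cu128" is used in this PR; other tags are illustrative assumptions.
PYTORCH_INDEX_BASE = "https://download.pytorch.org/whl"

def torch_index_url(backend_tag: str) -> str:
    """Return the wheel index URL for a backend tag such as 'cu128' or 'cpu'."""
    return f"{PYTORCH_INDEX_BASE}/{backend_tag}"

print(torch_index_url("cu128"))  # https://download.pytorch.org/whl/cu128
```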
Let me know which direction you'd prefer. I'm happy to update the PR accordingly.
cc @houseroad @mgoin
Signed-off-by: zt2370 <ztang2370@gmail.com>
Updated by adding a comment:
```bash
pip install -r requirements/dev.txt
# The following command assumes CUDA 12.8. For CPU-only, other CUDA versions,
# or ROCm, etc., adjust the torch installation as needed.
```
Hi there, I added the comment to make it clearer.
@DarkLight1337 @houseroad @mgoin
This pull request has merge conflicts that must be resolved before it can be merged.
Update the pip install dev requirements instruction in the doc, following up on #17576.
Running `pip install -r requirements/dev.txt` will give an error:
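The error arises because the pinned `torch==2.7.0+cu128` wheel exists only on the PyTorch index, not on PyPI, so a plain install has nowhere to find it. A toy model of pip's multi-index lookup (the index contents and `resolve` helper are illustrative, not pip's actual implementation):

```python
# Toy model of why --extra-index-url matters: PyPI hosts only the default
# build, while the PyTorch index hosts the "+cu128" local-version build.
PYPI = {"torch": ["2.7.0"]}
PYTORCH_CU128 = {"torch": ["2.7.0+cu128"]}

def resolve(spec, indexes):
    """Return the pinned version if any index offers it, else None."""
    name, _, version = spec.partition("==")
    for index in indexes:
        if version in index.get(name, []):
            return version
    return None

print(resolve("torch==2.7.0+cu128", [PYPI]))                 # None (fails)
print(resolve("torch==2.7.0+cu128", [PYPI, PYTORCH_CU128]))  # 2.7.0+cu128
```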