Conversation


@nvmbreughe nvmbreughe commented Oct 9, 2025

📌 Description

Adds `--force-reinstall` when installing `torch` in the Docker build. This ensures we install the intended version, even if `requirements.txt` already installed a different one.
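For reference, the patched install step ends up looking roughly like this. This is a sketch reconstructed from the diff in the review thread, not the verbatim build script; the requirements path and the `${CUDA_VERSION}` index URL come from the existing commands, everything else is an assumption:

```shell
# Sketch of the patched install step (not the verbatim build script).
# requirements.txt may pull in torch from the default index first.
pip3 install -r /install/requirements.txt

# --force-reinstall tells pip to reinstall torch from the CUDA-specific
# index even when the dependency is already satisfied, so the wheel
# pulled in by requirements.txt cannot shadow the intended build.
pip3 install --force-reinstall torch \
    --index-url "https://download.pytorch.org/whl/${CUDA_VERSION}"
```

The trade-off, raised in the review thread, is that `torch` may be downloaded and installed twice.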

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

@gemini-code-assist
Contributor

Summary of Changes

Hello @nvmbreughe, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a potential issue in Docker builds where the intended PyTorch version might not be correctly installed due to conflicts with requirements.txt. By introducing the --force-reinstall flag during the PyTorch installation step, the change guarantees that the specified version is always applied, ensuring a consistent and correct environment for subsequent operations.

Highlights

  • PyTorch Installation: Added the --force-reinstall flag to the pip install torch command within the Docker build script.
  • Dependency Management: Ensures that the desired PyTorch version is installed, overriding any potentially conflicting versions pulled by requirements.txt earlier in the Docker build process.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@nvmbreughe nvmbreughe mentioned this pull request Oct 9, 2025

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly identifies the need to ensure a specific version of torch is installed in the Docker images, overriding what might be installed from requirements.txt. The proposed solution of using --force-reinstall is functional. However, it is inefficient as it leads to torch being installed twice. I've provided a suggestion to improve the Docker build efficiency by filtering torch out of the requirements.txt installation step and then installing it separately. This avoids the redundant download and installation, speeding up the build process.

Comment on lines 26 to +27:

```diff
 pip3 install -r /install/requirements.txt
-pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
+pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
```
Severity: medium

While using --force-reinstall works, it's inefficient because it installs torch twice (once from requirements.txt with the default index, and then again from the specific index). This can significantly slow down the Docker build process as torch is a large package.

A more efficient approach is to avoid installing torch from requirements.txt in the first place. You can filter it out and then install it separately from the correct index. This avoids the unnecessary installation and uninstallation cycle.

Suggested change:

```diff
-pip3 install -r /install/requirements.txt
-pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
+pip3 install -r <(grep -vE '^torch\b' /install/requirements.txt)
+pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
```
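As a sanity check of the suggested filter: `grep -vE '^torch\b'` drops a bare `torch` pin but keeps packages that merely start with `torch`, such as `torchvision`, because `\b` demands a word boundary immediately after the match. The sample requirements file below is hypothetical, for illustration only. Note also that the `<(...)` process substitution requires bash; in a plain `sh` build step you would write the filtered list to a temporary file instead.

```shell
# Hypothetical requirements file for illustration only.
cat > /tmp/reqs.txt <<'EOF'
torch==2.3.0
torchvision
numpy>=1.24
EOF

# "torch==2.3.0" is filtered out ("=" creates a word boundary after
# "torch"), while "torchvision" survives (no boundary between "h" and "v").
grep -vE '^torch\b' /tmp/reqs.txt
# prints:
# torchvision
# numpy>=1.24
```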

Collaborator

@yzh119 yzh119 left a comment


LGTM

@yzh119 yzh119 enabled auto-merge (squash) October 9, 2025 19:56
@yzh119 yzh119 merged commit c3ff7e7 into flashinfer-ai:main Oct 9, 2025
14 checks passed
yzh119 added a commit that referenced this pull request Oct 10, 2025

## 📌 Description

GitHub's default runner has too little disk space, which results in OOM issues:
https://github.com/flashinfer-ai/flashinfer/actions/runs/18389642821/job/52402828516

## 🔍 Related Issues

#1901 

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

cc @nvmbreughe