[Bugfix] [pytorch] Patch AOTAutogradCache._get_shape_env #17142
Conversation
Force-pushed from 3393d47 to e0f6153
LGTM assuming the tests pass
Force-pushed from e0f6153 to 4499dd6
Can we add a unit test in a follow-up PR to ensure we catch this issue in the future?
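A hedged sketch of what such a regression test could look like, assuming pytest (vLLM's test framework) and that the module path below is where AOTAutogradCache lives; the test name and structure are hypothetical, not part of this PR:

```python
import pytest


def test_aot_autograd_cache_patch_point_exists():
    # Hypothetical regression test: on PyTorch builds after
    # pytorch/pytorch#151563, AOTAutogradCache defines its own
    # _get_shape_env, and vLLM's patching must cover it. Skip on
    # older PyTorch where the module layout differs.
    autograd_cache = pytest.importorskip(
        "torch._functorch._aot_autograd.autograd_cache")
    cache_cls = autograd_cache.AOTAutogradCache
    if hasattr(cache_cls, "_get_shape_env"):
        assert callable(cache_cls._get_shape_env)
```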
Could you fix the pre-commit?
Force-pushed from 4499dd6 to 497e9af
Signed-off-by: James Wu <jjwu@meta.com>
Force-pushed from 497e9af to 450f999
Description
After https://github.com/pytorch/pytorch/pull/151563/files, vLLM needs to patch an extra `_get_shape_env()` function when running inductor, because AOTAutogradCache now uses its own shape-env function. The implementation is technically shared at `torch._inductor.codecache.GuardedCache._get_shape_env`, but adding a separate patch preserves backward compatibility with PyTorch 2.6.
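For context, here is a minimal sketch of the shape of such a patch, assuming it is installed with `unittest.mock.patch` around compilation; the helper names, the `_get_shape_env` stub, and the exact patch targets are illustrative assumptions, not the PR's literal diff:

```python
import contextlib
from unittest.mock import patch


def _get_shape_env():
    # Hypothetical stand-in: vLLM would return its own ShapeEnv here.
    return None


@contextlib.contextmanager
def patched_shape_env():
    with contextlib.ExitStack() as stack:
        # Pre-existing patch point used by inductor's FX graph cache.
        stack.enter_context(patch(
            "torch._inductor.codecache.FxGraphCache._get_shape_env",
            _get_shape_env))
        # New patch point introduced by pytorch/pytorch#151563. The
        # hasattr guard keeps this a no-op on PyTorch 2.6, which has
        # no AOTAutogradCache._get_shape_env to patch.
        from torch._functorch._aot_autograd import autograd_cache
        if hasattr(autograd_cache.AOTAutogradCache, "_get_shape_env"):
            stack.enter_context(patch.object(
                autograd_cache.AOTAutogradCache, "_get_shape_env",
                staticmethod(_get_shape_env)))
        yield
```

Inside the context manager, both caches resolve the same replacement shape env, mirroring the shared `GuardedCache._get_shape_env` implementation noted above.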
Test Plan
The following program now runs when linking vLLM against PyTorch main.
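The PR's actual test program is not captured on this page; as a stand-in, here is a hypothetical minimal script of the same shape. It assumes `compilation_config` is accepted by `vllm.LLM` and that level 3 enables full torch.compile/inductor, which is what exercises the AOTAutogradCache path:

```python
# Hypothetical repro: compile a small model with inductor so the
# patched AOTAutogradCache._get_shape_env path is exercised end to end.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",  # any small model works
    compilation_config=3,       # assumed: level 3 = full inductor compilation
)
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```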