Fix incorrect filenames in vllm_compile_cache.py #15494
Conversation
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of CI tests runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
Force-pushed from f4d2ed7 to b76a150
Test Plan:
- I ran `vllm serve meta-llama/Llama-3.2-1B-Instruct` locally and verified that vllm_compile_cache.py's filenames look like the following:
```
/home/rzou/.cache/vllm/torch_compile_cache/a785fdbf18/rank_0_0/inductor_cache/w2/cw22szhsitonk3z4ekx476g3hzaet3vy5qmsoewpx3kgbokhcob2.py')}
```
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: <zou3519@gmail.com>
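As a side note for anyone reproducing this test plan: a quick way to sanity-check the recorded filenames is to scan vllm_compile_cache.py for absolute `.py` paths and confirm that each one exists on disk. The sketch below is only an illustration, not part of this PR; the helper name `check_cache_paths`, the exact cache-file location, and the assumption that artifact paths appear as plain absolute-path strings are all assumptions.

```python
import os
import re
import sys


def check_cache_paths(cache_file: str) -> int:
    """Hypothetical helper (not part of this PR): scan a vllm_compile_cache.py
    file for absolute *.py paths (e.g. Inductor artifacts under
    inductor_cache/) and report any that do not exist on disk."""
    with open(cache_file) as f:
        text = f.read()

    # Assumes artifact paths are recorded as plain absolute-path strings.
    paths = re.findall(r"/[\w./-]+\.py", text)
    missing = [p for p in paths if not os.path.exists(p)]

    for p in missing:
        print(f"missing: {p}")
    print(f"{len(paths) - len(missing)}/{len(paths)} recorded paths exist")
    return 1 if missing else 0


if __name__ == "__main__":
    # Example invocation (the cache path is illustrative, mirroring the
    # test plan above):
    #   python check_cache_paths.py \
    #     ~/.cache/vllm/torch_compile_cache/a785fdbf18/rank_0_0/vllm_compile_cache.py
    sys.exit(check_cache_paths(sys.argv[1]))
```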
Force-pushed from b76a150 to 69a2dd7
cc @youkaichao could you review this please?
Signed-off-by: youkaichao <youkaichao@gmail.com>
youkaichao left a comment
Thanks for the fix! I added the logic from PyTorch 2.5 for the case when `inductor_compiled_graph.current_callable` (the source of `file_path`) is not a function from a closure, hoping it will be more robust.
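For context, the general pattern under discussion looks roughly like the sketch below: try to read the compiled callable's source file from its code object, and fall back gracefully when the callable is not a plain Python function from a closure. This is only an illustration of the technique, not the actual code added in this PR; the function name `get_compiled_file_path` is hypothetical.

```python
import inspect
from typing import Callable, Optional


def get_compiled_file_path(current_callable: Callable) -> Optional[str]:
    """Hypothetical sketch: best-effort recovery of the file backing a
    compiled callable. Not the actual vLLM/Inductor implementation."""
    # Case 1: a plain Python function (e.g. one returned from a closure)
    # carries its source file on its code object.
    code = getattr(current_callable, "__code__", None)
    if code is not None:
        return code.co_filename

    # Case 2: wrapped callables sometimes expose the underlying function.
    wrapped = getattr(current_callable, "__wrapped__", None)
    if wrapped is not None and hasattr(wrapped, "__code__"):
        return wrapped.__code__.co_filename

    # Case 3: fall back to inspect, which also handles classes and methods;
    # give up with None when the source file cannot be determined.
    try:
        return inspect.getsourcefile(current_callable)
    except TypeError:
        return None
```

Returning None instead of raising leaves the caller free to skip the cache entry when the callable's origin cannot be resolved, which is the kind of robustness the comment above is aiming for.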
@zou3519 looking forward to the interface from inductor to get the hash key and the final file path from pytorch, so that we don't need to hack it in the future!
Signed-off-by: <zou3519@gmail.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Co-authored-by: youkaichao <youkaichao@gmail.com> Signed-off-by: xinyuxiao <xinyuxiao2024@gmail.com>
Signed-off-by: <zou3519@gmail.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Co-authored-by: youkaichao <youkaichao@gmail.com> Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
Signed-off-by: <zou3519@gmail.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Co-authored-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: <zou3519@gmail.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Co-authored-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: <zou3519@gmail.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Co-authored-by: youkaichao <youkaichao@gmail.com> Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>