[Misc][XPU] Avoid torch compile for XPU platform #10747
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these: 🚀
@youkaichao can you review this PR?
@@ -25,6 +26,8 @@ def load_general_plugins():
     os.environ['TORCHINDUCTOR_COMPILE_THREADS'] = '1'
     # see https://github.com/vllm-project/vllm/issues/10619
     torch._inductor.config.compile_threads = 1
+    if current_platform.is_xpu():
+        os.environ['TORCH_COMPILE_DISABLE'] = 'True'
any documentation for this env var?
Do you mean where this config comes from? If so, please refer here.
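For readers following the thread, here is a minimal sketch of what the variable does, assuming PyTorch 2.x semantics where torch._dynamo reads TORCH_COMPILE_DISABLE when its config module is first imported: setting it (the PR uses 'True') makes torch.compile a no-op, so the wrapped function runs eagerly and no Inductor/Triton compilation happens.

```python
# Sketch: demonstrate TORCH_COMPILE_DISABLE (assumes PyTorch 2.x).
import os

# Set before Dynamo is first used; torch._dynamo reads this variable
# when its config module is imported.
os.environ['TORCH_COMPILE_DISABLE'] = 'True'

import torch


def add(x, y):
    return x + y


compiled = torch.compile(add)
# With the variable set, this runs the original function eagerly:
# no tracing, no Inductor/Triton compilation.
print(compiled(torch.ones(2), torch.ones(2)))
```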
I think we need more Intel GPUs for testing. It has been waiting for a very long time. cc @jikunshang @yma11
We recently modified the Intel CPU/GPU queue to run against the "CI pipeline"; I guess the migration work is still in progress. -yuan
Signed-off-by: yan ma <yan.ma@intel.com>
Signed-off-by: yan ma <yan.ma@intel.com> Co-authored-by: youkaichao <youkaichao@gmail.com>
After the torch.jit.script --> torch.compile change in PR, Triton is eventually called during tensor-parallel execution on the XPU platform. We need to disable torch.compile() for XPU.
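For context, a minimal sketch of the patched plugin loader as a standalone snippet; the function body follows the diff above, while the vllm.platforms import path for current_platform is an assumption of this sketch.

```python
# Sketch of load_general_plugins() after this change; the import path
# for current_platform (vllm.platforms) is an assumption of this sketch.
import os

import torch
from vllm.platforms import current_platform


def load_general_plugins():
    # Force single-threaded Inductor compilation.
    os.environ['TORCHINDUCTOR_COMPILE_THREADS'] = '1'
    # see https://github.com/vllm-project/vllm/issues/10619
    torch._inductor.config.compile_threads = 1
    if current_platform.is_xpu():
        # torch.compile would route through Triton during tensor
        # parallelism on XPU; setting this makes it a no-op instead.
        os.environ['TORCH_COMPILE_DISABLE'] = 'True'
```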