
Conversation

@BoyuanFeng (Contributor) commented Oct 29, 2025

aot_compile is part of torch.compile. So use_aot_compile should respect VLLM_DISABLE_COMPILE_CACHE.

cc @zou3519 @ProExpertProg @zhxchen17

Signed-off-by: Boyuan Feng <boyuan@meta.com>
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request correctly ensures that use_aot_compile respects the VLLM_DISABLE_COMPILE_CACHE environment variable by adjusting its default behavior. The changes also include a nice refactoring that centralizes the logic for VLLM_DISABLE_COMPILE_CACHE into a dedicated function, removing code duplication. The implementation is clean, correct, and improves the maintainability of the code.
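The described behavior can be sketched roughly as follows. This is an illustrative sketch, not the actual vLLM code: the helper name `compile_cache_disabled` and the `VLLM_USE_AOT_COMPILE` default are assumptions for the example, while `VLLM_DISABLE_COMPILE_CACHE` is the environment variable named in the PR.

```python
import os


def compile_cache_disabled() -> bool:
    # Centralized check (the refactoring described above): a single place
    # decides whether torch.compile caching is disabled.
    return os.environ.get("VLLM_DISABLE_COMPILE_CACHE", "0") == "1"


def default_use_aot_compile() -> bool:
    # aot_compile is part of torch.compile, so its default is forced off
    # whenever the compile cache is disabled, regardless of the opt-in flag.
    if compile_cache_disabled():
        return False
    return os.environ.get("VLLM_USE_AOT_COMPILE", "0") == "1"
```

With this shape, any future flag that depends on the compile cache can call the same helper instead of re-reading the environment variable.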

@zhxchen17 (Contributor) left a comment


yeah this makes sense to me

@zou3519 zou3519 added the ready ONLY add when PR is ready to merge/full CI is needed label Oct 29, 2025
@zou3519 zou3519 enabled auto-merge (squash) October 29, 2025 14:43
@zou3519 zou3519 merged commit a9fe079 into vllm-project:main Oct 29, 2025
49 checks passed
MatthewBonanni pushed a commit to MatthewBonanni/vllm that referenced this pull request Oct 30, 2025
ilmarkov pushed a commit to neuralmagic/vllm that referenced this pull request Nov 7, 2025
ZhengHongming888 pushed a commit to ZhengHongming888/vllm that referenced this pull request Nov 8, 2025
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025
