
Commit 69a2dd7

Fix incorrect filenames in vllm_compile_cache.py

Test Plan:
- Ran `vllm serve meta-llama/Llama-3.2-1B-Instruct` locally and verified that vllm_compile_cache.py's filenames look like the following:
  ```
  /home/rzou/.cache/vllm/torch_compile_cache/a785fdbf18/rank_0_0/inductor_cache/w2/cw22szhsitonk3z4ekx476g3hzaet3vy5qmsoewpx3kgbokhcob2.py')}
  ```

Signed-off-by: rzou <zou3519@gmail.com>
1 parent 5d8e1c9 commit 69a2dd7

File tree: 1 file changed (+5, −1 lines)


vllm/compilation/compiler_interface.py

Lines changed: 5 additions & 1 deletion
```diff
@@ -228,7 +228,11 @@ def hijacked_compile_fx_inner(*args, **kwargs):
             inductor_compiled_graph = output
         if inductor_compiled_graph is not None:
             nonlocal file_path
-            file_path = inductor_compiled_graph.current_callable.__code__.co_filename  # noqa
+            # current_callable captures two things: the inputs_idx
+            # (0) and the function (1) from which we are going to pull
+            # the filename.
+            file_path = inductor_compiled_graph.current_callable.__closure__[  # noqa
+                1].cell_contents.__code__.co_filename  # noqa
             hash_str = inductor_compiled_graph._fx_graph_cache_key
         return output

```
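For context, the fix relies on standard CPython closure introspection: a function's `__closure__` holds one cell per name in `__code__.co_freevars` (alphabetical order), and each cell's `cell_contents` is the captured object. The sketch below illustrates the technique with hypothetical names (`make_wrapper`, `wrapped`, `compiled_graph` are illustrative, not vLLM internals):

```python
# Hypothetical sketch of the closure introspection the fix uses;
# make_wrapper / wrapped / compiled_graph are illustrative names,
# not vLLM's actual internals.

def make_wrapper(fn):
    inputs_idx = (0,)  # an extra captured variable, as in the real wrapper

    def current_callable(*args):
        # closes over both inputs_idx and wrapped
        _ = inputs_idx
        return wrapped(*args)

    wrapped = fn
    return current_callable


def compiled_graph(x):
    return x + 1


c = make_wrapper(compiled_graph)

# __closure__ cells line up with co_freevars (sorted alphabetically), so it
# is safer to look the cell up by name than to hard-code an index.
idx = c.__code__.co_freevars.index("wrapped")
inner = c.__closure__[idx].cell_contents
filename = inner.__code__.co_filename  # file where compiled_graph is defined
```

Looking the cell up via `co_freevars.index(...)` avoids the hard-coded `[1]` index breaking silently if the set of captured names changes.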