Commit ef0273a

fixed sha-256 for backends.py
Signed-off-by: WorldExplored <srreyansh.sethi@gmail.com>
Signed-off-by: vnadathur <glvikramn@gmail.com>
Co-Authored-By: Srreyansh Sethi <107075589+worldexplored@users.noreply.github.com>
Co-Authored-By: vnadathur <236933696+vnadathur@users.noreply.github.com>
1 parent 1175fed · commit ef0273a

File tree

1 file changed (+3, -5 lines)


vllm/compilation/backends.py

Lines changed: 3 additions & 5 deletions
@@ -556,12 +556,10 @@ def __call__(self, graph: fx.GraphModule, example_inputs) -> Callable:
         # that affects the compilation. if none of the factors change,
         # the cache dir will be the same so that we can reuse the compiled
         # graph.
-
         factors = [env_hash, config_hash, code_hash, compiler_hash]
-        hash_key = hashlib.md5(
-            str(factors).encode(), usedforsecurity=False
-        ).hexdigest()[:10]
-
+        # Use SHA-256 for cache key hashing to be consistent across
+        # compute_hash functions. Truncate for a short, stable dir name.
+        hash_key = hashlib.sha256(str(factors).encode()).hexdigest()[:10]
         cache_dir = os.path.join(
             envs.VLLM_CACHE_ROOT, "torch_compile_cache", hash_key
         )
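
For reference, here is a minimal, self-contained sketch of the cache-key scheme the new line implements. The factor values and the cache root below are illustrative placeholders, not vLLM's real env_hash/config_hash/code_hash/compiler_hash values or envs.VLLM_CACHE_ROOT:

```python
import hashlib
import os
import tempfile

# Placeholder factor values; in vllm/compilation/backends.py these come from
# environment, config, code, and compiler hashes computed elsewhere.
env_hash = "env-0123abcd"
config_hash = "cfg-4567ef01"
code_hash = "code-89ab2345"
compiler_hash = "inductor-cdef6789"

factors = [env_hash, config_hash, code_hash, compiler_hash]

# Same scheme as the new code: SHA-256 over the stringified factors,
# truncated to 10 hex characters for a short, stable directory name.
hash_key = hashlib.sha256(str(factors).encode()).hexdigest()[:10]

# Placeholder for envs.VLLM_CACHE_ROOT.
cache_root = os.path.join(tempfile.gettempdir(), "vllm_cache_demo")
cache_dir = os.path.join(cache_root, "torch_compile_cache", hash_key)
print(cache_dir)
```

Because every factor feeds into the digest, any change to the environment, config, code, or compiler produces a different hash_key and therefore a fresh cache directory, while identical factors keep mapping to the same directory so the compiled graph can be reused.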
