Commit 92709f7

Author: qqma

fix test failure on imports

Signed-off-by: qqma <qqma@amazon.com>

1 parent 9c6c81d, commit 92709f7

File tree: 1 file changed, +1 −1 lines


vllm/v1/attention/backends/flash_attn.py

Lines changed: 1 addition & 1 deletion
@@ -7,8 +7,8 @@
 import numpy as np
 import torch

-from vllm import envs
 from vllm import _custom_ops as ops
+from vllm import envs
 from vllm.attention.backends.abstract import (AttentionBackend, AttentionImpl,
                                               AttentionMetadata, AttentionType,
                                               is_quantized_kv_cache)
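Given the commit message ("fix test failure on imports") and the one-line swap, the failing check was most likely an import-order lint in the isort style: within a single import group, lines are ordered by the imported name, and `_custom_ops` sorts before `envs` because the ASCII underscore precedes lowercase letters. A minimal sketch of that ordering rule, using the two import lines from the diff (the `sort_key` helper is illustrative, not part of vLLM or isort):

```python
# Sketch of isort-style ordering within one import group: sort lines
# by the first name after "import". "_custom_ops" < "envs" because
# "_" (0x5F) precedes lowercase letters in ASCII, so the `envs`
# import must come after the `_custom_ops` import.
imports = [
    "from vllm import envs",
    "from vllm import _custom_ops as ops",
]

def sort_key(line: str) -> str:
    # Illustrative helper: key on the first imported name after "import".
    return line.split("import", 1)[1].strip().split()[0]

sorted_imports = sorted(imports, key=sort_key)
print(sorted_imports)
```

Running the sketch reproduces the post-patch order: `_custom_ops` first, then `envs`, matching the `-`/`+` lines in the hunk above.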
