Commit d3ea7fe (parent: 1f731ae)

update comment

Signed-off-by: Qubitium <qubitium@modelcloud.ai>

File tree: 1 file changed (+1, -1)

  • vllm/model_executor/layers/quantization/kernels/mixed_precision/marlin.py

vllm/model_executor/layers/quantization/kernels/mixed_precision/marlin.py (1 addition, 1 deletion)

@@ -116,7 +116,7 @@ def apply_weights(self,
                       x: torch.Tensor,
                       bias: Optional[torch.Tensor] = None) -> torch.Tensor:
         # marlin requires contiguous memory layout
-        # kv/prefill caching may cause x to be non-contiguous
+        # prefix caching may cause x to be non-contiguous
         x = x.contiguous()  # no-op if already contiguous

         c = self.config