
Commit 22fd4a6

Update vllm/config/__init__.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
1 parent 20d93dc commit 22fd4a6

File tree

1 file changed, +2 −2 lines

vllm/config/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -2120,9 +2120,9 @@ def __post_init__(self):
         if self.num_speculative_tokens is None:
             self.num_speculative_tokens = max_num_speculative_tokens
         else:
-            assert self.num_speculative_tokens < max_num_speculative_tokens, (
+            assert self.num_speculative_tokens <= max_num_speculative_tokens, (
                 "num_speculative_tokens should be None or must be less than or equal to the "
-                "max value in num_speculative_tokens_per_method.")
+                "max value in num_speculative_tokens_per_method.")

         # Automatically configure the method for ngram when "model" is used
         # instead of "method"
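
For context, the one-character change relaxes the boundary case: a user-set num_speculative_tokens equal to the per-method maximum now passes validation instead of tripping the assert, which matches the wording of the error message. Below is a minimal, self-contained sketch of that check; the dictionary values and the standalone (non-class) form are illustrative assumptions, not the actual vLLM SpeculativeConfig code.

# Minimal sketch of the relaxed check; the example values and the standalone
# form are assumptions, not the real SpeculativeConfig.__post_init__.
num_speculative_tokens_per_method = {"ngram": 5, "eagle": 3}
max_num_speculative_tokens = max(num_speculative_tokens_per_method.values())

num_speculative_tokens = 5  # user-configured value equal to the per-method max

if num_speculative_tokens is None:
    num_speculative_tokens = max_num_speculative_tokens
else:
    # Before this commit the comparison was "<", so the boundary case above
    # would have raised; "<=" accepts it.
    assert num_speculative_tokens <= max_num_speculative_tokens, (
        "num_speculative_tokens should be None or must be less than or "
        "equal to the max value in num_speculative_tokens_per_method.")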
