
Commit 8a4a2ef (parent: 8e9ffd3)

[V1][Core] using cached vocab_size for Structured Outputs (#14630)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

File tree: 1 file changed, +1 −1


vllm/v1/structured_output/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,6 @@
 class StructuredOutputManager:
 
     def __init__(self, vllm_config: VllmConfig):
-        self.vocab_size = vllm_config.model_config.get_vocab_size()
         self.vllm_config = vllm_config
         self.init_complete = False
 
@@ -41,6 +40,7 @@ def _delayed_init(self):
         tokenizer_group.ping()
 
         tokenizer = tokenizer_group.get_lora_tokenizer(None)
+        self.vocab_size = tokenizer.max_token_id
         if isinstance(tokenizer, MistralTokenizer):
             # NOTE: ideally, xgrammar should handle this accordingly.
             # refer to https://github.com/mlc-ai/xgrammar/blob/d77c0a0173ef14779c918e3be7966ba852f7910f/python/xgrammar/tokenizer_info.py#L98
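The change above moves the vocab_size lookup out of the constructor: instead of reading it eagerly from the model config, the manager waits until the tokenizer exists and takes the tokenizer's max_token_id, which reflects the actual token id space the structured-output backend must cover (the two can disagree, e.g. when a tokenizer defines extra or special tokens). A minimal sketch of this deferred-initialization pattern, using illustrative stand-in classes rather than vLLM's real ones:

```python
from dataclasses import dataclass


@dataclass
class FakeTokenizer:
    """Stand-in for a real tokenizer; only exposes max_token_id."""
    max_token_id: int


class Manager:
    """Sketch of a manager that defers vocab_size to delayed init."""

    def __init__(self, config: dict):
        self.config = config
        self.init_complete = False
        # Not set here: the tokenizer is not available yet, and the
        # model config's vocab_size may undercount the token id space.
        self.vocab_size = None

    def _delayed_init(self, tokenizer: FakeTokenizer) -> None:
        # Prefer the tokenizer's view of the token space over the
        # model config's vocab_size.
        self.vocab_size = tokenizer.max_token_id
        self.init_complete = True


mgr = Manager(config={"vocab_size": 32000})
mgr._delayed_init(FakeTokenizer(max_token_id=32003))
print(mgr.vocab_size)  # 32003, not the config's 32000
```

The design choice is the same as in the diff: any attribute derived from the tokenizer is populated in the delayed-init step, so the constructor never has to guess values that only the tokenizer can answer authoritatively.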
