[Bugfix] Fix grammar_bitmask IndexError caused by outdated apply_grammar_bitmask method
#2314
Conversation
…sk method Signed-off-by: shen-shanshan <467638484@qq.com>
Summary of Changes
Hello @shen-shanshan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've addressed a critical bug in the apply_grammar_bitmask method within the model runner. This change resolves an IndexError by updating how grammar bitmasks are reordered and applied to logits, particularly when speculative decoding is active. The core purpose is to ensure that structured output generation remains stable and correct under various decoding scenarios, preventing crashes and improving the reliability of the system.
Highlights
- Bug Fix for Grammar Bitmasking: I've fixed an IndexError that could occur in the apply_grammar_bitmask method. This was caused by an outdated approach to reordering the grammar bitmask, especially when speculative decoding was involved. The sorted_bitmask is now correctly sized based on logits.shape[0], preventing out-of-bounds access.
- Improved Speculative Decoding Compatibility: The logic within apply_grammar_bitmask has been significantly revised to properly account for speculative decode tokens. This ensures that the grammar_bitmask is applied to all relevant tokens, including those generated speculatively, by correctly calculating logit indices with a cumulative_offset.
- Refactored Bitmask Reordering: I've refactored the process of reordering the grammar_bitmask to align with the batch order. The new approach ensures that the bitmask accurately corresponds to the current batch's request and token sequence, improving the robustness of structured output generation.
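The fix described in the highlights above can be sketched as follows. This is an illustrative toy, not the actual vLLM code: the function name, the plain boolean mask (the real bitmask is bit-packed), and the dictionary arguments are all hypothetical. It shows the two key ideas the summary names: sizing the reordered mask from the number of logit rows rather than the number of requests, and using a cumulative offset so each request's speculative tokens map to the right rows.

```python
import math

def apply_grammar_bitmask_sketch(logits, grammar_mask, mask_row_for_req,
                                 batch_order, num_tokens_per_req):
    """Hypothetical sketch of the fixed reordering logic.

    logits: list of per-token logit rows (one row per generated token,
        including speculative tokens), each a list of floats over the vocab.
    grammar_mask: per-request allowed-token masks (True = allowed).
    mask_row_for_req: maps a request id to its row in grammar_mask.
    batch_order: request ids sorted by batch index.
    num_tokens_per_req: tokens this step per request (1 + speculative).
    """
    vocab = len(logits[0])
    out = [row[:] for row in logits]
    # The reordered mask is conceptually sized by len(logits) -- the total
    # token count -- which is what prevents the out-of-bounds IndexError.
    cumulative_offset = 0
    for req_id in batch_order:
        n = num_tokens_per_req[req_id]  # 1 plus speculative decode tokens
        if req_id in mask_row_for_req:
            allowed = grammar_mask[mask_row_for_req[req_id]]
            # Apply the request's grammar mask to every one of its tokens,
            # including the speculatively decoded ones.
            for i in range(n):
                row = out[cumulative_offset + i]
                for t in range(vocab):
                    if not allowed[t]:
                        row[t] = -math.inf  # forbid disallowed tokens
        cumulative_offset += n
    return out
```

With a 2-token speculative request "a" and a 1-token unconstrained request "b", the mask lands on rows 0-1 and row 2 is left untouched, matching the cumulative_offset bookkeeping described above.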
Code Review
This pull request provides a bugfix for an IndexError in the apply_grammar_bitmask method and adds support for speculative decoding. The changes appear correct and robust. The sorted_bitmask is now sized appropriately to match the logits tensor, which should resolve the reported error. The logic also correctly handles the variable number of tokens per request introduced by speculative decoding. My review includes two suggestions to improve the code's clarity and maintainability by using more descriptive variable names, which is important given the complexity of the indexing logic.
```python
seq = sorted(self.input_batch.req_id_to_index.items(),
             key=lambda x: x[1])
for req_id, batch_index in seq:
```

For clarity and to avoid confusion, consider renaming seq to something more descriptive, like requests_in_batch_order. This is especially helpful because seq is reused later with a different meaning.

Suggested change:

```python
requests_in_batch_order = sorted(self.input_batch.req_id_to_index.items(),
                                 key=lambda x: x[1])
for req_id, batch_index in requests_in_batch_order:
```
```python
seq = sorted(scheduler_output.structured_output_request_ids.items(),
             key=lambda x: x[1])
for req_id, _ in seq:
```

The variable seq is reused from a previous loop. To improve readability and prevent potential bugs during future modifications, it's better to use a distinct and descriptive name, such as grammar_requests_in_mask_order.

Suggested change:

```python
grammar_requests_in_mask_order = sorted(
    scheduler_output.structured_output_request_ids.items(),
    key=lambda x: x[1])
for req_id, _ in grammar_requests_in_mask_order:
```
What this PR does / why we need it?
Fix #2033
Sync vllm-project/vllm#14702 to solve grammar_bitmask IndexError caused by outdated apply_grammar_bitmask method.

Does this PR introduce any user-facing change?
No
How was this patch tested?