Support allowed_token_ids in ChatCompletionRequest #19143
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Hello @xu-song, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini here, providing a summary of this pull request. This PR, authored by xu-song, introduces support for the `allowed_token_ids` parameter within the `ChatCompletionRequest` object in the OpenAI-compatible entrypoint. The primary goal is to enable users to specify a list of token IDs that the model is permitted to generate during the sampling process, effectively constraining the output vocabulary. The changes involve updating the request protocol definition to include this new optional field and ensuring it is correctly passed down to the underlying sampling parameter logic.
Highlights
- **Add allowed_token_ids**: An optional field `allowed_token_ids` of type `list[int]` is added to the `ChatCompletionRequest` Pydantic model, allowing clients to specify which tokens are permissible in the generated output.
- **Integrate with sampling**: The `allowed_token_ids` value from the `ChatCompletionRequest` is now passed to the internal `to_sampling_params` conversion function, ensuring that the constraint is applied during the model's generation process (a minimal offline sketch follows this list).
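To make the effect of the constraint concrete, here is a minimal offline sketch, assuming vLLM's `SamplingParams.allowed_token_ids` (the same field already used by the completions path); the model name and token IDs are placeholders, not values from this PR:

```python
# Minimal sketch: restrict sampling to a fixed set of token IDs.
# Assumes SamplingParams.allowed_token_ids is available; model and IDs are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")

params = SamplingParams(
    max_tokens=4,
    allowed_token_ids=[9454, 2753],  # hypothetical IDs, e.g. for "Yes" / "No"
)

outputs = llm.chat(
    [{"role": "user", "content": "Answer yes or no: is the sky blue?"}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```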
Changelog
- `vllm/entrypoints/openai/protocol.py`
  - Added `allowed_token_ids: Optional[list[int]] = None` to the `ChatCompletionRequest` class definition (around line 230).
  - Included `allowed_token_ids=self.allowed_token_ids` in the `SamplingParams` construction inside the `ChatCompletionRequest.to_sampling_params` method (around line 553). A sketch of both edits follows this list.
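The following is a hedged, trimmed-down sketch of those two edits, not the actual diff; the simplified base class and the direct `SamplingParams` construction (the real method forwards many more request fields) are assumptions made for illustration:

```python
from typing import Optional

from pydantic import BaseModel

from vllm import SamplingParams


# Trimmed, hypothetical stand-in for ChatCompletionRequest in
# vllm/entrypoints/openai/protocol.py; only the relevant pieces are shown.
class ChatCompletionRequest(BaseModel):
    model: str
    max_tokens: Optional[int] = None
    temperature: float = 1.0
    # The field this PR adds: restrict generation to these token IDs.
    allowed_token_ids: Optional[list[int]] = None

    def to_sampling_params(self) -> SamplingParams:
        # The real method builds the params from many more request fields;
        # the key change is forwarding allowed_token_ids here.
        return SamplingParams(
            max_tokens=self.max_tokens,
            temperature=self.temperature,
            allowed_token_ids=self.allowed_token_ids,
        )
```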
Tokens allowed now,
Constrain the words the model speaks,
Precise output flows.
Code Review
This pull request cleanly adds support for `allowed_token_ids` in the `ChatCompletionRequest`, propagating it to the `SamplingParams`. The changes are straightforward and well-implemented. Nice work!
Merge Readiness
The changes look good and address the stated purpose effectively. I believe this PR is ready for merge. As always, ensure all CI checks pass and consider if any additional testing for this new parameter is warranted in a broader context, though the changes themselves are minimal and direct.
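On the testing point, a small protocol-level check along these lines (hypothetical, not part of this PR) would cover the new field; forwarding through `to_sampling_params` could be asserted similarly once its exact signature is pinned down:

```python
# Hypothetical pytest-style check (not in this PR): the new field should be
# accepted by the Pydantic model and preserved as given.
from vllm.entrypoints.openai.protocol import ChatCompletionRequest


def test_allowed_token_ids_field_parsed():
    request = ChatCompletionRequest(
        model="dummy-model",
        messages=[{"role": "user", "content": "hi"}],
        allowed_token_ids=[10, 20, 30],
    )
    assert request.allowed_token_ids == [10, 20, 30]
```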
LGTM, thanks for adding this!
Purpose
Support `allowed_token_ids` in `ChatCompletionRequest`.
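Since `allowed_token_ids` is a vLLM-specific extension to the Chat Completions API, a client on the official OpenAI Python SDK would pass it through `extra_body`; this is a hedged sketch with placeholder base URL, model name, and token IDs:

```python
# Hedged usage sketch against a locally running vLLM OpenAI-compatible server.
# Base URL, model name, and token IDs are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Answer yes or no: is the sky blue?"}],
    max_tokens=4,
    # vLLM-specific parameter, forwarded verbatim in the request body.
    extra_body={"allowed_token_ids": [9454, 2753]},
)
print(response.choices[0].message.content)
```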