ENH: Remove the max tokens limitation and boost performance by avoiding unnecessary repeated CUDA device detection #1429
Conversation
Can you show that, without modifying the max_tokens field, a request exceeding the current limit is rejected?

Because the limit comes from the request model itself. I printed the CreateChatCompletion schema:
{
"title": "CreateChatCompletion",
"type": "object",
"properties": {
"frequency_penalty": {
"title": "Frequency Penalty",
"default": 0.0,
"type": "number"
},
"logit_bias": {
"title": "Logit Bias",
"type": "object",
"additionalProperties": {
"type": "number"
}
},
"logprobs": {
"title": "Logprobs",
"type": "integer"
},
"max_tokens": {
"title": "Max Tokens",
"description": "The maximum number of tokens to generate.",
"default": 1024,
"minimum": 1,
"maximum": 32768,
"type": "integer"
},
"stream_interval": {
"title": "Stream Interval",
"default": 2,
"type": "integer"
}
},
"required": [
"model"
],
"additionalProperties": false
}

The limitation on max_tokens comes from this schema: validation rejects any request above the 32768 maximum.
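A minimal sketch, assuming the request model is defined with pydantic (the field arguments below are inferred from the printed schema, not copied from the project's code), of where that maximum comes from and how dropping the bound lifts it:

```python
# Minimal sketch, assuming pydantic. Field arguments are inferred from
# the printed schema above; the project's actual definition may differ.
from pydantic import BaseModel, Field

class CreateChatCompletion(BaseModel):
    # le=32768 is what emits "maximum": 32768 in the JSON schema,
    # so any request asking for more tokens fails validation.
    max_tokens: int = Field(
        default=1024,
        ge=1,
        le=32768,
        description="The maximum number of tokens to generate.",
    )

print(CreateChatCompletion.schema()["properties"]["max_tokens"])
# {'title': 'Max Tokens', 'description': 'The maximum number of tokens
#  to generate.', 'default': 1024, 'minimum': 1, 'maximum': 32768,
#  'type': 'integer'}

# Dropping le=... removes the "maximum" key from the generated schema,
# which is one plausible way to lift the 32768-token cap.
```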
Could you rebase onto the main branch?
Branch synchronized.
LGTM
Reverted codeqwen1.5 to 64k context length. Also optimized the code to avoid repeatedly querying whether a CUDA device exists and counting the CUDA devices, which boosts performance.
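The description doesn't show the optimized code; below is a minimal sketch of the kind of caching it describes, assuming PyTorch (the helper names are illustrative, not the project's actual functions):

```python
# Minimal sketch, assuming PyTorch. Helper names are illustrative,
# not the project's actual functions.
from functools import lru_cache

import torch

@lru_cache(maxsize=None)
def cuda_device_count() -> int:
    # Querying the CUDA driver on every call is wasteful; cache the
    # answer once per process instead.
    if not torch.cuda.is_available():
        return 0
    return torch.cuda.device_count()

def cuda_available() -> bool:
    return cuda_device_count() > 0
```

The device count rarely changes within the lifetime of a process, so memoizing the first answer is safe for typical deployments and removes the repeated driver round-trips on every request.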