add ChatCompletionStream Usage return #215
Conversation
Codecov Report
```
@@ Coverage Diff @@
## master    #215   +/- ##
=========================================
  Coverage       ?  71.94%
=========================================
  Files          ?      21
  Lines          ?     581
  Branches       ?       0
=========================================
  Hits           ?     418
  Misses         ?     124
  Partials       ?      39
```
Thank you for the PR!
Thank you, this is what I was looking for!
I'm not actually seeing any usage data come back in the streaming response. I added a roundtripper to log request/response data and it looks like this (formatting mine):

```
POST https://api.openai.com/v1/chat/completions
HTTP/2.0 200 OK

data: {
    "id": "chatcmpl-70smbYYzwcwpb19BkzLb3NFlvkIsP",
    "object": "chat.completion.chunk",
    "created": 1680444941,
    "model": "gpt-3.5-turbo-0301",
    "choices": [
        {
            "delta": {
                "role": "assistant"
            },
            "index": 0,
            "finish_reason": null
        }
    ]
}

data: {
    "id": "chatcmpl-70smbYYzwcwpb19BkzLb3NFlvkIsP",
    "object": "chat.completion.chunk",
    "created": 1680444941,
    "model": "gpt-3.5-turbo-0301",
    "choices": [
        {
            "delta": {
                "content": "How"
            },
            "index": 0,
            "finish_reason": null
        }
    ]
}
```

Is there something I need to do to signal to openai that usage data should be returned?
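For context: at the time of this thread, the chat completions API simply did not return a `usage` object on streamed responses. OpenAI later added a `stream_options` request field (`include_usage: true`) that makes the final streamed chunk carry usage data. A minimal stdlib sketch of what such a request body looks like — the type names here are illustrative, not go-openai's, and only the JSON field names come from the API reference:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChatRequest mirrors the subset of the /v1/chat/completions body
// relevant here; StreamOptions is the later-added field.
type ChatRequest struct {
	Model         string         `json:"model"`
	Messages      []Message      `json:"messages"`
	Stream        bool           `json:"stream"`
	StreamOptions *StreamOptions `json:"stream_options,omitempty"`
}

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type StreamOptions struct {
	IncludeUsage bool `json:"include_usage"`
}

// buildBody marshals a streaming request that asks for usage data.
func buildBody() string {
	req := ChatRequest{
		Model:         "gpt-3.5-turbo",
		Messages:      []Message{{Role: "user", Content: "Hi"}},
		Stream:        true,
		StreamOptions: &StreamOptions{IncludeUsage: true},
	}
	body, _ := json.Marshal(req)
	return string(body)
}

func main() {
	fmt.Println(buildBody())
}
```

Without `stream_options`, each `chat.completion.chunk` arrives with no usage information at all, which matches the log above.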
@collinvandyck please upgrade to 1.5.8
Hey @sashabaranov! I'm seeing this behavior on 1.5.8, go.mod: `github.com/sashabaranov/go-openai v1.5.8-0.20230401160622-b542086cbb22`. After giving it a spin I saw that usage was always blank in the resp. I'm kicking off the chat completion stream like this:

```go
req := openai.ChatCompletionRequest{
	Model:    c.model,
	Messages: messages,
	Stream:   true,
}
resp, err := c.openai.CreateChatCompletionStream(ctx, req)
if err != nil {
	return nil, fmt.Errorf("stream: %w", err)
}
```
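For reference, what the library does under the hood is roughly this: read the SSE body line by line, strip the `data:` prefix, unmarshal each chunk, and accumulate the deltas. A self-contained stdlib sketch (the `chunk` type is illustrative, not go-openai's actual type) — note there is nowhere for usage to come from if the server never sends it:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// chunk mirrors the chat.completion.chunk objects shown in the log above.
type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// collectContent parses an SSE body and concatenates the streamed deltas.
func collectContent(body string) (string, error) {
	var sb strings.Builder
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "data:") {
			continue // skip blank lines and other SSE fields
		}
		payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
		if payload == "[DONE]" {
			break // end-of-stream sentinel
		}
		var c chunk
		if err := json.Unmarshal([]byte(payload), &c); err != nil {
			return "", err
		}
		for _, ch := range c.Choices {
			sb.WriteString(ch.Delta.Content)
		}
	}
	return sb.String(), sc.Err()
}

func main() {
	body := "data: {\"choices\":[{\"delta\":{\"content\":\"How\"}}]}\n" +
		"data: {\"choices\":[{\"delta\":{\"content\":\" are you?\"}}]}\n" +
		"data: [DONE]\n"
	out, err := collectContent(body)
	fmt.Println(out, err)
}
```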
too embarrassed
It would be neat for go-openai to include a tokenizer to do this, but it seems like a pretty big lift. It looks like most of the folks doing this are calling out to a Rust tokenizer, since there isn't an official Go one. Not suggesting that you should do this, but it was surprising to me that there wasn't any official way to count tokens from Go using OpenAI's libraries.
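Absent a real tokenizer, a rough client-side estimate is sometimes good enough: OpenAI's docs cite roughly 4 characters per token for English text. A hedged sketch of that heuristic — this is not a BPE tokenizer, and its counts will differ from what the API bills:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// estimateTokens approximates the token count of English text using the
// ~4 characters-per-token rule of thumb from OpenAI's documentation.
// An exact count requires the model's BPE tokenizer (tiktoken).
func estimateTokens(s string) int {
	n := utf8.RuneCountInString(s)
	return (n + 3) / 4 // round up so short non-empty strings count as >= 1
}

func main() {
	fmt.Println(estimateTokens("Hello, world!")) // 13 runes -> 4
}
```

For exact counts you'd still want a port of the actual BPE vocabulary, which is where the Rust call-outs come in.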
sir, see #223