First, thank you for this great project!

I was wondering whether, for the OpenAI-compatible chat completion endpoint, the streaming responses should return the same completion id (`chatcmpl-...`) for every chunk. For example, the chat completion ids are different (`chatcmpl-2EHCQqsRzdOlFskNehCMu2oOMTXhSjey`, `chatcmpl-Cm7q0Ru5uEGVlW4r6cZaGNQrlS7oF724`) in the following response:
Example NodeJS code that generates the above chunks:

```js
import OpenAI from "openai";

process.env["OPENAI_API_KEY"] = "no-key";

const openai = new OpenAI({
  baseURL: "http://127.0.0.1:8080/v1",
  apiKey: "no-key",
});

async function main() {
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say this is a test" }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(JSON.stringify(chunk));
  }
}

main();
```
This is fine for Node.js server-side generation, but if I stream the HTTP response and consume it with OpenAI's NodeJS SDK, I get this error: `missing finish_reason for choice 0`. It seems that when a chunk arrives with a different id, `#endRequest` is called prematurely, and the corresponding chunk does not yet have a `finish_reason`.
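To illustrate the behavior the SDK expects, here is a minimal sketch (my own, not code from this project) of building a stream of chunks that all share one completion id. The helper names `makeCompletionId` and `makeChunk` are hypothetical; the chunk shape follows the OpenAI streaming format, where only the final chunk carries a non-null `finish_reason`:

```javascript
import crypto from "node:crypto";

// Hypothetical helper: generate ONE completion id per request,
// then reuse it for every chunk streamed back for that request.
function makeCompletionId() {
  return "chatcmpl-" + crypto.randomBytes(16).toString("hex");
}

// Hypothetical helper: build a single streaming chunk in the
// OpenAI chat.completion.chunk shape.
function makeChunk(id, delta, finishReason = null) {
  return {
    id, // the same id for every chunk of this request
    object: "chat.completion.chunk",
    created: Math.floor(Date.now() / 1000),
    model: "gpt-3.5-turbo",
    choices: [{ index: 0, delta, finish_reason: finishReason }],
  };
}

const id = makeCompletionId();
const chunks = [
  makeChunk(id, { role: "assistant", content: "" }),
  makeChunk(id, { content: "This is a test" }),
  makeChunk(id, {}, "stop"), // final chunk carries finish_reason
];
console.log(chunks.every((c) => c.id === id)); // true
```

With a consistent id, the SDK accumulates deltas for one completion until it sees the chunk whose `finish_reason` is set.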
Example client-side code:

```js
import fetch from 'node-fetch';
import { ChatCompletionStream } from 'openai/lib/ChatCompletionStream';

fetch('http://localhost:3000', {
  method: 'POST',
  body: 'Tell me why dogs are better than cats',
  headers: { 'Content-Type': 'text/plain' },
}).then(async (res) => {
  // @ts-ignore ReadableStream on different environments can be strange
  const runner = ChatCompletionStream.fromReadableStream(res.body);
  runner.on('content', (delta, snapshot) => {
    process.stdout.write(delta);
    // or, in a browser, you might display like this:
    //   document.body.innerText += delta; // or:
    //   document.body.innerText = snapshot;
  });
  console.dir(await runner.finalChatCompletion(), { depth: null });
});
```
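One possible client-side workaround (my own assumption, not something proposed in this issue) is to rewrite every chunk's id to match the first chunk's id before the chunks are accumulated, so a consumer that keys on the id keeps appending deltas until it sees a `finish_reason`:

```javascript
// Sketch of a client-side workaround: normalize all chunk ids to the
// first chunk's id so the chunks are treated as one completion.
function normalizeChunkIds(chunks) {
  if (chunks.length === 0) return [];
  const firstId = chunks[0].id;
  return chunks.map((chunk) => ({ ...chunk, id: firstId }));
}

// Chunks with inconsistent ids, as in the response above:
const chunks = [
  {
    id: "chatcmpl-2EHCQqsRzdOlFskNehCMu2oOMTXhSjey",
    choices: [{ index: 0, delta: { content: "This is" }, finish_reason: null }],
  },
  {
    id: "chatcmpl-Cm7q0Ru5uEGVlW4r6cZaGNQrlS7oF724",
    choices: [{ index: 0, delta: {}, finish_reason: "stop" }],
  },
];

const normalized = normalizeChunkIds(chunks);
console.log(new Set(normalized.map((c) => c.id)).size); // 1
```

This only papers over the symptom on the client, though; returning a consistent id from the server is the cleaner fix.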
xyc changed the title from "consistent chat completion id in openai compatible server endpoint" to "consistent chat completion id in openai compatible chat completion endpoint" on Mar 4, 2024

xyc changed the title from "consistent chat completion id in openai compatible chat completion endpoint" to "Consistent chat completion id in OpenAI compatible chat completion endpoint" on Mar 4, 2024