v4.0.0 Beta #182
54 comments · 310 replies
-
This is great news! Will you be adding token counting for streaming support, perhaps on the final SSE chunk? This is needed for accurate usage-based billing. I've adapted the following code from the web, which produces similar results to your billing reports for gpt-3.5-turbo; it's not accurate for gpt-4. It uses https://www.npmjs.com/package/gpt-3-encoder.
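For concreteness, here's a sketch of the kind of counting gpt-3-encoder enables (the per-message overhead constants below are my assumptions and only approximate gpt-3.5-turbo billing):

```ts
import { encode } from 'gpt-3-encoder';

// Approximate the prompt tokens for a chat request. Beyond its content,
// each message carries a few tokens of framing overhead; the constants
// below are assumptions that roughly match gpt-3.5-turbo billing.
function countPromptTokens(messages: { role: string; content: string }[]): number {
  const TOKENS_PER_MESSAGE = 4; // assumed per-message framing overhead
  const REPLY_PRIMING = 2; // assumed tokens priming the assistant reply
  let total = REPLY_PRIMING;
  for (const message of messages) {
    total += TOKENS_PER_MESSAGE + encode(message.role).length + encode(message.content).length;
  }
  return total;
}
```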
-
This is great, I appreciate the simplified config and error handling. I had a type issue in chat completions that was triggered by a change in how you are handling streaming. I dug in a little, and the typing in there feels a bit redundant at times, and maybe slightly incomplete. Ultimately it feels like CompletionCreateParams should just be a base interface, with the stream-enabled variant extending it. I don't need to maintain this, but it is a lot of duplicate code, with the only subtle differences I see being the stream param and some small bits of the output/event respectively. As an example, one minor difference between ChatCompletion and ChatCompletionEvent is the usage property. In ChatCompletion, usage is optional (but I'm pretty sure it'll always be there in this case), while in ChatCompletionEvent usage doesn't exist. Similarly, in ChatCompletion*.Choices, the difference is delta, which is optional in ChatCompletionEvent (again, it seems to always be there in this context) and non-existent in ChatCompletion. These could simply extend a base that has an optional param. The optional types make me think this was the intention at some point before a more complex route was taken? The namespaces in there also seem like overkill, especially as they're unused outside of that file. A sketch of what I mean is below.
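To illustrate the shape I'm suggesting (hypothetical names, not the SDK's actual declarations):

```ts
// Hypothetical refactor: shared params live in one base interface, and the
// streaming variant only narrows the `stream` flag instead of duplicating
// every field.
interface CompletionCreateParamsBase {
  model: string;
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[];
  stream?: boolean;
  // ...all other shared params
}

interface CompletionCreateParamsNonStreaming extends CompletionCreateParamsBase {
  stream?: false;
}

interface CompletionCreateParamsStreaming extends CompletionCreateParamsBase {
  stream: true;
}
```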
-
@schnerd I'm running into the following error with the
Issue is reproduced here: https://stackblitz.com/edit/node-ddzjcn?file=src%2Findex.ts
-
I don't believe that the
I'm expecting to be able to do something along the lines of
-
Thanks for your great work! I'm getting this error when running in streaming mode on a Next.js Vercel edge function:

Do you know of any way to make this package work with the edge function limitations?
-
I'm having some issues using fileFromPath to load the audio. Here's my code and error:
I think the result from fileFromPath is coming back as null.
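In the meantime, passing a Node ReadStream directly works for me (a sketch; the file path and model are placeholders):

```ts
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI();

async function transcribe() {
  // v4 accepts a Node ReadStream directly, so fileFromPath isn't strictly needed.
  const transcription = await openai.audio.transcriptions.create({
    file: fs.createReadStream('./audio.mp3'),
    model: 'whisper-1',
  });
  console.log(transcription.text);
}

transcribe();
```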
-
I tried streaming and it's working well so far, thank you. Also thank you to @RobertCraigie for letting me know about it. Would it be helpful to document this with an example in an examples directory somewhere?
-
I'm hitting an error:
I'm using commonJS for import:
and here's the relevant code:
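For comparison, a minimal CommonJS setup that works with v4 (a sketch; the model and prompt are placeholders):

```js
// v4 exposes the client class as the module's default export.
const OpenAI = require('openai');

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

openai.chat.completions
  .create({ model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: 'Hi' }] })
  .then((completion) => console.log(completion.choices[0].message.content));
```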
-
Similar to #182 (comment), I think I'm hitting the same issue. My original code was:

```ts
async function getCompletion(
  args: CreateChatCompletionRequest,
): Promise<CompletionResult> {
  try {
    const completion = (await openai.createChatCompletion(args)).data;
    return { completion };
  } catch (error) {
    return { error: new Error((error as Error).message) };
  }
}
```

which I've replaced with:

```ts
async function getCompletion(
  args: OpenAI.CompletionCreateParams,
): Promise<CompletionResult> {
  try {
    const completion = await openai.chat.completions.create(args);
    return { completion };
  } catch (error) {
    return { error: new Error((error as Error).message) };
  }
}
```

and I get an error with no clear way to resolve it.
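If the error is the union of streaming and non-streaming return types, one workaround is to narrow the params so the non-streaming overload is selected (a sketch; the exact exported type name has varied across v4 betas):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

type CompletionResult =
  | { completion: OpenAI.Chat.ChatCompletion }
  | { error: Error };

async function getCompletion(
  // Narrowing to the non-streaming params variant lets TypeScript pick the
  // overload that returns a ChatCompletion rather than a union with the stream.
  args: OpenAI.Chat.CompletionCreateParamsNonStreaming,
): Promise<CompletionResult> {
  try {
    const completion = await openai.chat.completions.create(args);
    return { completion };
  } catch (error) {
    return { error: new Error((error as Error).message) };
  }
}
```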
-
Amazing. This new version tackles several issues we were having to code ourselves. Thanks so much. Regarding your last question, "Any other things you love or that bother you?": I'd love to have some utilities to help us deal with input sanitization. Safety is a main concern, and any features that can make our applications safer are very welcome.
-
Hopefully not too off-topic, but does anyone have thoughts on Vercel's AI SDK (https://sdk.vercel.ai/docs/guides/openai) versus the one being discussed here? What are the pros and cons of using this Node version? The Vercel version is optimized for streaming "on the edge" (non-Node, I think).
-
@RobertCraigie I think that's part of the problem. I am confused about the runtime used for Nuxt's new server engine (Nitro). Nitro produces a standalone server
-
I was unable to configure v4 (4.0.0-beta.2) to work with Azure OpenAI Service.

```ts
// Azure OpenAI in v3
const openai = new OpenAIApi(new Configuration({
  apiKey,
  basePath: `https://${deploymentName}.openai.azure.com/openai/deployments/${model}`,
  baseOptions: {
    headers: {
      "api-key": apiKey,
    },
    params: {
      "api-version": apiVersion,
    },
  },
}))
```

```ts
// Not working: Azure OpenAI in v4
const openai = new OpenAI({
  apiKey,
  baseURL: `https://${deploymentName}.openai.azure.com/openai/deployments/${model}`,
  defaultHeaders: {
    "api-key": apiKey,
  },
  // where to specify the default param `api-version`?
});
```

I was actually receiving a 404. Happy to test out new versions to confirm they can be configured to talk to the Azure API.
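For what it's worth, v4 has a `defaultQuery` client option, which looks like the intended home for `api-version` (a sketch using the same variables as above, untested against Azure):

```ts
import OpenAI from 'openai';

// Default headers and query params ride along on every request,
// which is what the Azure endpoints need for api-key and api-version.
const openai = new OpenAI({
  apiKey,
  baseURL: `https://${deploymentName}.openai.azure.com/openai/deployments/${model}`,
  defaultHeaders: { 'api-key': apiKey },
  defaultQuery: { 'api-version': apiVersion },
});
```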
-
I am seeing this error with beta.2 and NextJS:
If I edit
-
I'm waiting for stream mode. I'm just curious whether the streaming feature can be used in a Jupyter Notebook with a JavaScript kernel. If so, a demo would be welcome.
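In any environment with a modern Node-style runtime, consuming the stream is just a for await loop, so something like this sketch should work in a notebook cell too (model and prompt are placeholders):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

// Each chunk carries an incremental delta; print tokens as they arrive.
const stream = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Count to five.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```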
-
Another issue with using v4 with Next.js: in Next.js, fetch is patched to additionally allow passing { next: { revalidate: number } } in RequestInit. However, there is currently no way to pass these options with v4, nor any extra headers (e.g. for tracing purposes). Would it be possible to let us pass these extras in future releases?
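One possible escape hatch in the meantime: v4 accepts a custom `fetch` implementation in the client constructor, so you could wrap Next.js's patched fetch yourself (a sketch; the `next` field and tracing header are assumptions about what you'd want to pass, and it assumes the SDK hands over plain-object headers):

```ts
import OpenAI from 'openai';

// Wrap the platform fetch so every SDK request carries a tracing header
// and Next.js's revalidation hint.
const openai = new OpenAI({
  fetch: (url, init) =>
    fetch(url, {
      ...init,
      headers: { ...init?.headers, 'x-trace-id': crypto.randomUUID() },
      // @ts-expect-error `next` is Next.js's extension to RequestInit
      next: { revalidate: 60 },
    }),
});
```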
-
Separately, I have another issue, related to the async-iterator streaming interface. I have the following implementation for streaming the response via Next.js. It kind of works, except that most of the time the response is not streamed but returned whole at the very end. However, if I artificially add an async sleep, it streams correctly. I guess it's related to the event loop?

```ts
const response = await openai.chat.completions.create({
  user: 'nextjs',
  model: 'gpt-3.5-turbo',
  stream: true,
  messages,
});
// const stream = OpenAIStream(response); // using OpenAIStream from 'ai' doesn't work either
const encoder = new TextEncoder();
const iterable = response[Symbol.asyncIterator]();
const stream = new ReadableStream({
  async pull(controller) {
    const chunk = await iterable.next();
    if (chunk.done) {
      controller.close();
    } else {
      const value = chunk.value.choices[0].delta.content;
      if (value) {
        controller.enqueue(encoder.encode(value));
      }
      // await sleep(10); // if we don't sleep, streaming doesn't work
    }
  },
});

// Respond with the stream
return new StreamingTextResponse(stream);
```
-
I'm seeing an issue with a messages object: Is there a conflicting type somewhere causing this, requiring a typed declaration? i.e.
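If the complaint is `role: string` not being assignable to the role union, explicitly typing the array usually resolves it (a sketch; the exported type name has moved around between v4 betas):

```ts
import OpenAI from 'openai';

// Declaring the type up front keeps `role` as the literal union
// ('system' | 'user' | 'assistant' | ...) instead of widening to string.
const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' },
];
```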
-
I'm trying to use the Beta on Cloudflare Pages, but I'm running into issues:

Is there something specific I have to take care of when using the library on Cloudflare Workers?
-
Hey there - apologies if this is already covered by others. I was just using the simple function-calling example, and I really did not expect that the arguments value of the function to call would be streamed. Is that expected? I guess if the response from the API is set to stream on your end, you're just stuffing the returned values into your model as well, be it content or arguments? That's going to take a bit of strategy to make it work correctly. The following is printing the chunks from the initial completions call:

Before I invest that time, I just want to clarify that this is expected and will hopefully remain stable-ish before release? Cheers
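In case it helps others, the strategy I'm considering is to accumulate the fragments and parse once the finish reason arrives (a sketch; assumes `stream` is the async iterable returned by `create`):

```ts
let name = '';
let args = '';

for await (const chunk of stream) {
  const choice = chunk.choices[0];
  const call = choice.delta.function_call;
  if (call?.name) name += call.name;
  if (call?.arguments) args += call.arguments; // partial JSON fragments
  if (choice.finish_reason === 'function_call') {
    // Only now is `args` complete, parseable JSON.
    console.log(`calling ${name} with`, JSON.parse(args));
  }
}
```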
-
Sorry if this has already been mentioned, but I couldn't find it, so let me ask: is there a callback that can be triggered when a stream completes? I develop and run a service called 'Q, ChatGPT for Slack', and I need to post the streamed text to Slack, but due to a strict rate limit I can only update once per second. If we can't clearly detect that the stream has completed, we risk missing the final update. We'd like a clear signal that the stream is done!
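With the async-iterator interface there's no separate callback: the `for await` loop exits exactly when the stream completes, so anything after the loop is a reliable completion trigger (a sketch; `postToSlack` is a hypothetical helper, and the throttle mirrors the once-per-second limit above):

```ts
let text = '';
let lastUpdate = 0;

for await (const chunk of stream) {
  text += chunk.choices[0]?.delta?.content ?? '';
  // Respect Slack's rate limit: update at most once per second.
  if (Date.now() - lastUpdate >= 1000) {
    await postToSlack(text); // hypothetical helper
    lastUpdate = Date.now();
  }
}
// The loop only exits once the stream is done, so this final
// update can never be missed.
await postToSlack(text);
```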
-
This seems like it might be Azure OpenAI specific, but on a substantial number of requests, the initial promise awaiting the stream hangs, or during async iteration the chunk object can be undefined:

```ts
logger.info(`🚀 Sending prompt of ${promptLength} tokens to OpenAI`);
const stream = await client.chat.completions.create({
  stream: true,
  messages: promptMessages,
  model: args.model,
});
// 💥 This log statement is never reached, the await hangs.
logger.info('Prompt sent');

const responseStream = new ReadableStream({
  async pull(controller) {
    let tokens = 0;
    for await (const chunk of stream) {
      // Sometimes `chunk.choices` is undefined, and we panic with:
      // error unhandledRejection: Error [TypeError]: Cannot read properties of undefined (reading '0')
      const content = chunk.choices[0].delta.content;
      //                    ^
      // Error message points to exactly this line, indicating `chunk.choices`
      // is undefined, i.e.: chunk isn't a valid item object?
      controller.enqueue(encoder.encode(content));
    }
    controller.close();
    await cleanup();
  },
  async cancel() {
    stream.controller.abort();
    await cleanup();
  },
});

return new ai.StreamingTextResponse(responseStream, {
  headers: {
    // ...
  },
});
```
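Until the root cause is understood, a defensive guard avoids the crash on chunks that arrive without `choices` (a sketch layered over the loop above):

```ts
for await (const chunk of stream) {
  // Some (possibly Azure-specific) chunks arrive without `choices`;
  // skip them instead of crashing on choices[0].
  const content = chunk.choices?.[0]?.delta?.content;
  if (content) {
    controller.enqueue(encoder.encode(content));
  }
}
```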
-
If anyone is looking for a way to help – or just wants to migrate their codebase from v3.3.0 to v4.0.0 – we'd love for you to try using this tool to automatically migrate your repo! You can try it locally now with this command in the root of your project:

```sh
npx -y @getgrit/launcher apply openai_v4
```

(this will soon be made available as

Please share feedback, even if just to say that it worked well for you!
-
I keep getting a connection reset error:
-
When I use "openai": "4.0.0-beta.11", I sometimes see this error in the server logs: Cannot read properties of undefined (reading 'stream'). I can't reproduce the error on my local machine.
-
Thanks everybody for all the feedback on v4 of the SDK; we're super excited about how it turned out, and your input was super helpful. Please check out the v3 -> v4 migration guide to get started with v4.
-
```ts
import axios from "axios";

const ConversationPage = () => {
};
```

Please help me with this code; I'm getting an error at the import.
-
What about
-
I'm loading a couple pieces like this in my frontend, mostly to use
I'm loading the abort error to handle when the stream gets aborted.
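For reference, a sketch of how that wiring can look (assumes the `APIUserAbortError` export and passing an `AbortSignal` via the per-request options; model and prompt are placeholders):

```ts
import OpenAI, { APIUserAbortError } from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const controller = new AbortController();

async function run() {
  try {
    const stream = await openai.chat.completions.create(
      { model: 'gpt-3.5-turbo', stream: true, messages: [{ role: 'user', content: 'Hi' }] },
      { signal: controller.signal },
    );
    for await (const chunk of stream) {
      console.log(chunk.choices[0]?.delta?.content ?? '');
    }
  } catch (err) {
    if (err instanceof APIUserAbortError) {
      // The user cancelled the stream; not a real failure.
      return;
    }
    throw err;
  }
}
```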
-
Important
OpenAI v4 has launched and is no longer in beta! Give it a spin with
npm install openai
and check out the migration guide.

We're working on a new major version of our SDK and are excited to have you test it!
Please use this discussion page as a place to report issues and feedback about the new release.
What's new
View the v4 API
Things we'd love feedback on
Getting started
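A minimal sketch of getting started with the v4 client (the install command and model are assumptions; see the migration guide below for details):

```ts
// npm install openai  (pick the 4.x beta tag while v4 is in beta)
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```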
Migration guide
➡️ View v3 to v4 migration guide