Nuxt - Streaming Issue with the example "Nuxt OpenAI Starter" on Vercel #196
Comments
Nuxt doesn't support streaming at the moment. You can track the issue here: nitrojs/nitro#1327
@jaredpalmer This should currently be working with Node runtimes, but support for (edge) streaming landed with h3 1.7.0. Here's a working example with Cloudflare Pages (I haven't tried it on vercel-edge, but this should work too). Note that this relies on a custom sendStream helper, defined below.
@JuBertoo install h3 1.7.0 and update your code to do something like this:
import { OpenAIStream } from 'ai'
import type { H3Event } from 'h3'
// `openai` is an OpenAIApi instance from openai-edge (see the full example further down)
export default defineEventHandler(async (event: any) => {
  // Extract the `messages` from the body of the request
const { messages } = await readBody(event)
// Ask OpenAI for a streaming chat completion given the prompt
const response = await openai.createChatCompletion({
model: 'gpt-3.5-turbo',
stream: true,
messages: messages.map((message: any) => ({
content: message.content,
role: message.role
}))
})
// Convert the response into a friendly text-stream
const stream = OpenAIStream(response)
// Respond with the stream
return sendStream(event, stream)
})
function sendStream(event: H3Event, stream: ReadableStream) {
// Mark to prevent h3 handling response
event._handled = true
// Workers (unenv)
// @ts-expect-error _data will be there.
event.node.res._data = stream
// Node.js
if (event.node.res.socket) {
stream.pipeTo(
new WritableStream({
write(chunk) {
event.node.res.write(chunk)
},
close() {
event.node.res.end()
}
})
)
}
}
Thank you for your response, @Hebilicious, but it still doesn't work. I have installed h3 (^1.7.0) as a dependency and updated the code. It may not be compatible with Vercel's Edge Functions.
I assume that since it runs on Cloudflare, it should run on Vercel Edge as well. I will try to deploy an example.
@Hebilicious Any update on whether you got it working on Vercel Edge?
I've been able to deploy with the CLI (i.e. running the deploy command directly): https://nuxt-openai-vercel-hebilicious.vercel.app/
Nuxt config:
// nuxt.config.ts
import path from 'node:path'
export default defineNuxtConfig({
devtools: { enabled: true },
modules: ['@nuxtjs/tailwindcss'],
nitro: {
preset: 'vercel-edge'
},
  // You might not need these aliases if you're not using pnpm
alias: {
'node:util': path.resolve(
__dirname,
'node_modules/unenv/runtime/node/util/index.cjs'
),
'node:net': path.resolve(
__dirname,
'node_modules/unenv/runtime/node/net/index.cjs'
)
},
runtimeConfig: {
openaiApiKey: ''
}
})
Server API:
// ./api/chat.ts
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream } from 'ai'
import type { H3Event } from 'h3'
let openai: OpenAIApi
export default defineEventHandler(async (event: any) => {
// You can probably move this out of the event handler with vercel-edge
if (!openai) {
let apiKey = useRuntimeConfig().openaiApiKey as string
const config = new Configuration({ apiKey })
openai = new OpenAIApi(config)
}
  // Extract the `messages` from the body of the request
const { messages } = await readBody(event)
// Ask OpenAI for a streaming chat completion given the prompt
const response = await openai.createChatCompletion({
model: 'gpt-3.5-turbo',
stream: true,
messages: messages.map((message: any) => ({
content: message.content,
role: message.role
}))
})
// Convert the response into a friendly text-stream
const stream = OpenAIStream(response)
// Respond with the stream
return sendStream(event, stream)
})
// This will be provided by the framework in a future version
function sendStream(event: H3Event, stream: ReadableStream) {
// Mark to prevent h3 handling response
event._handled = true
// Workers (unenv)
// @ts-expect-error _data will be there.
event.node.res._data = stream
// Node.js
if (event.node.res.socket) {
stream.pipeTo(
new WritableStream({
write(chunk) {
event.node.res.write(chunk)
},
close() {
event.node.res.end()
}
})
)
}
}
Edit: Going through the UI, it looks like it's using edge functions properly. @jaredpalmer What can we do from the Nuxt side to resolve this? Update the example and add some information in the README about edge-functions caveats?
@Hebilicious those seem like two good suggestions. Would you mind contributing a pull request?
How can I intercept the streaming content to save it to a database on a Cloudflare Worker?
@Giancarlo-Ma you can add callback handlers to the OpenAIStream call.
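For example, here's a minimal sketch of that approach, assuming the `ai` package's stream callbacks (such as `onCompletion`) and a hypothetical `saveToDatabase` helper standing in for whatever storage your Worker uses (KV, D1, etc.):
// Hypothetical helper: replace with your own write to KV, D1, or another store
async function saveToDatabase(completion: string) {
  // ... persist the full completion text
}

const stream = OpenAIStream(response, {
  // Runs once with the complete text after the model finishes streaming
  async onCompletion(completion) {
    await saveToDatabase(completion)
  }
  // An onToken callback is also available if you need per-token interception
})
return sendStream(event, stream)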
I've deployed the example "Nuxt OpenAI Starter" on Vercel and I'm encountering a streaming problem. Instead of sending data progressively, the system waits for the server's response to be complete before displaying any data.
This issue doesn't occur when I run it locally.
https://nuxt-openai-vert.vercel.app/
Could this be related to Vercel's configuration? Any help is appreciated.
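For reference, a minimal sketch of the client side that consumes the stream, assuming the starter uses the `useChat` composable from `ai/vue` (the actual page markup may differ):
// pages/index.vue (script setup) -- sketch only
import { useChat } from 'ai/vue'
// Posts to /api/chat by default and appends tokens to `messages` as they arrive;
// with a buffered (non-streaming) response, the UI only updates once the whole answer is back
const { messages, input, handleSubmit } = useChat()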