
Nuxt - Streaming Issue with the example "Nuxt OpenAI Starter" on Vercel #196

Closed

JuBertoo opened this issue Jun 22, 2023 · 9 comments · Fixed by #295

Comments

@JuBertoo

I've deployed the example "Nuxt OpenAI Starter" on Vercel and I'm encountering a streaming problem. Instead of sending data progressively, the system waits for the server's response to be complete before displaying any data.
This issue doesn't occur when I run it locally.

https://nuxt-openai-vert.vercel.app/

Could this be related to Vercel's configuration? Any help is appreciated.

@jaredpalmer
Collaborator

Nuxt doesn't support streaming at the moment. You can track the issue here: nitrojs/nitro#1327

@Hebilicious
Contributor

Hebilicious commented Jun 23, 2023

@jaredpalmer This should currently be working with Node runtimes, and support for (edge) streaming landed with h3 1.7.0. Here's a working example with Cloudflare Pages:

https://github.com/Hebilicious/vercel-sdk-ai/blob/cloudflare-official/examples/nuxt-openai/server/api/chat.ts

(I haven't tried it on vercel-edge but this should work too)

Note that this custom sendStream utility will be provided by the framework soon.

@JuBertoo install h3 1.7.0 and update your code to do something like this:

import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream } from 'ai'
import type { H3Event } from 'h3'

// Create an OpenAI API client (edge friendly)
const config = new Configuration({
  apiKey: useRuntimeConfig().openaiApiKey
})
const openai = new OpenAIApi(config)

export default defineEventHandler(async (event: any) => {
  // Extract the `messages` from the body of the request
  const { messages } = await readBody(event)

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map((message: any) => ({
      content: message.content,
      role: message.role
    }))
  })

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response)
  // Respond with the stream
  return sendStream(event, stream)
})

function sendStream(event: H3Event, stream: ReadableStream) {
  // Mark to prevent h3 handling response
  event._handled = true

  // Workers (unenv)
  // @ts-expect-error _data will be there.
  event.node.res._data = stream

  // Node.js
  if (event.node.res.socket) {
    stream.pipeTo(
      new WritableStream({
        write(chunk) {
          event.node.res.write(chunk)
        },
        close() {
          event.node.res.end()
        }
      })
    )
  }
}

@JuBertoo
Author

JuBertoo commented Jun 23, 2023

Thank you for your response, @Hebilicious, but it still doesn't work... I have installed the dependency 'h3': '^1.7.0' and updated the code. It may not be compatible with Vercel's Edge Functions.

import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream } from 'ai';
import type { H3Event } from 'h3';

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
  // eslint-disable-next-line react-hooks/rules-of-hooks
  apiKey: useRuntimeConfig().openaiApiKey,
});
const openai = new OpenAIApi(config);

export default defineEventHandler(async (event: any) => {
  // Extract the `messages` from the body of the request
  const { messages } = await readBody(event);

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map((message: any) => ({
      content: message.content,
      role: message.role,
    })),
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);
  // Respond with the stream
  return sendStream(event, stream);

  function sendStream(event: H3Event, stream: ReadableStream) {
    // Mark to prevent h3 handling response
    event._handled = true;

    // Workers (unenv)
    event.node.res._data = stream;

    // Node.js
    if (event.node.res.socket) {
      stream.pipeTo(
        new WritableStream({
          write(chunk) {
            event.node.res.write(chunk);
          },
          close() {
            event.node.res.end();
          },
        })
      );
    }
  }
});

@Hebilicious
Contributor

Thank you for your response, @Hebilicious, but it still doesn't work... I have installed the dependency 'h3': '^1.7.0' and updated the code. It may not be compatible with Vercel's Edge Functions.


I assume that since it runs on Cloudflare it should also run on Vercel Edge; I will try to deploy an example.

@dosstx
Contributor

dosstx commented Jun 26, 2023

@Hebilicious Any update on whether you got it working on Vercel edge?

@Hebilicious
Contributor

Hebilicious commented Jun 27, 2023

@Hebilicious Any update on whether you got it working on Vercel edge?

I've been able to deploy with the CLI (i.e. running vercel deploy) without any issues.

https://nuxt-openai-vercel-hebilicious.vercel.app/

Nuxt config

// nuxt.config.ts
import path from 'node:path'

export default defineNuxtConfig({
  devtools: { enabled: true },
  modules: ['@nuxtjs/tailwindcss'],
  nitro: {
    preset: 'vercel-edge'
  },
  // You might not need these aliases if you're not using pnpm
  alias: {
    'node:util': path.resolve(
      __dirname,
      'node_modules/unenv/runtime/node/util/index.cjs'
    ),
    'node:net': path.resolve(
      __dirname,
      'node_modules/unenv/runtime/node/net/index.cjs'
    )
  },
  runtimeConfig: {
    openaiApiKey: ''
  }
})

Server API

// ./api/chat.ts
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream } from 'ai'
import type { H3Event } from 'h3'

let openai: OpenAIApi

export default defineEventHandler(async (event: any) => {
  // You can probably move this out of the event handler with vercel-edge
  if (!openai) {
    const apiKey = useRuntimeConfig().openaiApiKey as string
    const config = new Configuration({ apiKey })
    openai = new OpenAIApi(config)
  }

  // Extract the `messages` from the body of the request
  const { messages } = await readBody(event)

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map((message: any) => ({
      content: message.content,
      role: message.role
    }))
  })

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response)
  // Respond with the stream
  return sendStream(event, stream)
})

// This will be provided by the framework in a future version
function sendStream(event: H3Event, stream: ReadableStream) {
  // Mark to prevent h3 handling response
  event._handled = true

  // Workers (unenv)
  // @ts-expect-error _data will be there.
  event.node.res._data = stream

  // Node.js
  if (event.node.res.socket) {
    stream.pipeTo(
      new WritableStream({
        write(chunk) {
          event.node.res.write(chunk)
        },
        close() {
          event.node.res.end()
        }
      })
    )
  }
}

Edit: Going through the UI, it looks like it's using edge functions properly.

[Screenshot: Vercel dashboard showing the route deployed as an Edge Function]

@jaredpalmer What can we do from the Nuxt side to resolve this? Update the example and add some information in the README about edge-function caveats?

@MaxLeiter
Member

@Hebilicious those seem like two good suggestions. Would you mind contributing a pull request?

@Giancarlo-Ma

How can I intercept the streaming content to save it to a DB on a Cloudflare Worker?

@lgrammel
Collaborator

How can I intercept the streaming content to save it to a DB on a Cloudflare Worker?

@Giancarlo-Ma you can add callback handlers to the OpenAIStream. Check out this example: https://sdk.vercel.ai/docs/guides/providers/openai#guide-save-to-database-after-completion
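
For reference, here is a minimal sketch of what those callbacks can look like, reusing the handler shape and the sendStream helper from the snippets earlier in this thread. saveToDatabase is a hypothetical helper standing in for whatever write you use (KV, D1, etc.); it is not part of the SDK.

// Sketch only: OpenAIStream callbacks for persisting the completion.
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream } from 'ai'

const config = new Configuration({ apiKey: useRuntimeConfig().openaiApiKey })
const openai = new OpenAIApi(config)

export default defineEventHandler(async (event: any) => {
  const { messages } = await readBody(event)

  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages
  })

  const stream = OpenAIStream(response, {
    onStart: async () => {
      // Runs once before any tokens are streamed, e.g. persist the user prompt
    },
    onToken: async (token: string) => {
      // Runs for every token streamed to the client
    },
    onCompletion: async (completion: string) => {
      // Runs once with the full completion text
      await saveToDatabase(completion) // hypothetical helper, bring your own storage
    }
  })

  // Reuse the sendStream helper shown in the comments above
  return sendStream(event, stream)
})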
