Commit

feat (ai/core): add toDataStream to streamText result (#2938)

lgrammel authored Sep 9, 2024
1 parent a88839b commit 6ee1f8e
Showing 15 changed files with 500 additions and 28 deletions.
5 changes: 5 additions & 0 deletions .changeset/late-monkeys-beam.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
---
'ai': patch
---

feat (ai/core): add toDataStream to streamText result
29 changes: 27 additions & 2 deletions content/docs/07-reference/ai-sdk-core/02-stream-text.mdx
@@ -1171,13 +1171,38 @@ To see `streamText` in action, check out [these examples](#examples).
],
},
{
- name: 'toDataStreamResponse',
+ name: 'toDataStream',
type: '(options?: ToDataStreamOptions) => ReadableStream<Uint8Array>',
description: 'Converts the result to a data stream.',
properties: [
{
type: 'ToDataStreamOptions',
parameters: [
{
name: 'data',
type: 'StreamData',
optional: true,
description: 'The stream data object.',
},
{
name: 'getErrorMessage',
type: '(error: unknown) => string',
description:
'A function to get the error message from the error object. By default, all errors are masked as "" for safety reasons.',
optional: true,
},
],
},
],
},
{
name: 'toDataStreamResponse',
type: '(options?: ToDataStreamResponseOptions) => Response',
description:
'Converts the result to a streamed response object with a stream data part stream. It can be used with the `useChat` and `useCompletion` hooks.',
properties: [
{
- type: 'ToDataStreamOptions',
+ type: 'ToDataStreamResponseOptions',
parameters: [
{
name: 'init',
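The `getErrorMessage` option documented above is worth pausing on: by default every error is masked as an empty string, so server internals never leak into the stream. A minimal sketch of that masking behavior (an illustration, not the SDK's implementation; the `3:` error-part framing is an assumption here):

```typescript
// An illustration of the default error masking described above — not the
// SDK's actual code. The `3:` error-part framing is an assumption.
type GetErrorMessage = (error: unknown) => string;

// By default, every error is masked as the empty string:
const defaultGetErrorMessage: GetErrorMessage = () => '';

function formatErrorPart(
  error: unknown,
  getErrorMessage: GetErrorMessage = defaultGetErrorMessage,
): string {
  return `3:${JSON.stringify(getErrorMessage(error))}\n`;
}

console.log(formatErrorPart(new Error('db credentials invalid')));
// → 3:""
console.log(
  formatErrorPart(new Error('boom'), e =>
    e instanceof Error ? e.message : 'unknown error',
  ),
);
// → 3:"boom"
```

Passing a custom `getErrorMessage` trades safety for debuggability; in production the default masking is usually what you want.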
47 changes: 34 additions & 13 deletions content/examples/15-api-servers/10-node-js-http-server.mdx
@@ -5,7 +5,7 @@ description: Example of using Vercel AI SDK in a Node.js HTTP server.

# Node.js HTTP Server

- You can use the Vercel AI SDK in a Node.js HTTP server to generate and stream text and objects to the client.
+ You can use the Vercel AI SDK in a Node.js HTTP server to generate text and stream it to the client.

## Examples

@@ -20,13 +20,13 @@ curl -X POST http://localhost:8080
set in the `OPENAI_API_KEY` environment variable.
</Note>

- ### Basic
+ ### Data Stream

You can use the `pipeDataStreamToResponse` method to pipe the stream data to the server response.

```ts file='index.ts'
import { openai } from '@ai-sdk/openai';
- import { StreamData, streamText } from 'ai';
+ import { streamText } from 'ai';
import { createServer } from 'http';

createServer(async (req, res) => {
@@ -39,26 +39,47 @@ createServer(async (req, res) => {
}).listen(8080);
```
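The middle of the example above is collapsed by the diff, so here is a runnable sketch of the mechanism behind `pipeDataStreamToResponse`: write streaming-friendly headers, then flush parts into the Node.js `ServerResponse` as they arrive. This is an illustration, not the SDK's implementation; hard-coded parts stand in for real model output, and the header name is taken from the Hono example later in this commit.

```typescript
import { createServer, type ServerResponse } from 'http';

// A sketch of the mechanism behind pipeDataStreamToResponse — not the SDK's
// implementation. Parts are written to the Node.js response as they arrive.
async function pipeToResponse(
  res: ServerResponse,
  parts: AsyncIterable<string>,
): Promise<void> {
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8',
    // Header marking a v1 data stream (used in the Hono example in this commit):
    'X-Vercel-AI-Data-Stream': 'v1',
  });
  for await (const part of parts) {
    res.write(part); // flush each part immediately instead of buffering
  }
  res.end();
}

// Two hard-coded text parts stand in for real model output:
async function* demoParts() {
  yield '0:"Hello, "\n';
  yield '0:"holiday!"\n';
}

const server = createServer((req, res) => {
  void pipeToResponse(res, demoParts());
});

server.listen(8080);
```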

- ### With Stream Data
+ ### Data Stream With Stream Data

`pipeDataStreamToResponse` can be used with `StreamData` to send additional data to the client.

```ts file='index.ts' highlight="6-7,12-15,18"
import { openai } from '@ai-sdk/openai'
import { StreamData, streamText } from 'ai'
import { createServer } from 'http'

createServer(async (req, res) => {
  const data = new StreamData()
  data.append('initialized call')

  const result = await streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
    onFinish() {
      data.append('call completed')
      data.close()
    },
  })

  result.pipeDataStreamToResponse(res, { data })
}).listen(8080)
```
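On the wire, the `StreamData` values appended above travel in the same response as the model text, framed as typed parts. A toy encoder under an assumed v1 framing (`TYPE:JSON` per line, `0` for text, `2` for data — an illustration, not the SDK's encoder):

```typescript
// Toy encoder for the assumed part framing — not the SDK implementation.
const textPart = (text: string): string => `0:${JSON.stringify(text)}\n`;
const dataPart = (values: unknown[]): string => `2:${JSON.stringify(values)}\n`;

const wire =
  dataPart(['initialized call']) + // appended before the model ran
  textPart('Invent') +
  textPart('ing a holiday...') +
  dataPart(['call completed']); // appended in onFinish

console.log(wire);
// 2:["initialized call"]
// 0:"Invent"
// 0:"ing a holiday..."
// 2:["call completed"]
```

Interleaving data and text in one stream is what lets the client render annotations alongside the generated text without a second request.
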
### Text Stream

You can send a text stream to the client using `pipeTextStreamToResponse`.

```ts file='index.ts'
import { openai } from '@ai-sdk/openai';
- import { StreamData, streamText } from 'ai';
+ import { streamText } from 'ai';
import { createServer } from 'http';

createServer(async (req, res) => {
-   const data = new StreamData();
-   data.append('initialized call');
-
  const result = await streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
-     onFinish() {
-       data.append('call completed');
-       data.close();
-     },
  });

-   result.pipeDataStreamToResponse(res, { data });
+   result.pipeTextStreamToResponse(res);
}).listen(8080);
```
120 changes: 120 additions & 0 deletions content/examples/15-api-servers/20-hono.mdx
@@ -0,0 +1,120 @@
---
title: Hono
description: Example of using Vercel AI SDK in a Hono server.
---

# Hono

You can use the Vercel AI SDK in a Hono server to generate and stream text and objects to the client.

## Examples

The examples start a simple HTTP server that listens on port 8080. You can test it using `curl`, for example:

```bash
curl -X POST http://localhost:8080
```

<Note>
The examples use the OpenAI `gpt-4o` model. Ensure that the OpenAI API key is
set in the `OPENAI_API_KEY` environment variable.
</Note>

### Data Stream

You can use the `toDataStream` method to get a data stream from the result and then pipe it to the response.

```ts file='index.ts'
import { openai } from '@ai-sdk/openai';
import { serve } from '@hono/node-server';
import { streamText } from 'ai';
import { Hono } from 'hono';
import { stream } from 'hono/streaming';

const app = new Hono();

app.post('/', async c =>
  stream(c, async stream => {
    const result = await streamText({
      model: openai('gpt-4o'),
      prompt: 'Invent a new holiday and describe its traditions.',
    });

    // Mark the response as a v1 data stream:
    c.header('X-Vercel-AI-Data-Stream', 'v1');
    c.header('Content-Type', 'text/plain; charset=utf-8');

    await stream.pipe(result.toDataStream());
  }),
);

serve({ fetch: app.fetch, port: 8080 });
```
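To consume such a data stream by hand (outside the `useChat` hook), a client has to split the framed parts back out. A toy parser matching the `TYPE:JSON`-per-line framing assumed in this commit (an illustration, not the SDK's parser):

```typescript
// Toy parser for `TYPE:JSON`-per-line data stream parts — an illustration,
// not the SDK's implementation.
function parseParts(payload: string): Array<{ type: string; value: unknown }> {
  return payload
    .split('\n')
    .filter(line => line.length > 0)
    .map(line => {
      const colon = line.indexOf(':');
      if (colon === -1) throw new Error(`malformed part: ${line}`);
      return {
        type: line.slice(0, colon),
        value: JSON.parse(line.slice(colon + 1)),
      };
    });
}

const parts = parseParts('2:["initialized call"]\n0:"Hello"\n0:" there"\n');
const text = parts
  .filter(part => part.type === '0') // '0' assumed to mean a text part
  .map(part => part.value)
  .join('');
console.log(text); // prints: Hello there
```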

### Data Stream With Stream Data

`toDataStream` can be used with `StreamData` to send additional data to the client.

```ts file='index.ts' highlight="11-13,18-21,28"
import { openai } from '@ai-sdk/openai';
import { serve } from '@hono/node-server';
import { StreamData, streamText } from 'ai';
import { Hono } from 'hono';
import { stream } from 'hono/streaming';

const app = new Hono();

app.post('/', async c =>
  stream(c, async stream => {
    // use stream data (optional):
    const data = new StreamData();
    data.append('initialized call');

    const result = await streamText({
      model: openai('gpt-4o'),
      prompt: 'Invent a new holiday and describe its traditions.',
      onFinish() {
        data.append('call completed');
        data.close();
      },
    });

    // Mark the response as a v1 data stream:
    c.header('X-Vercel-AI-Data-Stream', 'v1');
    c.header('Content-Type', 'text/plain; charset=utf-8');

    await stream.pipe(result.toDataStream({ data }));
  }),
);

serve({ fetch: app.fetch, port: 8080 });
```

### Text Stream

You can use the `toTextStream` method to get a text stream from the result and then pipe it to the response.

```ts file='index.ts'
import { openai } from '@ai-sdk/openai';
import { serve } from '@hono/node-server';
import { streamText } from 'ai';
import { Hono } from 'hono';
import { stream } from 'hono/streaming';

const app = new Hono();

app.post('/', async c =>
  stream(c, async stream => {
    const result = await streamText({
      model: openai('gpt-4o'),
      prompt: 'Invent a new holiday and describe its traditions.',
    });

    c.header('Content-Type', 'text/plain; charset=utf-8');

    await stream.pipe(result.toTextStream());
  }),
);

serve({ fetch: app.fetch, port: 8080 });
```
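On the client side, a text stream like the one above can be read incrementally with a stream reader and `TextDecoder`. A self-contained sketch — an in-memory stream stands in for a response body; against the real server you would pass `(await fetch('http://localhost:8080', { method: 'POST' })).body` instead:

```typescript
// Read a streamed text body to completion, chunk by chunk.
async function readTextStream(
  body: ReadableStream<Uint8Array>,
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true }); // handles split multi-byte chars
  }
  return text + decoder.decode();
}

// Demo with an in-memory stream standing in for a response body:
const encoder = new TextEncoder();
const demo = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(encoder.encode('Hello, '));
    controller.enqueue(encoder.encode('holiday!'));
    controller.close();
  },
});

console.log(await readTextStream(demo)); // prints: Hello, holiday!
```

In a UI you would append each decoded chunk to the page as it arrives rather than awaiting the full string.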
5 changes: 5 additions & 0 deletions content/examples/15-api-servers/index.mdx
@@ -14,5 +14,10 @@ You can use the Vercel AI SDK in any JavaScript API server to generate text and
description: 'Stream text to a Node.js HTTP server.',
href: '/examples/api-servers/node-js-http-server',
},
{
title: 'Hono',
description: 'Stream text to a Hono server.',
href: '/examples/api-servers/hono',
},
]}
/>
7 changes: 7 additions & 0 deletions examples/hono/.env.example
@@ -0,0 +1,7 @@
ANTHROPIC_API_KEY=""
OPENAI_API_KEY=""
MISTRAL_API_KEY=""
GOOGLE_GENERATIVE_AI_API_KEY=""
FIREWORKS_API_KEY=""
GROQ_API_KEY=""
PERPLEXITY_API_KEY=""
28 changes: 28 additions & 0 deletions examples/hono/README.md
@@ -0,0 +1,28 @@
# Hono + Vercel AI SDK Example

## Usage

1. Create a `.env` file with the following content (plus more settings, depending on the providers you want to use):

```sh
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
```

2. Run the following commands from the root directory of the AI SDK repo:

```sh
pnpm install
pnpm build
```

3. Run the following command:

```sh
pnpm tsx src/server.ts
```

4. Test the endpoint with `curl`:

```sh
curl -X POST http://localhost:8080
```
20 changes: 20 additions & 0 deletions examples/hono/package.json
@@ -0,0 +1,20 @@
{
"name": "ai-sdk-hono-example",
"version": "0.0.0",
"private": true,
"dependencies": {
"@ai-sdk/openai": "latest",
"@hono/node-server": "1.12.2",
"ai": "latest",
"dotenv": "16.4.5",
"hono": "4.5.11"
},
"scripts": {
"type-check": "tsc --noEmit"
},
"devDependencies": {
"@types/node": "20.11.20",
"tsx": "4.7.1",
"typescript": "5.5.4"
}
}
33 changes: 33 additions & 0 deletions examples/hono/src/server.ts
@@ -0,0 +1,33 @@
import { openai } from '@ai-sdk/openai';
import { serve } from '@hono/node-server';
import { StreamData, streamText } from 'ai';
import 'dotenv/config';
import { Hono } from 'hono';
import { stream } from 'hono/streaming';

const app = new Hono();

app.post('/', async c =>
  stream(c, async stream => {
    // use stream data (optional):
    const data = new StreamData();
    data.append('initialized call');

    const result = await streamText({
      model: openai('gpt-4o'),
      prompt: 'Invent a new holiday and describe its traditions.',
      onFinish() {
        data.append('call completed');
        data.close();
      },
    });

    // Mark the response as a v1 data stream:
    c.header('X-Vercel-AI-Data-Stream', 'v1');
    c.header('Content-Type', 'text/plain; charset=utf-8');

    await stream.pipe(result.toDataStream({ data }));
  }),
);

serve({ fetch: app.fetch, port: 8080 });
18 changes: 18 additions & 0 deletions examples/hono/tsconfig.json
@@ -0,0 +1,18 @@
{
"compilerOptions": {
"strict": true,
"declaration": true,
"sourceMap": true,
"target": "es2022",
"lib": ["es2022", "dom"],
"module": "esnext",
"types": ["node"],
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"moduleResolution": "node",
"rootDir": "./src",
"outDir": "./build",
"skipLibCheck": true
},
"include": ["src/**/*.ts"]
}
3 changes: 1 addition & 2 deletions examples/node-http-server/README.md
@@ -6,7 +6,6 @@

```sh
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
- ...
```

2. Run the following commands from the root directory of the AI SDK repo:
@@ -16,7 +15,7 @@ pnpm install
pnpm build
```

- 3. Run any example (from the `examples/ai-core` directory) with the following command:
+ 3. Run the following command:

```sh
pnpm tsx src/server.ts