Merged
2 changes: 1 addition & 1 deletion README.md
@@ -53,7 +53,7 @@ A powerful, type-safe AI SDK for building AI-powered applications.
 Import only the functionality you need for smaller bundle sizes:

 ```typescript
-// Only chat functionality - no embedding or summarization code bundled
+// Only chat functionality - no summarization code bundled
 import { openaiText } from '@tanstack/ai-openai/adapters'
 import { generate } from '@tanstack/ai'

31 changes: 7 additions & 24 deletions docs/adapters/anthropic.md
@@ -18,11 +18,8 @@ npm install @tanstack/ai-anthropic
 import { chat } from "@tanstack/ai";
 import { anthropicText } from "@tanstack/ai-anthropic";

-const adapter = anthropicText();
-
 const stream = chat({
-  adapter,
-  model: "claude-sonnet-4-5",
+  adapter: anthropicText("claude-sonnet-4-5"),
   messages: [{ role: "user", content: "Hello!" }],
 });
 ```
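The API change running through this diff folds the model name into the adapter factory, replacing the separate `model` option on `chat()`. A minimal self-contained sketch of that factory pattern (hypothetical types and a stub `chat()`, not the actual `@tanstack/ai` internals):

```typescript
// Hypothetical sketch of the adapter-factory pattern this diff adopts:
// the model name is bound when the adapter is created, not passed to chat().
type Message = { role: "user" | "assistant"; content: string };

interface TextAdapter {
  model: string;
  complete(messages: Message[]): string;
}

// Mirrors anthropicText("claude-sonnet-4-5"): the factory closes over the model.
function makeTextAdapter(model: string): TextAdapter {
  return {
    model,
    // Stub completion so the sketch runs without any provider.
    complete: (messages) =>
      `[${model}] echo: ${messages[messages.length - 1].content}`,
  };
}

// chat() no longer needs a separate `model` option.
function chat(opts: { adapter: TextAdapter; messages: Message[] }): string {
  return opts.adapter.complete(opts.messages);
}

const reply = chat({
  adapter: makeTextAdapter("claude-sonnet-4-5"),
  messages: [{ role: "user", content: "Hello!" }],
});
```

Binding the model at adapter creation keeps the provider/model pairing in one place, so call sites cannot combine an adapter with a model it does not serve.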
@@ -38,8 +35,7 @@ const adapter = createAnthropicChat(process.env.ANTHROPIC_API_KEY!, {
 });

 const stream = chat({
-  adapter,
-  model: "claude-sonnet-4-5",
+  adapter: adapter("claude-sonnet-4-5"),
   messages: [{ role: "user", content: "Hello!" }],
 });
 ```
@@ -63,14 +59,11 @@ const adapter = createAnthropicChat(process.env.ANTHROPIC_API_KEY!, config);
 import { chat, toStreamResponse } from "@tanstack/ai";
 import { anthropicText } from "@tanstack/ai-anthropic";

-const adapter = anthropicText();
-
 export async function POST(request: Request) {
   const { messages } = await request.json();

   const stream = chat({
-    adapter,
-    model: "claude-sonnet-4-5",
+    adapter: anthropicText("claude-sonnet-4-5"),
     messages,
   });

@@ -85,8 +78,6 @@ import { chat, toolDefinition } from "@tanstack/ai";
 import { anthropicText } from "@tanstack/ai-anthropic";
 import { z } from "zod";

-const adapter = anthropicText();
-
 const searchDatabaseDef = toolDefinition({
   name: "search_database",
   description: "Search the database",
@@ -101,8 +92,7 @@ const searchDatabase = searchDatabaseDef.server(async ({ query }) => {
 });

 const stream = chat({
-  adapter,
-  model: "claude-sonnet-4-5",
+  adapter: anthropicText("claude-sonnet-4-5"),
   messages,
   tools: [searchDatabase],
 });
@@ -114,8 +104,7 @@ Anthropic supports various provider-specific options:

 ```typescript
 const stream = chat({
-  adapter: anthropicText(),
-  model: "claude-sonnet-4-5",
+  adapter: anthropicText("claude-sonnet-4-5"),
   messages,
   modelOptions: {
     max_tokens: 4096,
@@ -148,8 +137,7 @@ Cache prompts for better performance and reduced costs:

 ```typescript
 const stream = chat({
-  adapter: anthropicText(),
-  model: "claude-sonnet-4-5",
+  adapter: anthropicText("claude-sonnet-4-5"),
   messages: [
     {
       role: "user",
@@ -166,7 +154,6 @@ const stream = chat({
       ],
     },
   ],
-  model: "claude-sonnet-4-5",
 });
 ```
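In the raw Anthropic Messages API, prompt caching is requested by tagging a content block with `cache_control: { type: "ephemeral" }`; everything up to and including the tagged block becomes cacheable. The exact shape `@tanstack/ai` exposes is elided in the hunk above, so the helper below is only an illustration of the underlying block structure:

```typescript
// Illustrative only: tag the last content block of a message as cacheable,
// using the raw Anthropic API's cache_control shape.
type ContentBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

type CacheableMessage = { role: "user" | "assistant"; content: ContentBlock[] };

// Mark the final block so the prefix up to it can be cached across requests.
function markCacheable(message: CacheableMessage): CacheableMessage {
  const blocks = message.content.map((block, i) =>
    i === message.content.length - 1
      ? { ...block, cache_control: { type: "ephemeral" as const } }
      : block,
  );
  return { ...message, content: blocks };
}

const msg = markCacheable({
  role: "user",
  content: [
    { type: "text", text: "Here is a large reference document: ..." },
    { type: "text", text: "Question: what does it say?" },
  ],
});
```

Caching pays off when the same large prefix (system prompt, reference document) is resent on every turn of a conversation.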

@@ -178,11 +165,8 @@ Anthropic supports text summarization:
 import { summarize } from "@tanstack/ai";
 import { anthropicSummarize } from "@tanstack/ai-anthropic";

-const adapter = anthropicSummarize();
-
 const result = await summarize({
-  adapter,
-  model: "claude-sonnet-4-5",
+  adapter: anthropicSummarize("claude-sonnet-4-5"),
   text: "Your long text to summarize...",
   maxLength: 100,
   style: "concise", // "concise" | "bullet-points" | "paragraph"
@@ -237,7 +221,6 @@ Creates an Anthropic summarization adapter with an explicit API key.

 ## Limitations

-- **Embeddings**: Anthropic does not support embeddings natively. Use OpenAI or Gemini for embedding needs.
 - **Image Generation**: Anthropic does not support image generation. Use OpenAI or Gemini for image generation.

 ## Next Steps
99 changes: 10 additions & 89 deletions docs/adapters/gemini.md
@@ -4,7 +4,7 @@ id: gemini-adapter
 order: 3
 ---

-The Google Gemini adapter provides access to Google's Gemini models, including text generation, embeddings, image generation with Imagen, and experimental text-to-speech.
+The Google Gemini adapter provides access to Google's Gemini models, including text generation, image generation with Imagen, and experimental text-to-speech.

 ## Installation

@@ -18,11 +18,8 @@ npm install @tanstack/ai-gemini
 import { chat } from "@tanstack/ai";
 import { geminiText } from "@tanstack/ai-gemini";

-const adapter = geminiText();
-
 const stream = chat({
-  adapter,
-  model: "gemini-2.5-pro",
+  adapter: geminiText("gemini-2.5-pro"),
   messages: [{ role: "user", content: "Hello!" }],
 });
 ```
@@ -38,8 +35,7 @@ const adapter = createGeminiChat(process.env.GEMINI_API_KEY!, {
 });

 const stream = chat({
-  adapter,
-  model: "gemini-2.5-pro",
+  adapter: adapter("gemini-2.5-pro"),
   messages: [{ role: "user", content: "Hello!" }],
 });
 ```
@@ -63,14 +59,11 @@ const adapter = createGeminiChat(process.env.GEMINI_API_KEY!, config);
 import { chat, toStreamResponse } from "@tanstack/ai";
 import { geminiText } from "@tanstack/ai-gemini";

-const adapter = geminiText();
-
 export async function POST(request: Request) {
   const { messages } = await request.json();

   const stream = chat({
-    adapter,
-    model: "gemini-2.5-pro",
+    adapter: geminiText("gemini-2.5-pro"),
     messages,
   });

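`toStreamResponse` converts the chat stream into an HTTP streaming `Response` for the route handler. A self-contained sketch of what such a helper plausibly does (assumed behavior, not the actual `@tanstack/ai` implementation):

```typescript
// Illustrative: wrap an async iterable of text chunks in a web Response
// whose body streams each chunk to the client as it arrives.
function toStreamResponseSketch(chunks: AsyncIterable<string>): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}

// Stand-in for the model's token stream.
async function* demoChunks() {
  yield "Hello, ";
  yield "world!";
}

const response = toStreamResponseSketch(demoChunks());
```

Because the body is a `ReadableStream`, the client can render partial output while the model is still generating, instead of waiting for the full completion.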
@@ -85,8 +78,6 @@ import { chat, toolDefinition } from "@tanstack/ai";
 import { geminiText } from "@tanstack/ai-gemini";
 import { z } from "zod";

-const adapter = geminiText();
-
 const getCalendarEventsDef = toolDefinition({
   name: "get_calendar_events",
   description: "Get calendar events for a date",
@@ -101,8 +92,7 @@ const getCalendarEvents = getCalendarEventsDef.server(async ({ date }) => {
 });

 const stream = chat({
-  adapter,
-  model: "gemini-2.5-pro",
+  adapter: geminiText("gemini-2.5-pro"),
   messages,
   tools: [getCalendarEvents],
 });
@@ -114,8 +104,7 @@ Gemini supports various model-specific options:

 ```typescript
 const stream = chat({
-  adapter: geminiText(),
-  model: "gemini-2.5-pro",
+  adapter: geminiText("gemini-2.5-pro"),
   messages,
   modelOptions: {
     maxOutputTokens: 2048,
@@ -149,52 +138,6 @@ modelOptions: {
 }
 ```

-## Embeddings
-
-Generate text embeddings for semantic search and similarity:
-
-```typescript
-import { embedding } from "@tanstack/ai";
-import { geminiEmbedding } from "@tanstack/ai-gemini";
-
-const adapter = geminiEmbedding();
-
-const result = await embedding({
-  adapter,
-  model: "gemini-embedding-001",
-  input: "The quick brown fox jumps over the lazy dog",
-});
-
-console.log(result.embeddings);
-```
-
-### Batch Embeddings
-
-```typescript
-const result = await embedding({
-  adapter: geminiEmbedding(),
-  model: "gemini-embedding-001",
-  input: [
-    "First text to embed",
-    "Second text to embed",
-    "Third text to embed",
-  ],
-});
-```
-
-### Embedding Model Options
-
-```typescript
-const result = await embedding({
-  adapter: geminiEmbedding(),
-  model: "gemini-embedding-001",
-  input: "...",
-  modelOptions: {
-    taskType: "RETRIEVAL_DOCUMENT", // or "RETRIEVAL_QUERY", "SEMANTIC_SIMILARITY", etc.
-  },
-});
-```
-
 ## Summarization

 Summarize long text content:
@@ -203,11 +146,8 @@ import { summarize } from "@tanstack/ai";
 import { summarize } from "@tanstack/ai";
 import { geminiSummarize } from "@tanstack/ai-gemini";

-const adapter = geminiSummarize();
-
 const result = await summarize({
-  adapter,
-  model: "gemini-2.5-pro",
+  adapter: geminiSummarize("gemini-2.5-pro"),
   text: "Your long text to summarize...",
   maxLength: 100,
   style: "concise", // "concise" | "bullet-points" | "paragraph"
@@ -224,11 +164,8 @@ Generate images with Imagen:
 import { generateImage } from "@tanstack/ai";
 import { geminiImage } from "@tanstack/ai-gemini";

-const adapter = geminiImage();
-
 const result = await generateImage({
-  adapter,
-  model: "imagen-3.0-generate-002",
+  adapter: geminiImage("imagen-3.0-generate-002"),
   prompt: "A futuristic cityscape at sunset",
   numberOfImages: 1,
 });
@@ -240,8 +177,7 @@ console.log(result.images);

 ```typescript
 const result = await generateImage({
-  adapter: geminiImage(),
-  model: "imagen-3.0-generate-002",
+  adapter: geminiImage("imagen-3.0-generate-002"),
   prompt: "...",
   modelOptions: {
     aspectRatio: "16:9", // "1:1" | "3:4" | "4:3" | "9:16" | "16:9"
@@ -261,11 +197,8 @@ Generate speech from text:
 import { generateSpeech } from "@tanstack/ai";
 import { geminiSpeech } from "@tanstack/ai-gemini";

-const adapter = geminiSpeech();
-
 const result = await generateSpeech({
-  adapter,
-  model: "gemini-2.5-flash-preview-tts",
+  adapter: geminiSpeech("gemini-2.5-flash-preview-tts"),
   text: "Hello from Gemini TTS!",
 });

@@ -307,18 +240,6 @@ Creates a Gemini text/chat adapter with an explicit API key.

 **Returns:** A Gemini text adapter instance.

-### `geminiEmbed(config?)`
-
-Creates a Gemini embedding adapter using environment variables.
-
-**Returns:** A Gemini embed adapter instance.
-
-### `createGeminiEmbed(apiKey, config?)`
-
-Creates a Gemini embedding adapter with an explicit API key.
-
-**Returns:** A Gemini embed adapter instance.
-
 ### `geminiSummarize(config?)`

 Creates a Gemini summarization adapter using environment variables.