README.md (2 changes: 1 addition & 1 deletion)
@@ -53,7 +53,7 @@ A powerful, type-safe AI SDK for building AI-powered applications.
Import only the functionality you need for smaller bundle sizes:

```typescript
// Only chat functionality - no embedding or summarization code bundled
// Only chat functionality - no summarization code bundled
import { openaiText } from '@tanstack/ai-openai/adapters'
import { generate } from '@tanstack/ai'

docs/adapters/anthropic.md (1 change: 0 additions & 1 deletion)
@@ -221,7 +221,6 @@ Creates an Anthropic summarization adapter with an explicit API key.

## Limitations

- **Embeddings**: Anthropic does not support embeddings natively. Use OpenAI or Gemini for embedding needs.
- **Image Generation**: Anthropic does not support image generation. Use OpenAI or Gemini for image generation.

## Next Steps
docs/adapters/gemini.md (55 changes: 1 addition & 54 deletions)
@@ -4,7 +4,7 @@ id: gemini-adapter
order: 3
---

The Google Gemini adapter provides access to Google's Gemini models, including text generation, embeddings, image generation with Imagen, and experimental text-to-speech.
The Google Gemini adapter provides access to Google's Gemini models, including text generation, image generation with Imagen, and experimental text-to-speech.

## Installation

@@ -138,47 +138,6 @@ modelOptions: {
}
```

## Embeddings

Generate text embeddings for semantic search and similarity:

```typescript
import { embedding } from "@tanstack/ai";
import { geminiEmbedding } from "@tanstack/ai-gemini";

const result = await embedding({
adapter: geminiEmbedding("gemini-embedding-001"),
input: "The quick brown fox jumps over the lazy dog",
});

console.log(result.embeddings);
```

### Batch Embeddings

```typescript
const result = await embedding({
adapter: geminiEmbedding("gemini-embedding-001"),
input: [
"First text to embed",
"Second text to embed",
"Third text to embed",
],
});
```

### Embedding Model Options

```typescript
const result = await embedding({
adapter: geminiEmbedding("gemini-embedding-001"),
input: "...",
modelOptions: {
taskType: "RETRIEVAL_DOCUMENT", // or "RETRIEVAL_QUERY", "SEMANTIC_SIMILARITY", etc.
},
});
```

## Summarization

Summarize long text content:
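
A minimal sketch of the call shape, assuming the `summarize` API from `@tanstack/ai` and the `geminiSummarize` adapter documented below; the model name, the `text` field, and the `summary` property on the result are illustrative assumptions, not confirmed API:

```typescript
import { summarize } from "@tanstack/ai";
import { geminiSummarize } from "@tanstack/ai-gemini";

// Sketch only: the model name and the `text`/`summary` names are
// assumptions; `maxLength` mirrors the usage example in docs/api/ai.md.
const result = await summarize({
  adapter: geminiSummarize("gemini-2.5-flash"),
  text: "Long article content to condense...",
  maxLength: 100,
});

console.log(result.summary);
```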
@@ -281,18 +240,6 @@ Creates a Gemini text/chat adapter with an explicit API key.

**Returns:** A Gemini text adapter instance.

### `geminiEmbed(config?)`

Creates a Gemini embedding adapter using environment variables.

**Returns:** A Gemini embed adapter instance.

### `createGeminiEmbed(apiKey, config?)`

Creates a Gemini embedding adapter with an explicit API key.

**Returns:** A Gemini embed adapter instance.

### `geminiSummarize(config?)`

Creates a Gemini summarization adapter using environment variables.
docs/adapters/ollama.md (51 changes: 0 additions & 51 deletions)
@@ -170,45 +170,6 @@ modelOptions: {
}
```

## Embeddings

Generate text embeddings locally:

```typescript
import { embedding } from "@tanstack/ai";
import { ollamaEmbedding } from "@tanstack/ai-ollama";

const result = await embedding({
adapter: ollamaEmbedding("nomic-embed-text"),
input: "The quick brown fox jumps over the lazy dog",
});

console.log(result.embeddings);
```

### Embedding Models

First, pull an embedding model:

```bash
ollama pull nomic-embed-text
# or
ollama pull mxbai-embed-large
```

### Batch Embeddings

```typescript
const result = await embedding({
adapter: ollamaEmbedding("nomic-embed-text"),
input: [
"First text to embed",
"Second text to embed",
"Third text to embed",
],
});
```

## Summarization

Summarize long text content locally:
@@ -299,18 +260,6 @@ Creates an Ollama text/chat adapter with a custom host.

**Returns:** An Ollama text adapter instance.

### `ollamaEmbed(options?)`

Creates an Ollama embedding adapter.

**Returns:** An Ollama embed adapter instance.

### `createOllamaEmbed(host?, options?)`

Creates an Ollama embedding adapter with a custom host.

**Returns:** An Ollama embed adapter instance.

### `ollamaSummarize(options?)`

Creates an Ollama summarization adapter.
docs/adapters/openai.md (57 changes: 1 addition & 56 deletions)
@@ -4,7 +4,7 @@ id: openai-adapter
order: 1
---

The OpenAI adapter provides access to OpenAI's models, including GPT-4o, GPT-5, embeddings, image generation (DALL-E), text-to-speech (TTS), and audio transcription (Whisper).
The OpenAI adapter provides access to OpenAI's models, including GPT-4o, GPT-5, image generation (DALL-E), text-to-speech (TTS), and audio transcription (Whisper).

## Installation

@@ -132,49 +132,6 @@ modelOptions: {

When reasoning is enabled, the model's reasoning process is streamed separately from the response text and appears as a collapsible thinking section in the UI.
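
As a hedged illustration of wiring this up (the call shape and the `reasoning` option below are hypothetical placeholders, not the confirmed adapter API):

```typescript
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

// Sketch only: the `messages` shape and the `reasoning` modelOptions entry
// are assumptions for illustration; check the adapter reference for the
// real option names.
const stream = chat({
  adapter: openaiText("gpt-5"),
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  modelOptions: {
    reasoning: { effort: "medium" }, // hypothetical option
  },
});
```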

## Embeddings

Generate text embeddings for semantic search and similarity:

```typescript
import { embedding } from "@tanstack/ai";
import { openaiEmbedding } from "@tanstack/ai-openai";

const result = await embedding({
adapter: openaiEmbedding("text-embedding-3-small"),
input: "The quick brown fox jumps over the lazy dog",
});

console.log(result.embeddings); // Array of embedding vectors
```

### Batch Embeddings

```typescript
const result = await embedding({
adapter: openaiEmbedding("text-embedding-3-small"),
input: [
"First text to embed",
"Second text to embed",
"Third text to embed",
],
});

// result.embeddings contains an array of vectors
```

### Embedding Model Options

```typescript
const result = await embedding({
adapter: openaiEmbedding("text-embedding-3-small"),
input: "...",
modelOptions: {
dimensions: 512, // Reduce dimensions for smaller storage
},
});
```

## Summarization

Summarize long text content:
@@ -321,18 +278,6 @@ Creates an OpenAI chat adapter with an explicit API key.

**Returns:** An OpenAI chat adapter instance.

### `openaiEmbedding(config?)`

Creates an OpenAI embedding adapter using environment variables.

**Returns:** An OpenAI embedding adapter instance.

### `createOpenaiEmbedding(apiKey, config?)`

Creates an OpenAI embedding adapter with an explicit API key.

**Returns:** An OpenAI embedding adapter instance.

### `openaiSummarize(config?)`

Creates an OpenAI summarization adapter using environment variables.
docs/api/ai.md (33 changes: 1 addition & 32 deletions)
@@ -71,30 +71,6 @@ const result = await summarize({

A `SummarizationResult` with the summary text.

## `embedding(options)`

Creates embeddings for text input.

```typescript
import { embedding } from "@tanstack/ai";
import { openaiEmbedding } from "@tanstack/ai-openai";

const result = await embedding({
adapter: openaiEmbedding("text-embedding-3-small"),
input: "Text to embed",
});
```

### Parameters

- `adapter` - An AI adapter instance configured with a model
- `input` - Text or array of texts to embed
- `modelOptions?` - Model-specific options

### Returns

An `EmbeddingResult` with an embeddings array.

## `toolDefinition(config)`

Creates an isomorphic tool definition that can be instantiated for server or client execution.
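
A hypothetical sketch of what such a definition might look like; every config field below (`name`, `description`, `parameters`) is an assumption for illustration, not the confirmed `toolDefinition` config:

```typescript
import { toolDefinition } from "@tanstack/ai";

// Hypothetical sketch: field names are assumptions, not confirmed API.
const getWeather = toolDefinition({
  name: "getWeather",
  description: "Look up the current weather for a city",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
});

// Being isomorphic, the same definition could later be bound to a server
// or client implementation; execution wiring is omitted from this sketch.
```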
@@ -289,11 +265,10 @@ interface Tool {
## Usage Examples

```typescript
import { chat, summarize, embedding, generateImage } from "@tanstack/ai";
import { chat, summarize, generateImage } from "@tanstack/ai";
import {
openaiText,
openaiSummarize,
openaiEmbedding,
openaiImage,
} from "@tanstack/ai-openai";

@@ -356,12 +331,6 @@ const summary = await summarize({
maxLength: 100,
});

// --- Embeddings
const embeddings = await embedding({
adapter: openaiEmbedding("text-embedding-3-small"),
input: "Text to embed",
});

// --- Image generation
const image = await generateImage({
adapter: openaiImage("dall-e-3"),