Agent docs examples #1706

Merged
merged 34 commits into from
Feb 13, 2025
Commits
f7be5e6
Uses image cards for the frameworks
samejr Feb 10, 2025
a002e78
Removes old snippets
samejr Feb 10, 2025
57a1ecf
New AI agents side menu section
samejr Feb 10, 2025
3925c62
WIP adding new ai agent pages
samejr Feb 10, 2025
a4e8f43
Better overview page
samejr Feb 10, 2025
a20e082
More copy added to the agent example pages
samejr Feb 11, 2025
134f6a2
Copy improvements
samejr Feb 11, 2025
2b8654e
Removes “Creating a project” page and side menu section
samejr Feb 11, 2025
555b70a
Fixes broken links
samejr Feb 11, 2025
b8a44e5
Updates to the latest Mintlify version, fixes issues, changes theme
samejr Feb 11, 2025
584c826
Adds descriptions to the main dropdown menu items
samejr Feb 11, 2025
8133b81
Reformatted Introduction docs ‘landing page’
samejr Feb 12, 2025
a13d7ed
Retry heartbeat timeouts by putting back in the queue (#1689)
matt-aitken Feb 10, 2025
f0029b8
OOM retrying on larger machines (#1691)
matt-aitken Feb 10, 2025
39b4a4c
Kubernetes OOMs appear as non-zero sigkills, adding support for treat…
matt-aitken Feb 11, 2025
535cae9
Complete the original attempt span if retrying due to an OOM
matt-aitken Feb 11, 2025
dd651ab
Revert "Complete the original attempt span if retrying due to an OOM"
matt-aitken Feb 11, 2025
e375d81
chore: Update version for release (#1666)
github-actions[bot] Feb 11, 2025
0bcf18b
Release 3.3.14
matt-aitken Feb 11, 2025
23095ba
Set machine when triggering docs
matt-aitken Feb 11, 2025
7ca39d8
Batch queue runs that are waiting for deploy (#1693)
matt-aitken Feb 11, 2025
8a24c03
Detect ffmpeg OOM errors, added manual OutOfMemoryError (#1694)
matt-aitken Feb 12, 2025
90de1c8
Improved the machines docs, including the new OutOfMemoryError
matt-aitken Feb 12, 2025
4b50354
chore: Update version for release (#1695)
github-actions[bot] Feb 12, 2025
31d8941
Release 3.3.15
matt-aitken Feb 12, 2025
2c02c8b
Create new partitioned TaskEvent table, and switch to it gradually as…
ericallam Feb 12, 2025
ed972ac
Don't create an attempt if the run is final, batchTriggerAndWait bad …
matt-aitken Feb 12, 2025
3bc5ead
Fix missing logs on child runs by using the root task run createdAt i…
ericallam Feb 12, 2025
37db88b
Provider changes to support image cache (#1700)
nicktrn Feb 12, 2025
d88f5bc
Fix run container exits after OOM retries (#1701)
nicktrn Feb 12, 2025
baa5ead
Upgrade local dev to use electric beta.15 (#1699)
ericallam Feb 13, 2025
3f6b934
Text fixes
samejr Feb 13, 2025
a32be10
Merge remote-tracking branch 'origin/main' into agent-docs-examples
samejr Feb 13, 2025
5b41766
Removed pnpm files
samejr Feb 13, 2025
484 changes: 484 additions & 0 deletions docs/docs.json

Large diffs are not rendered by default.

Binary file added docs/guides/ai-agents/evaluator-optimizer.png
120 changes: 120 additions & 0 deletions docs/guides/ai-agents/generate-translate-copy.mdx
@@ -0,0 +1,120 @@
---
title: "Generate and translate copy"
sidebarTitle: "Generate & translate copy"
description: "Create an AI agent workflow that generates and translates copy"
---

## Overview

**Prompt chaining** is an AI workflow pattern that decomposes a complex task into a sequence of steps, where each LLM call processes the output of the previous one. This approach trades off latency for higher accuracy by making each LLM call an easier, more focused task, with the ability to add programmatic checks between steps to ensure the process remains on track.

![Generating and translating copy](/guides/ai-agents/prompt-chaining.png)
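
Stripped of the LLM specifics, the pattern is just sequential async steps with a programmatic gate between them. Here is a minimal sketch of that shape; the `stepOne`, `stepTwo`, and `gate` functions are stand-ins for illustration, not part of the example task below:

```typescript
// Minimal prompt-chaining sketch: step 2 runs only if the gate passes.
async function stepOne(input: string): Promise<string> {
  // Stand-in for the first LLM call
  return `draft for ${input}`;
}

async function stepTwo(draft: string): Promise<string> {
  // Stand-in for the second LLM call
  return `refined: ${draft}`;
}

function gate(draft: string): boolean {
  // Programmatic check between steps (e.g. a word-count validation)
  return draft.length > 0;
}

async function chain(input: string): Promise<string> {
  const draft = await stepOne(input);
  if (!gate(draft)) {
    throw new Error("Gate failed: draft did not pass the check");
  }
  return stepTwo(draft);
}
```

The gate is ordinary code, not another LLM call, which is what keeps a chained workflow cheap to validate at each step.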

## Example task

In this example, we'll create a workflow that generates and translates copy. This approach is particularly effective when a task decomposes cleanly into fixed, sequential steps, with a programmatic check between them to keep the output on track.

**This task:**

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
- Uses `experimental_telemetry` to provide LLM logs
- Generates marketing copy based on subject and target word count
- Validates the generated copy meets word count requirements (±10 words)
- Translates the validated copy to the target language while preserving tone

```typescript
import { openai } from "@ai-sdk/openai";
import { task } from "@trigger.dev/sdk/v3";
import { generateText } from "ai";

export interface TranslatePayload {
  marketingSubject: string;
  targetLanguage: string;
  targetWordCount: number;
}
Comment on lines +30 to +34
🛠️ Refactor suggestion

Add input validation for payload fields.

Consider adding validation for the payload fields:

```diff
+import { z } from "zod";
+
+const TranslatePayloadSchema = z.object({
+  marketingSubject: z.string().min(1),
+  targetLanguage: z.string().min(2),
+  targetWordCount: z.number().int().positive().max(1000),
+});
+
 export interface TranslatePayload {
   marketingSubject: string;
   targetLanguage: string;
   targetWordCount: number;
 }
```


export const generateAndTranslateTask = task({
  id: "generate-and-translate-copy",
  maxDuration: 300, // Stop executing after 5 mins of compute
  run: async (payload: TranslatePayload) => {
    // Step 1: Generate marketing copy
    const generatedCopy = await generateText({
      model: openai("o1-mini"),
      messages: [
        {
          role: "system",
          content: "You are an expert copywriter.",
        },
        {
          role: "user",
          content: `Generate as close as possible to ${payload.targetWordCount} words of compelling marketing copy for ${payload.marketingSubject}`,
        },
      ],
      experimental_telemetry: {
        isEnabled: true,
        functionId: "generate-and-translate-copy",
      },
    });

    // Gate: Validate the generated copy meets the word count target
    const wordCount = generatedCopy.text.split(/\s+/).length;

    if (
      wordCount < payload.targetWordCount - 10 ||
      wordCount > payload.targetWordCount + 10
    ) {
      throw new Error(
        `Generated copy length (${wordCount} words) is outside acceptable range of ${
          payload.targetWordCount - 10
        }-${payload.targetWordCount + 10} words`
      );
    }

    // Step 2: Translate to target language
    const translatedCopy = await generateText({
      model: openai("o1-mini"),
      messages: [
        {
          role: "system",
          content: `You are an expert translator specializing in marketing content translation into ${payload.targetLanguage}.`,
        },
        {
          role: "user",
          content: `Translate the following marketing copy to ${payload.targetLanguage}, maintaining the same tone and marketing impact:\n\n${generatedCopy.text}`,
        },
      ],
      experimental_telemetry: {
        isEnabled: true,
        functionId: "generate-and-translate-copy",
      },
    });

    return {
      englishCopy: generatedCopy.text,
      translatedCopy: translatedCopy.text,
    };
  },
});
```
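
The word-count gate above is simple enough to factor into a pure helper, which makes the ±10-word rule easy to unit test without an LLM call. The helper name `isWithinWordCount` is our own for illustration, not part of the example:

```typescript
// True if `text` is within ±tolerance words of `target`.
// Mirrors the gate inside generateAndTranslateTask.
function isWithinWordCount(text: string, target: number, tolerance = 10): boolean {
  const wordCount = text.trim().split(/\s+/).filter(Boolean).length;
  return Math.abs(wordCount - target) <= tolerance;
}
```

Extracting gates like this keeps the task body focused on orchestration while the validation logic stays independently testable.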

## Run a test

On the Test page in the dashboard, select the `generate-and-translate-copy` task and include a payload like the following:

```json
{
  "marketingSubject": "The controversial new Jaguar electric concept car",
  "targetLanguage": "Spanish",
  "targetWordCount": 100
}
```

This example payload generates copy and then translates it using sequential LLM calls. The translation only begins after the generated copy has been validated against the word count requirements.

<video
src="https://content.trigger.dev/agent-prompt-chaining-3.mp4"
controls
muted
autoPlay
loop
/>
Binary file added docs/guides/ai-agents/orchestrator-workers.png
17 changes: 17 additions & 0 deletions docs/guides/ai-agents/overview.mdx
@@ -0,0 +1,17 @@
---
title: "AI agents overview"
sidebarTitle: "Overview"
description: "Real-world AI agent example tasks using Trigger.dev"
---

## Overview

This guide will show you how to set up different types of AI agent workflows with Trigger.dev. The examples take inspiration from Anthropic's blog post on [building effective agents](https://www.anthropic.com/research/building-effective-agents).

<CardGroup cols={2}>
<Card title="Prompt chaining" img="/guides/ai-agents/prompt-chaining.png" href="/guides/ai-agents/generate-translate-copy">Chain prompts together to generate and translate marketing copy automatically</Card>
<Card title="Routing" img="/guides/ai-agents/routing.png" href="/guides/ai-agents/route-question">Send questions to different AI models based on complexity analysis</Card>
<Card title="Parallelization" img="/guides/ai-agents/parallelization.png" href="/guides/ai-agents/respond-and-check-content">Simultaneously check for inappropriate content while responding to customer inquiries</Card>
<Card title="Orchestrator" img="/guides/ai-agents/orchestrator-workers.png" href="/guides/ai-agents/verify-news-article">Coordinate multiple AI workers to verify news article accuracy</Card>
<Card title="Evaluator-optimizer" img="/guides/ai-agents/evaluator-optimizer.png" href="/guides/ai-agents/translate-and-refine">Translate text and automatically improve quality through feedback loops</Card>
</CardGroup>
Binary file added docs/guides/ai-agents/parallelization.png
Binary file added docs/guides/ai-agents/prompt-chaining.png
134 changes: 134 additions & 0 deletions docs/guides/ai-agents/respond-and-check-content.mdx
@@ -0,0 +1,134 @@
---
title: "Respond to customer inquiry and check for inappropriate content"
sidebarTitle: "Respond & check content"
description: "Create an AI agent workflow that responds to customer inquiries while checking if their text is inappropriate"
---

## Overview

**Parallelization** is a workflow pattern where multiple tasks or processes run simultaneously instead of sequentially, allowing for more efficient use of resources and faster overall execution. It's particularly valuable when different parts of a task can be handled independently, such as running content analysis and response generation at the same time.
![Parallelization](/guides/ai-agents/parallelization.png)
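
Conceptually this is the same shape as joining two independent promises. A plain `Promise.all` sketch of the idea, with no Trigger.dev involved; `moderate` and `respond` are stand-ins for the two LLM-backed tasks in the example below:

```typescript
// Minimal sketch of the parallelization pattern with plain promises.
async function moderate(text: string): Promise<boolean> {
  return /badword/i.test(text); // stand-in for the content moderation call
}

async function respond(question: string): Promise<string> {
  return `Answer to: ${question}`; // stand-in for the response generation call
}

async function handleQuestion(question: string) {
  // Both calls start immediately and run concurrently; we join on both results
  const [flagged, answer] = await Promise.all([moderate(question), respond(question)]);
  if (flagged) {
    return { response: "I apologize, but I cannot process this request.", wasInappropriate: true };
  }
  return { response: answer, wasInappropriate: false };
}
```

The real example replaces `Promise.all` with `batch.triggerByTaskAndWait`, which gives each branch its own durable, retryable run instead of an in-process promise.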

## Example task

In this example, we'll create a workflow that simultaneously checks content for issues while responding to customer inquiries. This approach is particularly effective when tasks require multiple perspectives or parallel processing streams, with the orchestrator synthesizing the results into a cohesive output.

**This task:**

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
- Uses `experimental_telemetry` to provide LLM logs
- Uses [`batch.triggerByTaskAndWait`](/triggering#batch-triggerbytaskandwait) to run customer response and content moderation tasks in parallel
- Generates customer service responses using an AI model
- Simultaneously checks for inappropriate content while generating responses

```typescript
import { openai } from "@ai-sdk/openai";
import { batch, task } from "@trigger.dev/sdk/v3";
import { generateText } from "ai";

// Task to generate customer response
export const generateCustomerResponse = task({
  id: "generate-customer-response",
  run: async (payload: { question: string }) => {
    const response = await generateText({
      model: openai("o1-mini"),
      messages: [
        {
          role: "system",
          content: "You are a helpful customer service representative.",
        },
        { role: "user", content: payload.question },
      ],
      experimental_telemetry: {
        isEnabled: true,
        functionId: "generate-customer-response",
      },
    });

    return response.text;
  },
});

// Task to check for inappropriate content
export const checkInappropriateContent = task({
  id: "check-inappropriate-content",
  run: async (payload: { text: string }) => {
    const response = await generateText({
      model: openai("o1-mini"),
      messages: [
        {
          role: "system",
          content:
            "You are a content moderator. Respond with 'true' if the content is inappropriate or contains harmful, threatening, offensive, or explicit content, 'false' otherwise.",
        },
        { role: "user", content: payload.text },
      ],
      experimental_telemetry: {
        isEnabled: true,
        functionId: "check-inappropriate-content",
      },
    });

    return response.text.toLowerCase().includes("true");
  },
});

// Main task that coordinates the parallel execution
export const handleCustomerQuestion = task({
  id: "handle-customer-question",
  run: async (payload: { question: string }) => {
    const {
      runs: [responseRun, moderationRun],
    } = await batch.triggerByTaskAndWait([
      {
        task: generateCustomerResponse,
        payload: { question: payload.question },
      },
      {
        task: checkInappropriateContent,
        payload: { text: payload.question },
      },
    ]);

    // Check moderation result first
    if (moderationRun.ok && moderationRun.output === true) {
      return {
        response:
          "I apologize, but I cannot process this request as it contains inappropriate content.",
        wasInappropriate: true,
      };
    }

    // Return the generated response if everything is ok
    if (responseRun.ok) {
      return {
        response: responseRun.output,
        wasInappropriate: false,
      };
    }

    // Handle any errors
    throw new Error("Failed to process customer question");
  },
});
```

## Run a test

On the Test page in the dashboard, select the `handle-customer-question` task and include a payload like the following:

```json
{
  "question": "Can you explain 2FA?"
}
```

When triggered with a question, the task simultaneously generates a response while checking for inappropriate content using two parallel LLM calls. The main task waits for both operations to complete before delivering the final response.

<video
src="https://content.trigger.dev/agent-parallelization.mp4"
controls
muted
autoPlay
loop
/>
114 changes: 114 additions & 0 deletions docs/guides/ai-agents/route-question.mdx
@@ -0,0 +1,114 @@
---
title: "Route a question to a different AI model"
sidebarTitle: "Route a question"
description: "Create an AI agent workflow that routes a question to a different AI model depending on its complexity"
---

## Overview

**Routing** is a workflow pattern that classifies an input and directs it to a specialized followup task. This pattern allows for separation of concerns and building more specialized prompts, which is particularly effective when there are distinct categories that are better handled separately. Without routing, optimizing for one kind of input can hurt performance on other inputs.

![Routing](/guides/ai-agents/routing.png)

## Example task

In this example, we'll create a workflow that routes a question to a different AI model depending on its complexity. This approach is particularly effective when tasks require different models or approaches for different inputs.

**This task:**

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
- Uses `experimental_telemetry` to provide LLM logs
- Routes questions using a lightweight model (`o1-mini`) to classify complexity
- Directs simple questions to `gpt-4o` and complex ones to `gpt-o3-mini`
- Returns both the answer and metadata about the routing decision

```typescript
import { openai } from "@ai-sdk/openai";
import { task } from "@trigger.dev/sdk/v3";
import { generateText } from "ai";
import { z } from "zod";

// Schema for router response
const routingSchema = z.object({
  model: z.enum(["gpt-4o", "gpt-o3-mini"]),
  reason: z.string(),
});

// Router prompt template
const ROUTER_PROMPT = `You are a routing assistant that determines the complexity of questions.
Analyze the following question and route it to the appropriate model:

- Use "gpt-4o" for simple, common, or straightforward questions
- Use "gpt-o3-mini" for complex, unusual, or questions requiring deep reasoning

Respond with a JSON object in this exact format:
{"model": "gpt-4o" or "gpt-o3-mini", "reason": "your reasoning here"}

Question: `;

export const routeAndAnswerQuestion = task({
  id: "route-and-answer-question",
  run: async (payload: { question: string }) => {
    // Step 1: Route the question
    const routingResponse = await generateText({
      model: openai("o1-mini"),
      messages: [
        {
          role: "system",
          content:
            "You must respond with a valid JSON object containing only 'model' and 'reason' fields. No markdown, no backticks, no explanation.",
        },
        {
          role: "user",
          content: ROUTER_PROMPT + payload.question,
        },
      ],
      temperature: 0.1,
      experimental_telemetry: {
        isEnabled: true,
        functionId: "route-and-answer-question",
      },
    });

    // Add error handling and cleanup
    let jsonText = routingResponse.text.trim();
    if (jsonText.startsWith("```")) {
      jsonText = jsonText.replace(/```json\n|\n```/g, "");
    }

    const routingResult = routingSchema.parse(JSON.parse(jsonText));
Comment on lines +73 to +79
🛠️ Refactor suggestion

Enhance JSON parsing error handling.

Consider adding more robust error handling for JSON parsing:

```diff
 let jsonText = routingResponse.text.trim();
 if (jsonText.startsWith("```")) {
   jsonText = jsonText.replace(/```json\n|\n```/g, "");
 }
+try {
+  const parsed = JSON.parse(jsonText);
+  const routingResult = routingSchema.parse(parsed);
+} catch (error) {
+  if (error instanceof SyntaxError) {
+    throw new Error(`Invalid JSON response from routing model: ${error.message}`);
+  }
+  throw error;
+}
-const routingResult = routingSchema.parse(JSON.parse(jsonText));
```

    // Step 2: Get the answer using the selected model
    const answerResult = await generateText({
      model: openai(routingResult.model),
      messages: [{ role: "user", content: payload.question }],
    });

    return {
      answer: answerResult.text,
      selectedModel: routingResult.model,
      routingReason: routingResult.reason,
    };
  },
});
```
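
The fence-stripping and validation step in the middle of the task can be pulled out into a small pure helper so it can be tested without an LLM call. `parseRoutingResult` is a hypothetical name, and the hand-written shape check below stands in for the zod schema used in the task:

```typescript
type RoutingResult = { model: "gpt-4o" | "gpt-o3-mini"; reason: string };

// Hypothetical helper: strips optional markdown fences from the model's
// reply, then validates the parsed JSON against the expected shape.
function parseRoutingResult(raw: string): RoutingResult {
  let jsonText = raw.trim();
  if (jsonText.startsWith("```")) {
    jsonText = jsonText.replace(/```json\n|\n```/g, "");
  }
  let parsed: unknown;
  try {
    parsed = JSON.parse(jsonText);
  } catch (error) {
    throw new Error(`Invalid JSON response from routing model: ${(error as Error).message}`);
  }
  const candidate = parsed as Partial<RoutingResult>;
  if (
    (candidate.model !== "gpt-4o" && candidate.model !== "gpt-o3-mini") ||
    typeof candidate.reason !== "string"
  ) {
    throw new Error("Routing response did not match the expected shape");
  }
  return candidate as RoutingResult;
}
```

Isolating the parsing also gives you one place to extend the cleanup if a model starts emitting other fence variants.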



## Run a test

Triggering our task with a simple question shows it routing to the gpt-4o model and returning the answer with reasoning:

```json
{
  "question": "How many planets are there in the solar system?"
}
```

<video
src="https://content.trigger.dev/agent-routing.mp4"
controls
muted
autoPlay
loop
/>
Binary file added docs/guides/ai-agents/routing.png