
Commit 42b914a

Refactor the Effect AI SDKs (#5469)
1 parent faf6314 commit 42b914a


73 files changed (+33628 -15593 lines)

.changeset/neat-melons-care.md

Lines changed: 247 additions & 0 deletions
@@ -0,0 +1,247 @@
---
"@effect/ai-amazon-bedrock": minor
"@effect/ai-anthropic": minor
"@effect/ai-google": minor
"@effect/ai-openai": minor
"@effect/ai": minor
---

Refactor the Effect AI SDK and associated provider packages

This pull request contains a complete refactor of the base Effect AI SDK package
as well as the associated provider integration packages to improve flexibility
and enhance ergonomics. Major changes are outlined below.

## Modules

All modules in the base Effect AI SDK have had the leading `Ai` prefix dropped
from their names (except for the `AiError` module).

For example, the `AiLanguageModel` module is now the `LanguageModel` module.

In addition, the `AiInput` module has been renamed to the `Prompt` module.

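In most cases, migrating is a matter of renaming imports. A before / after sketch of the rename (import style assumed; only the module names come from the changes above):

```ts
// Before
import { AiLanguageModel, AiInput } from "@effect/ai"

// After
import { LanguageModel, Prompt } from "@effect/ai"
```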
## Prompts

The `Prompt` module has been completely redesigned with flexibility in mind.

A prompt can now be built either with the constructors exposed from the `Prompt`
module or with raw prompt content parts / messages, which should be familiar to
those coming from other AI SDKs.

In addition, the `system` option has been removed from all `LanguageModel` methods
and must now be provided as part of the prompt.

**Prompt Constructors**

```ts
import { LanguageModel, Prompt } from "@effect/ai"

const textPart = Prompt.makePart("text", {
  text: "What is machine learning?"
})

const userMessage = Prompt.makeMessage("user", {
  content: [textPart]
})

const systemMessage = Prompt.makeMessage("system", {
  content: "You are an expert in machine learning"
})

const program = LanguageModel.generateText({
  prompt: Prompt.fromMessages([
    systemMessage,
    userMessage
  ])
})
```

**Raw Prompt Input**

```ts
import { LanguageModel } from "@effect/ai"

const program = LanguageModel.generateText({
  prompt: [
    { role: "system", content: "You are an expert in machine learning" },
    { role: "user", content: [{ type: "text", text: "What is machine learning?" }] }
  ]
})
```

**NOTE**: Providing a plain string as a prompt is still supported, and will be converted
internally into a user message with a single text content part.

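For example, based on the note above, the following two calls should be equivalent (a sketch for illustration):

```ts
import { LanguageModel } from "@effect/ai"

// A plain string prompt...
const fromString = LanguageModel.generateText({
  prompt: "What is machine learning?"
})

// ...is converted internally into a single user message with one text part
const fromMessages = LanguageModel.generateText({
  prompt: [
    { role: "user", content: [{ type: "text", text: "What is machine learning?" }] }
  ]
})
```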
### Provider-Specific Options

Support has been added for attaching provider-specific options to the parts of a
`Prompt`, allowing provider-specific behavior to be specified when interacting
with large language model providers.

```ts
import { LanguageModel } from "@effect/ai"
import { AnthropicLanguageModel } from "@effect/ai-anthropic"
import { Effect } from "effect"

const Claude = AnthropicLanguageModel.model("claude-sonnet-4-20250514")

const program = LanguageModel.generateText({
  prompt: [
    {
      role: "user",
      content: [{ type: "text", text: "What is machine learning?" }],
      options: {
        anthropic: { cacheControl: { type: "ephemeral", ttl: "1h" } }
      }
    }
  ]
}).pipe(Effect.provide(Claude))
```

## Responses

The `Response` module has also been completely redesigned to support a wider
variety of response parts, particularly when streaming.

### Streaming Responses

When streaming text via the `LanguageModel.streamText` method, you will now
receive a stream of content parts instead of a stream of responses, which should
make it much simpler to filter the stream down to the parts you are interested in.

Additional content parts will also be present in the stream, allowing you to track,
for example, when a text content part starts / ends.

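As a sketch, filtering the stream down to a single kind of part might look like the following (the `"text"` type tag and `text` field used here are assumptions for illustration, not the SDK's confirmed part names):

```ts
import { LanguageModel } from "@effect/ai"
import { Effect, Stream } from "effect"

// Keep only text content parts and log their contents.
// Part/field names are assumed for illustration.
const program = LanguageModel.streamText({
  prompt: "What is machine learning?"
}).pipe(
  Stream.filter((part) => part.type === "text"),
  Stream.runForEach((part) => Effect.log(part.text))
)
```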
### Tool Calls / Tool Call Results

The decoded parts of a `Response` (as returned by the methods of `LanguageModel`)
are now fully type-safe with respect to tool calls / tool call results. Filtering the
content parts of a response to tool calls will narrow the type of the tool call
`params` based on the tool `name`. Similarly, filtering the response to tool call
results will narrow the type of the tool call `result` based on the tool `name`.

```ts
import { LanguageModel, Tool, Toolkit } from "@effect/ai"
import { Effect, Schema } from "effect"

const DadJokeTool = Tool.make("DadJokeTool", {
  parameters: { topic: Schema.String },
  success: Schema.Struct({ joke: Schema.String })
})

const FooTool = Tool.make("FooTool", {
  parameters: { foo: Schema.Number },
  success: Schema.Struct({ bar: Schema.Boolean })
})

const MyToolkit = Toolkit.make(DadJokeTool, FooTool)

const program = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Tell me a dad joke",
    toolkit: MyToolkit
  })

  for (const toolCall of response.toolCalls) {
    if (toolCall.name === "DadJokeTool") {
      //          ^? "DadJokeTool" | "FooTool"
      toolCall.params
      //       ^? { readonly topic: string }
    }
  }

  for (const toolResult of response.toolResults) {
    if (toolResult.name === "DadJokeTool") {
      //            ^? "DadJokeTool" | "FooTool"
      toolResult.result
      //         ^? { readonly joke: string }
    }
  }
})
```

### Provider Metadata

As with provider-specific options, provider-specific metadata is now returned as
part of the response from the large language model provider.

```ts
import { LanguageModel } from "@effect/ai"
import { AnthropicLanguageModel } from "@effect/ai-anthropic"
import { Effect } from "effect"

const Claude = AnthropicLanguageModel.model("claude-sonnet-4-20250514")

const program = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "What is the meaning of life?"
  })

  for (const part of response.content) {
    // When metadata **is not** defined for a content part, accessing the
    // provider's key on the part's metadata will return an untyped record
    if (part.type === "text") {
      const metadata = part.metadata.anthropic
      //    ^? { readonly [x: string]: unknown } | undefined
    }
    // When metadata **is** defined for a content part, accessing the
    // provider's key on the part's metadata will return typed metadata
    if (part.type === "reasoning") {
      const metadata = part.metadata.anthropic
      //    ^? AnthropicReasoningInfo | undefined
    }
  }
}).pipe(Effect.provide(Claude))
```


## Tool Calls

The `Tool` module has been enhanced to support provider-defined tools (e.g.
web search, computer use, etc.). Large language model providers which support
calling their own tools now have a separate module in their provider integration
packages containing definitions for those tools.

These provider-defined tools can be included alongside user-defined tools in
existing `Toolkit`s. Provider-defined tools that require a user-space handler
will raise a type error in the associated `Toolkit` layer if no such handler
is defined.

```ts
import { LanguageModel, Tool, Toolkit } from "@effect/ai"
import { AnthropicTool } from "@effect/ai-anthropic"
import { Schema } from "effect"

const DadJokeTool = Tool.make("DadJokeTool", {
  parameters: { topic: Schema.String },
  success: Schema.Struct({ joke: Schema.String })
})

const MyToolkit = Toolkit.make(
  DadJokeTool,
  AnthropicTool.WebSearch_20250305({ max_uses: 1 })
)

const program = LanguageModel.generateText({
  prompt: "Search the web for a dad joke",
  toolkit: MyToolkit
})
```

## AiError

The `AiError` type has been refactored into a union of the different error types
which can be raised by the Effect AI SDK. Defining separate error types allows
the end user to be given more granular information about the error that occurred.

For now, the following errors have been defined. More error types may be added
over time based upon necessity / use case.

```ts
type AiError =
  | HttpRequestError
  | HttpResponseError
  | MalformedInput
  | MalformedOutput
  | UnknownError
```
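Because each member of the union is distinguishable, failures can be handled granularly. The following self-contained sketch models the same idea with a plain discriminated union (the `_tag` discriminant and the per-error fields are illustrative assumptions, not the SDK's actual shapes):

```typescript
// Illustrative stand-ins for the SDK's error types; fields are assumed
type AiError =
  | { readonly _tag: "HttpRequestError"; readonly description: string }
  | { readonly _tag: "HttpResponseError"; readonly status: number }
  | { readonly _tag: "MalformedInput"; readonly description: string }
  | { readonly _tag: "MalformedOutput"; readonly description: string }
  | { readonly _tag: "UnknownError"; readonly cause: unknown }

// Exhaustive matching narrows each branch to its specific error type
const describeError = (error: AiError): string => {
  switch (error._tag) {
    case "HttpRequestError":
      return `request failed: ${error.description}`
    case "HttpResponseError":
      return `provider responded with status ${error.status}`
    case "MalformedInput":
      return `invalid input: ${error.description}`
    case "MalformedOutput":
      return `invalid output: ${error.description}`
    case "UnknownError":
      return "an unknown error occurred"
  }
}
```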

packages/ai/ai/docgen.json

Lines changed: 1 addition & 3 deletions

```diff
@@ -18,9 +18,7 @@
   "@effect/platform-node": ["../../../../platform-node/src/index.js"],
   "@effect/platform-node/*": ["../../../../platform-node/src/*.js"],
   "@effect/ai": ["../../../ai/src/index.js"],
-  "@effect/ai/*": ["../../../ai/src/*.js"],
-  "@effect/ai-openai": ["../../../ai-openai/src/index.js"],
-  "@effect/ai-openai/*": ["../../../ai-openai/src/*.js"]
+  "@effect/ai/*": ["../../../ai/src/*.js"]
 }
 }
 }
```
