Description
Environment Information
Stagehand:
- Language/SDK: TypeScript
- Stagehand version: 3.0.3 (also affects 3.0.1, 3.0.2)
AI Provider:
- Provider: OpenAI (also affects Anthropic, Google, and all AI SDK providers)
- Model: Any model using the "provider/model" format (e.g., openai/gpt-4o-mini)
Issue Description
When providing custom fetch and headers options in ClientOptions, they are not forwarded to the underlying AI SDK provider. This prevents important use cases like:
- Proxy authentication - Adding custom auth headers (e.g., X-LLM-Request-Token)
- Request logging - Intercepting requests for monitoring/debugging
- Custom retry logic - Implementing application-specific retry behavior
- Rate limiting - Custom rate limiting implementations
The options are accepted by Stagehand's TypeScript interface but silently ignored during execution, which is misleading to users.
Steps to Reproduce
- Create a Stagehand instance with custom fetch and headers
- Use an AI SDK provider model (e.g., openai/gpt-4o-mini)
- Make an act() or extract() call
- Observe that the custom fetch function is never called
Minimal Reproduction Code
import { Stagehand } from '@browserbase/stagehand';
// Custom fetch function
const customFetch = async (url: string, options?: RequestInit) => {
  console.log('✅ Custom fetch called:', url);
  const headers = new Headers(options?.headers);
  headers.set('X-Custom-Header', 'my-value');
  return fetch(url, { ...options, headers });
};
// Initialize Stagehand with custom fetch
const stagehand = new Stagehand({
  model: {
    modelName: 'openai/gpt-4o-mini',
    apiKey: process.env.OPENAI_API_KEY,
    fetch: customFetch, // ← This is ignored!
    headers: {
      'X-Custom-Header': 'test' // ← This is also ignored!
    }
  }
});
await stagehand.init();
await stagehand.act('Click the login button');
// Expected: "✅ Custom fetch called: ..." logged
// Actual: Nothing logged (default fetch used instead)
Error Messages / Log trace
No error is thrown. The options are silently ignored, which makes debugging difficult.
Root Cause
File: packages/core/lib/v3/llm/LLMProvider.ts
The getAISDKLanguageModel() function (lines 96-128) only accepts 4 parameters and doesn't forward fetch and headers:
// Current (buggy) implementation
export function getAISDKLanguageModel(
  subProvider: string,
  subModelName: string,
  apiKey?: string,
  baseURL?: string, // ← Only 4 parameters!
) {
  const providerConfig: { apiKey: string; baseURL?: string } = { apiKey };
  if (baseURL) {
    providerConfig.baseURL = baseURL;
  }
  // ❌ fetch and headers are never added to providerConfig
  // (provider-specific creator lookup omitted from this excerpt)
  const provider = creator(providerConfig);
  return provider(subModelName);
}
The call site (lines 146-151) also doesn't pass these options:
const languageModel = getAISDKLanguageModel(
  subProvider,
  subModelName,
  clientOptions?.apiKey,
  clientOptions?.baseURL,
  // ❌ MISSING: clientOptions?.headers
  // ❌ MISSING: clientOptions?.fetch
);
Expected Behavior
Custom fetch and headers options should be forwarded to the AI SDK provider, enabling:
- Custom authentication via fetch interceptors
- Request logging and monitoring
- Proxy integration with custom headers
- Custom retry and error handling logic
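For reference, one way the forwarding could look (a sketch only, not the actual PR; the extra parameter names and their ordering are illustrative):

// Sketched change in LLMProvider.ts - forward headers and fetch into the provider config
export function getAISDKLanguageModel(
  subProvider: string,
  subModelName: string,
  apiKey?: string,
  baseURL?: string,
  headers?: Record<string, string>, // illustrative new parameter
  fetchFn?: typeof fetch,           // illustrative new parameter
) {
  const providerConfig: {
    apiKey?: string;
    baseURL?: string;
    headers?: Record<string, string>;
    fetch?: typeof fetch;
  } = { apiKey };
  if (baseURL) providerConfig.baseURL = baseURL;
  if (headers) providerConfig.headers = headers;
  if (fetchFn) providerConfig.fetch = fetchFn;
  const provider = creator(providerConfig); // creator resolved from subProvider, as today
  return provider(subModelName);
}

// And at the call site:
const languageModel = getAISDKLanguageModel(
  subProvider,
  subModelName,
  clientOptions?.apiKey,
  clientOptions?.baseURL,
  clientOptions?.headers,
  clientOptions?.fetch,
);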
Additional Context
- The ClientOptions type (defined as OpenAIClientOptions | AnthropicClientOptions) already includes fetch and headers
- All underlying AI SDK providers (OpenAI, Anthropic, Google, etc.) support these options (see the sketch below)
- The baseURL option IS forwarded correctly (inconsistent behavior)
- This is a documented feature of the Vercel AI SDK that Stagehand should expose
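To illustrate that the underlying SDK already accepts these options, here is how they are passed directly to an AI SDK provider (a sketch using @ai-sdk/openai; the proxy URL and token are placeholders):

import { createOpenAI } from '@ai-sdk/openai';

// The AI SDK provider settings accept baseURL, headers, and a custom fetch,
// so Stagehand only needs to pass them through.
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://my-llm-proxy.example.com/v1', // placeholder proxy URL
  headers: { 'X-LLM-Request-Token': process.env.LLM_PROXY_TOKEN ?? '' }, // placeholder token
  fetch: customFetch, // the customFetch from the reproduction above is honored here
});
const model = openai('gpt-4o-mini');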
Use Case: We need this for LLM proxy authentication, where all requests must include custom authentication headers. We are currently working around it with a runtime patch (see below).
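For illustration, a runtime patch of this kind looks roughly like the following (a simplified sketch, not our exact patch; LLM_PROXY_TOKEN is a placeholder):

// Workaround: patch globalThis.fetch so every outgoing request carries the proxy auth header.
// This affects ALL fetch calls in the process, which is exactly why first-class
// fetch/headers forwarding in Stagehand would be preferable.
const originalFetch = globalThis.fetch;
globalThis.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const headers = new Headers(init?.headers);
  headers.set('X-LLM-Request-Token', process.env.LLM_PROXY_TOKEN ?? '');
  return originalFetch(input, { ...init, headers });
};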
Related Issues
None found (searched for "fetch", "headers", "proxy")
Note: I have a PR ready with the fix and test case.