fix: forward fetch and headers options to AI SDK providers #1297
base: main
Conversation
🦋 Changeset detected. Latest commit: 8384272. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
Greptile Overview

Greptile Summary: This PR forwards the `fetch` and `headers` client options to the AI SDK providers. Key changes:
Issue found:
Confidence Score: 3/5
Important Files Changed

File Analysis
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Stagehand
    participant LLMProvider
    participant getAISDKLanguageModel
    participant AISDKCustomProvider
    participant AISDKClient
    participant OpenAIAPI
    User->>Stagehand: new Stagehand({model: {fetch, headers}})
    User->>Stagehand: act() or extract()
    Stagehand->>LLMProvider: getClient(modelName, clientOptions)
    alt Model includes "/"
        LLMProvider->>LLMProvider: Parse provider/model
        LLMProvider->>getAISDKLanguageModel: call with apiKey, baseURL, headers, fetch
        alt Has apiKey
            getAISDKLanguageModel->>getAISDKLanguageModel: Build providerConfig with all options
            getAISDKLanguageModel->>AISDKCustomProvider: creator(providerConfig)
            AISDKCustomProvider-->>getAISDKLanguageModel: provider instance
            getAISDKLanguageModel->>AISDKCustomProvider: provider(subModelName)
            AISDKCustomProvider-->>LLMProvider: languageModel
        else No apiKey
            Note over getAISDKLanguageModel: ⚠️ headers/fetch ignored
            getAISDKLanguageModel->>AISDKCustomProvider: AISDKProviders[subProvider](subModelName)
            AISDKCustomProvider-->>LLMProvider: languageModel
        end
        LLMProvider->>AISDKClient: new AISdkClient({model: languageModel})
        AISDKClient-->>LLMProvider: client
    else Standard model
        LLMProvider->>LLMProvider: Use provider-specific client
    end
    LLMProvider-->>Stagehand: LLMClient
    Stagehand->>AISDKClient: Make LLM request
    AISDKClient->>OpenAIAPI: HTTP request with custom fetch/headers
    OpenAIAPI-->>AISDKClient: Response
    AISDKClient-->>Stagehand: Result
    Stagehand-->>User: Action completed
```
Additional Comments (1)
- `packages/core/lib/v3/llm/LLMProvider.ts`, lines 136-145 (logic): the `else` path (when no `apiKey` is provided) doesn't receive the `headers` or `fetch` options, so users without explicit apiKeys will have their options silently ignored
2 files reviewed, 1 comment
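The dispatch the reviewer flags can be sketched as follows. This is a hypothetical simplification, not the actual Stagehand source: all names here are illustrative, and the real logic lives in `LLMProvider.ts`.

```typescript
// Hypothetical simplification of the branch flagged in the review
// comment; names and shapes are illustrative, not the actual source.
interface ClientOpts {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
  fetch?: typeof globalThis.fetch;
}

// Returns the config handed to the AI SDK provider creator.
function resolveProviderConfig(opts: ClientOpts): ClientOpts {
  if (opts.apiKey) {
    // apiKey branch: with this PR, headers and fetch ride along with baseURL
    return {
      apiKey: opts.apiKey,
      baseURL: opts.baseURL,
      headers: opts.headers,
      fetch: opts.fetch,
    };
  }
  // else branch flagged by the reviewer: the default provider is used
  // directly, so headers/fetch are silently dropped
  return {};
}
```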
Force-pushed from 6e684de to 8384272
fix: forward fetch and headers options to AI SDK providers
Fixes #1296
why
When using AI SDK provider models (e.g., `openai/gpt-4o-mini`), custom `fetch` and `headers` options from `ClientOptions` are silently ignored, even though:

- the `ClientOptions` TypeScript interface includes them
- the `baseURL` option IS forwarded (inconsistent behavior)

This blocks important use cases:
Users currently receive no error when these options are ignored, making this bug difficult to discover.
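For example, the proxy-authentication use case looks roughly like this. This is a sketch: the wrapper is generic, the header name and token source are hypothetical, and the only assumption about Stagehand is that `ClientOptions` accepts a `fetch` function.

```typescript
// Sketch of a custom fetch for LLM proxy authentication; the header
// name "X-Proxy-Authorization" is hypothetical.
type FetchLike = (
  input: string | URL | Request,
  init?: RequestInit,
) => Promise<Response>;

// Wrap a fetch implementation so every outgoing request carries an
// auth header for the proxy, preserving any caller-supplied headers.
function withProxyAuth(base: FetchLike, token: string): FetchLike {
  return (input, init = {}) => {
    const headers = new Headers(init.headers);
    headers.set("X-Proxy-Authorization", `Bearer ${token}`);
    return base(input, { ...init, headers });
  };
}
```

Before this fix, passing such a wrapper (or a `headers` object) via `ClientOptions` had no effect for `provider/model`-style model names.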
what changed
Modified `packages/core/lib/v3/llm/LLMProvider.ts`:

- Added an `ExtendedClientOptions` interface for type-safe property access
- Extended the `getAISDKLanguageModel()` function signature to accept `headers` and `fetch` parameters
- Added both options to the `providerConfig` object (forwarded to the AI SDK provider)
- Updated `getClient()` to pass the options with type assertions

Changes:

- `ExtendedClientOptions` interface for type-safe property access (lines 20-23)

Type Safety:

- Uses the `ExtendedClientOptions` interface to avoid `@typescript-eslint/no-explicit-any` errors

Compatibility:

- Follows the same pattern as the existing `baseURL` parameter

test plan
Manual Testing
Run the provided test example:
Expected output:
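As a hypothetical stand-in for that example (not the actual `examples/test-custom-fetch.ts`), the forwarding behavior can be checked with a recording fetch; `simulateProviderRequest` below is an invented helper that mimics a provider honoring `headers` and `fetch` from its config.

```typescript
// Hypothetical stand-in: simulate a provider that honors `headers`
// and `fetch` from its config, the behavior this PR enables.
type Config = {
  headers?: Record<string, string>;
  fetch?: typeof globalThis.fetch;
};

// Mimics an AI SDK provider call: use the configured fetch (falling
// back to the global one) and attach the configured headers.
async function simulateProviderRequest(cfg: Config, url: string): Promise<Response> {
  const doFetch = cfg.fetch ?? globalThis.fetch;
  return doFetch(url, { headers: cfg.headers });
}
```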
Runtime Verification ✅
Verified with actual OpenAI API call:
This confirms:
Test File
Added `examples/test-custom-fetch.ts`.

Real-World Use Case
This fix enables our production LLM proxy integration:
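The integration snippet itself isn't shown here; a hedged sketch of the kind of `ClientOptions` such a proxy setup would pass (the URL, header names, and token placeholder are all hypothetical):

```typescript
// Hypothetical ClientOptions for an authenticated LLM proxy; every
// value here is illustrative, not from the actual production setup.
const proxyClientOptions = {
  baseURL: "https://llm-proxy.internal.example/v1", // already forwarded before this PR
  headers: {                                        // forwarded only with this fix
    "X-Billing-Project": "stagehand-prod",
    Authorization: "Bearer <proxy-token>",
  },
};
```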
Code Quality

- Type-safe property access (via `ExtendedClientOptions`)
- Consistent with the existing forwarding pattern (`baseURL`)

Existing Tests
All existing tests continue to pass (no breaking changes).
Context
We currently use a runtime patch in production that modifies the compiled `dist/index.js` to work around this bug. This PR provides a proper source-code fix. The fix enables LLM proxy authentication, which is critical for production deployments where all LLM requests are routed through an authenticated proxy for billing, monitoring, and security.
Happy to help test and refine this fix!
Additional Notes: