
@iozfiliz

fix: forward fetch and headers options to AI SDK providers

Fixes #1296

why

When using AI SDK provider models (e.g., openai/gpt-4o-mini), custom fetch and headers options from ClientOptions are silently ignored, even though:

  • The ClientOptions TypeScript interface includes them
  • The underlying AI SDK providers support them
  • The baseURL option IS forwarded, making the behavior inconsistent

This blocks important use cases:

  • Proxy authentication - Adding custom auth headers for LLM proxies
  • Request logging - Intercepting requests for monitoring/debugging
  • Custom retry logic - Application-specific retry behavior
  • Rate limiting - Custom rate limiting implementations

Users currently receive no error when these options are ignored, making this bug difficult to discover.
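To make the blocked use cases concrete, here is a minimal sketch of the kind of wrapper a user would pass via ClientOptions for proxy authentication. The wrapper name and header names are illustrative, not part of the Stagehand API:

```typescript
// Hypothetical sketch of the blocked proxy-auth use case; makeProxyAuthFetch
// and the header name are illustrative, not Stagehand or AI SDK identifiers.
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

function makeProxyAuthFetch(baseFetch: FetchLike, token: string): FetchLike {
  return async (url, init) => {
    // Inject an auth header for the LLM proxy on every outgoing request.
    const headers = new Headers(init?.headers);
    headers.set("x-custom-proxy-auth", token);
    return baseFetch(url, { ...init, headers });
  };
}
```

Before this fix, passing such a function through ClientOptions for an AI SDK provider model had no effect, with no error raised.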

what changed

Modified: packages/core/lib/v3/llm/LLMProvider.ts

  1. Added ExtendedClientOptions interface for type-safe property access
  2. Updated getAISDKLanguageModel() function signature to accept headers and fetch parameters
  3. Added these options to providerConfig object (forwarded to AI SDK provider)
  4. Updated call site in getClient() to pass the options with type assertions

Changes:

  • Added ExtendedClientOptions interface for type-safe property access (lines 20-23)
  • Function signature: Added 2 optional parameters (lines 101-107)
  • Provider config: Added headers/fetch to config type and conditionals (lines 118-132)
  • Call site: Passed new parameters with type assertions (lines 169-170)
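The shape of the change can be sketched as follows (an assumption based on the description above, not the exact source): the options are folded into the provider config conditionally, mirroring the existing baseURL handling.

```typescript
// Sketch (assumed, not the exact LLMProvider.ts source) of how the forwarded
// options are folded into the provider config, mirroring baseURL handling.
type FetchLike = typeof fetch;

interface ExtendedClientOptions {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
  fetch?: FetchLike;
}

function buildProviderConfig(opts: ExtendedClientOptions): ExtendedClientOptions {
  const config: ExtendedClientOptions = {};
  if (opts.apiKey) config.apiKey = opts.apiKey;
  if (opts.baseURL) config.baseURL = opts.baseURL;
  // New: forward headers and fetch only when the caller supplied them,
  // so callers that omit them see no change in behavior.
  if (opts.headers) config.headers = opts.headers;
  if (opts.fetch) config.fetch = opts.fetch;
  return config;
}
```

Because each option is added only when present, the defaults of the underlying AI SDK provider apply whenever the caller omits them.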

Type Safety:

  • Used ExtendedClientOptions interface to avoid @typescript-eslint/no-explicit-any errors
  • Type assertion is minimal and localized to 2 lines
  • Both OpenAI and Anthropic SDKs support these properties at runtime

Compatibility:

  • ✅ Backward compatible (new parameters are optional)
  • ✅ No breaking changes to existing code
  • ✅ Follows same pattern as existing baseURL parameter

test plan

Manual Testing

Run the provided test example:

# Set up environment
export OPENAI_API_KEY="your-key-here"

# Build Stagehand
pnpm run build

# Run test
pnpm run example test-custom-fetch

Expected output:

✅ Custom fetch called (1 times)
   URL: https://api.openai.com/v1/responses
   Custom header: x-custom-header: test-value
   Custom header: x-custom-proxy-auth: proxy-token-123
✅ SUCCESS: Custom fetch was called 1 times

Runtime Verification ✅

Verified with actual OpenAI API call:

Initializing Stagehand with custom fetch and headers...
Making a simple LLM call via act()...

✅ Custom fetch called (1 times)
   URL: https://api.openai.com/v1/responses
   Custom header: x-custom-header: test-value
   Custom header: x-custom-proxy-auth: proxy-token-123

=== Test Results ===
✅ SUCCESS: Custom fetch was called 1 times
✅ Custom headers detected: x-custom-header: test-value, x-custom-proxy-auth: proxy-token-123

This confirms:

  • Custom fetch function is called for all LLM requests
  • Custom headers are properly forwarded
  • End-to-end flow works correctly

Test File

Added examples/test-custom-fetch.ts which:

  • Creates a custom fetch function that logs when called
  • Adds custom headers to the request
  • Verifies both fetch and headers are properly forwarded
  • Reports success/failure clearly
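A counting wrapper of the kind the example describes might look like this (a hypothetical sketch; helper names are illustrative, not the actual contents of test-custom-fetch.ts):

```typescript
// Hypothetical sketch of a counting fetch like the one the example describes;
// makeCountingFetch is illustrative, not the actual test-custom-fetch.ts code.
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

function makeCountingFetch(baseFetch: FetchLike) {
  let calls = 0;
  const seenHeaders: Record<string, string> = {};
  const wrapped: FetchLike = async (url, init) => {
    calls += 1;
    // Record every header so the test can verify forwarding end to end.
    new Headers(init?.headers).forEach((v, k) => { seenHeaders[k] = v; });
    return baseFetch(url, init);
  };
  return { fetch: wrapped, stats: () => ({ calls, seenHeaders }) };
}
```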

Real-World Use Case

This fix enables our production LLM proxy integration:

// Now works correctly with this fix
const stagehand = new Stagehand({
  model: {
    modelName: "openai/gpt-4o-mini",
    baseURL: "https://my-proxy.com/v1",
    fetch: async (url, options) => {
      // Inject authentication token
      const headers = new Headers(options?.headers);
      headers.set('X-LLM-Request-Token', await getProxyToken());
      return fetch(url, { ...options, headers });
    }
  }
});

Code Quality

  • ✅ Follows existing code style (minimal comments, matches patterns)
  • ✅ Type-safe (proper TypeScript types with ExtendedClientOptions)
  • ✅ Consistent with existing parameter handling (same pattern as baseURL)
  • ✅ No additional dependencies
  • ✅ Preserves all existing functionality
  • ✅ Passes lint and build checks

Existing Tests

All existing tests continue to pass (no breaking changes).


Context

We currently use a runtime patch in production that modifies the compiled dist/index.js to work around this bug. This PR provides a proper source-code fix that:

  • Modifies TypeScript source (not compiled output)
  • Is type-safe and maintainable
  • Has been verified with runtime testing
  • Will eliminate the need for runtime patching

The fix enables LLM proxy authentication, which is critical for production deployments where all LLM requests are routed through an authenticated proxy for billing, monitoring, and security.

Happy to help test and refine this fix!


Additional Notes:

  • This is a simple bug fix (< 30 lines changed)
  • Low risk (optional parameters, follows existing patterns)
  • High value (enables important use cases for production deployments)
  • Includes changeset for CHANGELOG

@changeset-bot

changeset-bot bot commented Nov 20, 2025

🦋 Changeset detected

Latest commit: 8384272

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 2 packages
Name                             Type
@browserbasehq/stagehand         Patch
@browserbasehq/stagehand-evals   Patch


@greptile-apps
Contributor

greptile-apps bot commented Nov 20, 2025

Greptile Overview

Greptile Summary

This PR forwards fetch and headers options from ClientOptions to AI SDK providers (e.g., openai/gpt-4o-mini), enabling proxy authentication, request logging, and custom retry logic.

Key changes:

  • Added ExtendedClientOptions interface for type-safe access to headers and fetch properties
  • Updated getAISDKLanguageModel() to accept and forward these options to provider config
  • Modified call site in getClient() to pass the new parameters with type assertions

Issue found:

  • The else path (lines 136-145), taken when no apiKey is provided, doesn't forward the headers/fetch options. Users who rely on environment variables or default API keys will therefore still have their custom options silently ignored, which partially defeats the purpose of the fix
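One way to close this gap (a sketch under assumed names, not the PR's actual code) is to assemble the provider config unconditionally, so the no-apiKey path carries headers/fetch as well:

```typescript
// Sketch of one way to close the gap flagged above; resolveProviderConfig
// and ProviderConfig are illustrative names, not the real LLMProvider.ts API.
type FetchLike = typeof fetch;

interface ProviderConfig {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
  fetch?: FetchLike;
}

function resolveProviderConfig(
  apiKey?: string,
  baseURL?: string,
  headers?: Record<string, string>,
  fetchFn?: FetchLike,
): ProviderConfig {
  // Assemble the config on every path, so a caller with no explicit apiKey
  // (e.g., relying on env vars) still gets headers/fetch forwarded.
  const config: ProviderConfig = {};
  if (apiKey) config.apiKey = apiKey;
  if (baseURL) config.baseURL = baseURL;
  if (headers) config.headers = headers;
  if (fetchFn) config.fetch = fetchFn;
  return config;
}
```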

Confidence Score: 3/5

  • This PR is mostly safe but has a logical gap that limits its effectiveness for users without explicit API keys
  • Score reflects correct implementation for the apiKey path with proper type safety, but the else path (no apiKey) silently ignores headers/fetch options, which will confuse users who rely on environment variables or default credentials
  • packages/core/lib/v3/llm/LLMProvider.ts requires attention to handle the non-apiKey path

Important Files Changed

File Analysis

Filename                                        Score  Overview
packages/core/lib/v3/llm/LLMProvider.ts         4/5    Added fetch and headers forwarding to AI SDK providers; implementation is correct but only handles the apiKey path
packages/core/examples/test-custom-fetch.ts     5/5    Well-structured test that validates custom fetch and headers forwarding with clear output
.changeset/forward-fetch-headers-options.md     5/5    Properly formatted changeset with clear description of the fix

Sequence Diagram

sequenceDiagram
    participant User
    participant Stagehand
    participant LLMProvider
    participant getAISDKLanguageModel
    participant AISDKCustomProvider
    participant AISDKClient
    participant OpenAIAPI

    User->>Stagehand: new Stagehand({model: {fetch, headers}})
    User->>Stagehand: act() or extract()
    Stagehand->>LLMProvider: getClient(modelName, clientOptions)
    
    alt Model includes "/"
        LLMProvider->>LLMProvider: Parse provider/model
        LLMProvider->>getAISDKLanguageModel: call with apiKey, baseURL, headers, fetch
        
        alt Has apiKey
            getAISDKLanguageModel->>getAISDKLanguageModel: Build providerConfig with all options
            getAISDKLanguageModel->>AISDKCustomProvider: creator(providerConfig)
            AISDKCustomProvider-->>getAISDKLanguageModel: provider instance
            getAISDKLanguageModel->>AISDKCustomProvider: provider(subModelName)
            AISDKCustomProvider-->>LLMProvider: languageModel
        else No apiKey
            Note over getAISDKLanguageModel: ⚠️ headers/fetch ignored
            getAISDKLanguageModel->>AISDKCustomProvider: AISDKProviders[subProvider](subModelName)
            AISDKCustomProvider-->>LLMProvider: languageModel
        end
        
        LLMProvider->>AISDKClient: new AISdkClient({model: languageModel})
        AISDKClient-->>LLMProvider: client
    else Standard model
        LLMProvider->>LLMProvider: Use provider-specific client
    end
    
    LLMProvider-->>Stagehand: LLMClient
    Stagehand->>AISDKClient: Make LLM request
    AISDKClient->>OpenAIAPI: HTTP request with custom fetch/headers
    OpenAIAPI-->>AISDKClient: Response
    AISDKClient-->>Stagehand: Result
    Stagehand-->>User: Action completed


@greptile-apps greptile-apps bot left a comment


Additional Comments (1)

  1. packages/core/lib/v3/llm/LLMProvider.ts, lines 136-145

    logic: the else path (when no apiKey is provided) doesn't receive the headers or fetch options; users without explicit API keys will have their options silently ignored

2 files reviewed, 1 comment


@iozfiliz iozfiliz force-pushed the fix/forward-fetch-headers branch from 6e684de to 8384272 Compare November 20, 2025 19:27

Development

Successfully merging this pull request may close these issues.

Custom fetch and headers options not forwarded to AI SDK providers