Provider: Refactor model list fetching with consistent error handling #2246

Closed

faces-of-eth wants to merge 2 commits into block:main from faces-of-eth:issue-2238-fetch-model-list

Conversation

@faces-of-eth
Contributor

faces-of-eth commented Apr 17, 2025

Closes #2238

This PR refactors the model list fetching functionality across all providers (OpenAI, Anthropic, Google) to implement consistent error handling. Key changes:

  • Modified the Provider trait to return Result<Option<Vec>, ProviderError> (see the sketch after this list)
  • Updated all provider implementations to use consistent error handling
  • Removed provider-specific error handling in configure_provider_dialog
  • Added generic error handling with styled error messages
  • Improved user experience with consistent model selection behavior
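
As a rough illustration of the new signature (a sketch only: the method name, the error variants, and the Vec element type are assumptions, not the actual goose code):

// Sketch, not the real goose definitions; the element type is assumed to be String.
use async_trait::async_trait;

#[derive(Debug, thiserror::Error)]
pub enum ProviderError {
    #[error("request failed: {0}")]
    Request(String),
    #[error("could not parse model list: {0}")]
    Parse(String),
}

#[async_trait]
pub trait Provider {
    /// Ok(Some(models)) when the provider returned a list, Ok(None) when it
    /// exposes no model-list endpoint, Err(..) on any transport or parse failure.
    async fn fetch_supported_models(&self) -> Result<Option<Vec<String>>, ProviderError>;
}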

Testing:

  • Verified error handling across all providers
  • Tested model selection with and without available models
  • Confirmed proper error display with styled messages (a rough sketch of this handling follows)
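
Building on the trait sketched above, a generic consumer of the unified result might look roughly like this (function and variable names are hypothetical, not the actual configure_provider_dialog code):

// Hypothetical consumer of the unified result; prompt and styling details omitted.
async fn models_for_dialog(provider: &dyn Provider) -> Option<Vec<String>> {
    match provider.fetch_supported_models().await {
        // A non-empty list came back: offer it in the selection prompt.
        Ok(Some(models)) if !models.is_empty() => Some(models),
        // No list endpoint, or an empty list: fall back to manual model entry.
        Ok(_) => None,
        // Every provider error is rendered the same way (styled in the real dialog).
        Err(err) => {
            eprintln!("Could not fetch models: {err}");
            None
        }
    }
}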

Examples

(Four screenshots are attached in the original PR.)

@yingjiehe-xyz
Contributor

Thanks for the contribution! One issue with using the list-models API: not all of these models can be used in goose, since some don't support tool calls. If we list them all, we may confuse users about which models are supported.

@faces-of-eth
Contributor Author

faces-of-eth commented Apr 18, 2025

Thanks for the contribution! One issue with using the list-models API: not all of these models can be used in goose, since some don't support tool calls. If we list them all, we may confuse users about which models are supported.

Yes! But to me, having to find the model names somewhere else is much more confusing. Fortunately, some providers (like Venice.ai, which I implemented in another PR, #2252) return a model list with capabilities described. For that one in particular, I've filtered to only the tool-calling models for better UX.

Additionally, if you go through the process and select one that's not capable, you'll receive an error message in the console.

@faces-of-eth
Contributor Author

faces-of-eth commented Apr 18, 2025

Per your comment, here are snippets of what the providers return. I've done my best to use the capabilities each exposes in its API:

Venice.ai

{
  "object": "list",
  "type": "text",
  "data": [
    {
      "id": "llama-3.3-70b",
      "type": "text",
      "object": "model",
      "created": 1733768349,
      "owned_by": "venice.ai",
      "model_spec": {
        "availableContextTokens": 65536,
        "capabilities": {
          "optimizedForCode": false,
          "supportsVision": false,
          "supportsFunctionCalling": true,
          "supportsResponseSchema": false,
          "supportsWebSearch": true,
          "supportsReasoning": false
        },
        "constraints": {
          "temperature": { "default": 0.8 },
          "top_p": { "default": 0.9 }
        },
        "offline": false,
        "traits": [
          "function_calling_default",
          "default"
        ],
        "modelSource": "https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct"
      }
    },
   ...
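
Given the capabilities block above, the tool-calling filter mentioned earlier could be a small predicate over the deserialized spec (struct and function names here are illustrative, not the PR's actual code):

// Illustrative only: deserialize the fields shown above and keep the models
// that advertise function calling. Unknown fields are ignored by serde.
use serde::Deserialize;

#[derive(Deserialize)]
struct Capabilities {
    #[serde(rename = "supportsFunctionCalling")]
    supports_function_calling: bool,
}

#[derive(Deserialize)]
struct ModelSpec {
    capabilities: Capabilities,
}

#[derive(Deserialize)]
struct VeniceModel {
    id: String,
    model_spec: ModelSpec,
}

#[derive(Deserialize)]
struct VeniceModelList {
    data: Vec<VeniceModel>,
}

fn tool_calling_models(body: &str) -> serde_json::Result<Vec<String>> {
    let list: VeniceModelList = serde_json::from_str(body)?;
    Ok(list
        .data
        .into_iter()
        .filter(|m| m.model_spec.capabilities.supports_function_calling)
        .map(|m| m.id)
        .collect())
}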

Google

Object {
    "models": Array [
        Object {
            "description": String("A legacy text-only model optimized for chat conversations"),
            "displayName": String("PaLM 2 Chat (Legacy)"),
            "inputTokenLimit": Number(4096),
            "name": String("models/chat-bison-001"),
            "outputTokenLimit": Number(1024),
            "supportedGenerationMethods": Array [
                String("generateMessage"),
                String("countMessageTokens"),
            ],
            "temperature": Number(0.25),
            "topK": Number(40),
            "topP": Number(0.95),
            "version": String("001"),
        },
        ...

OpenAI

Object {
    "data": Array [
        Object {
            "created": Number(1734034239),
            "id": String("gpt-4o-audio-preview-2024-12-17"),
            "object": String("model"),
            "owned_by": String("system"),
        },
        Object {
            "created": Number(1698785189),
            "id": String("dall-e-3"),
            "object": String("model"),
            "owned_by": String("system"),
        },
        Object {
            "created": Number(1705953180),
            "id": String("text-embedding-3-large"),
            "object": String("model"),
            "owned_by": String("system"),
        },
        Object {
            "created": Number(1698798177),
            "id": String("dall-e-2"),
            "object": String("model"),
            "owned_by": String("system"),
        },
        ...

Anthropic

Object {
    "data": Array [
        Object {
            "created_at": String("2025-02-24T00:00:00Z"),
            "display_name": String("Claude 3.7 Sonnet"),
            "id": String("claude-3-7-sonnet-20250219"),
            "type": String("model"),
        },
        Object {
            "created_at": String("2024-10-22T00:00:00Z"),
            "display_name": String("Claude 3.5 Sonnet (New)"),
            "id": String("claude-3-5-sonnet-20241022"),
            "type": String("model"),
        },
        Object {
            "created_at": String("2024-10-22T00:00:00Z"),
            "display_name": String("Claude 3.5 Haiku"),
            "id": String("claude-3-5-haiku-20241022"),
            "type": String("model"),
        },
        ...
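
For comparison, the OpenAI and Anthropic responses above share the same data[].id shape but expose no capability flags, so only the model ids can be pulled out generically (again, names here are illustrative, not the PR's code). Google's list would need its own mapping over models[].name, with supportedGenerationMethods as the closest analogue to a capability flag.

// Illustrative only: both OpenAI and Anthropic return { "data": [ { "id": ... }, ... ] }
// with no capability information to filter on.
use serde::Deserialize;

#[derive(Deserialize)]
struct ListedModel {
    id: String,
}

#[derive(Deserialize)]
struct ModelList {
    data: Vec<ListedModel>,
}

fn model_ids(body: &str) -> serde_json::Result<Vec<String>> {
    Ok(serde_json::from_str::<ModelList>(body)?
        .data
        .into_iter()
        .map(|m| m.id)
        .collect())
}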

zanesq requested review from baxen and yingjiehe-xyz on June 18, 2025 at 21:30
@faces-of-eth
Contributor Author

This was completed in another PR, so closing.

faces-of-eth deleted the issue-2238-fetch-model-list branch on June 23, 2025 at 23:54

Development

Successfully merging this pull request may close these issues.

CLI configure should fetch models from provider
