Fix OpenRouter model cache validation for newer models #9601
base: main
```diff
@@ -912,6 +912,23 @@ export const webviewMessageHandler = async (
 	if (result.status === "fulfilled") {
 		routerModels[routerName] = result.value.models

+		// For OpenRouter: preserve the currently selected model if it's not in the cache
+		// This prevents newer models from being marked as invalid after cache refresh
+		if (routerName === "openrouter" && apiConfiguration.openRouterModelId) {
+			const selectedModelId = apiConfiguration.openRouterModelId
+			// Only add if not already in the models list
+			if (!routerModels[routerName][selectedModelId]) {
+				// Create a minimal model info for the selected model
+				// This allows users to continue using newer models that aren't in the API response yet
+				routerModels[routerName][selectedModelId] = {
+					maxTokens: 128000, // Default max tokens
+					contextWindow: 128000, // Default context window
+					supportsPromptCache: false,
+					description: `Model ${selectedModelId} (preserved from configuration)`,
+				}
+			}
+		}
```
Comment on lines +915 to +930 (Contributor, Author):

The preserved model is only added to the in-memory
```diff
 		// Ollama and LM Studio settings pages still need these events. They are not fetched here.
 	} else {
 		// Handle rejection: Post a specific error message for this provider.
```
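Taken out of the message handler, the fallback in this diff amounts to a small helper: if the configured model ID is missing from the freshly fetched model cache, insert a placeholder entry so the selection is not flagged as invalid. The sketch below is an assumption-laden reconstruction, not the repo's code; the trimmed `ModelInfo` shape and the `preserveSelectedModel` name are hypothetical (the real `ModelInfo` type in the codebase carries more fields).

```typescript
// Trimmed, assumed shape of a model info record; the project's real
// ModelInfo type has additional fields (pricing, image support, etc.).
interface ModelInfo {
	maxTokens: number | null
	contextWindow: number
	supportsPromptCache: boolean
	description?: string
}

type ModelRecord = Record<string, ModelInfo>

// Hypothetical helper mirroring the diff's logic: if the selected model
// is absent from the fetched cache, add a minimal placeholder entry so
// newer models aren't marked invalid after a cache refresh. Existing
// entries are never overwritten.
function preserveSelectedModel(models: ModelRecord, selectedModelId?: string): ModelRecord {
	if (selectedModelId && !models[selectedModelId]) {
		models[selectedModelId] = {
			maxTokens: 128000, // placeholder default, matches the diff
			contextWindow: 128000, // placeholder default, matches the diff
			supportsPromptCache: false,
			description: `Model ${selectedModelId} (preserved from configuration)`,
		}
	}
	return models
}
```

Because the helper mutates and returns the same record, it can be dropped into the fulfilled branch without restructuring the surrounding handler.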
Comment (review):

The hardcoded default values (128000 tokens, no prompt cache support) may not accurately represent newer models' actual capabilities. This could lead to incorrect token limits being enforced or missing features. Consider using `null` for `maxTokens` to avoid imposing artificial limits, and document that these are placeholder values until the model appears in the OpenRouter API response.