17 changes: 17 additions & 0 deletions src/core/webview/webviewMessageHandler.ts
@@ -912,6 +912,23 @@ export const webviewMessageHandler = async (
if (result.status === "fulfilled") {
	routerModels[routerName] = result.value.models

	// For OpenRouter: preserve the currently selected model if it's not in the cache
	// This prevents newer models from being marked as invalid after cache refresh
	if (routerName === "openrouter" && apiConfiguration.openRouterModelId) {
		const selectedModelId = apiConfiguration.openRouterModelId
		// Only add if not already in the models list
		if (!routerModels[routerName][selectedModelId]) {
			// Create a minimal model info for the selected model
			// This allows users to continue using newer models that aren't in the API response yet
			routerModels[routerName][selectedModelId] = {
				maxTokens: 128000, // Default max tokens
				contextWindow: 128000, // Default context window
				supportsPromptCache: false,
				description: `Model ${selectedModelId} (preserved from configuration)`,
			}
Comment on lines +923 to +928
Contributor Author
The hardcoded default values (128000 tokens, no prompt cache support) may not accurately represent a newer model's actual capabilities. This could lead to incorrect token limits being enforced, or to features such as prompt caching being reported as unavailable. Consider using null for maxTokens to avoid imposing artificial limits, and document that these are placeholder values until the model appears in the OpenRouter API response.
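As a rough illustration of that suggestion, the placeholder could be built without a hardcoded cap, along the lines of the sketch below. The `PlaceholderModelInfo` shape and `buildPlaceholderModelInfo` helper are hypothetical names for this example, and whether the project's real `ModelInfo` type accepts a `null` `maxTokens` would need to be checked:

```ts
// Sketch only: mirrors the fields used in this diff; not the project's actual ModelInfo type.
interface PlaceholderModelInfo {
	maxTokens?: number | null
	contextWindow: number
	supportsPromptCache: boolean
	description: string
}

function buildPlaceholderModelInfo(selectedModelId: string): PlaceholderModelInfo {
	return {
		maxTokens: null, // No artificial cap; let downstream logic fall back to provider defaults.
		contextWindow: 128000, // Placeholder until the model appears in the OpenRouter API response.
		supportsPromptCache: false, // Conservative default; the real capability is unknown here.
		description: `Placeholder for ${selectedModelId} (not yet in the cached OpenRouter model list)`,
	}
}
```

The assignment in the diff would then become `routerModels[routerName][selectedModelId] = buildPlaceholderModelInfo(selectedModelId)`.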

		}
	}
Comment on lines +915 to +930
Contributor Author
The preserved model is only added to the in-memory routerModels object sent to the webview, but it's not persisted to the cache. This means the model will be lost on the next cache refresh (which occurs every 5 minutes according to the cache TTL in modelCache.ts). Consider also updating the memory cache and disk cache with the preserved model to ensure it persists across cache refreshes.
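A rough, self-contained sketch of that write-back idea follows. The `memoryCache` stand-in, the `writeBackPreservedModel` helper, and the cache file name are hypothetical, since the real TTL cache and disk layout live in modelCache.ts:

```ts
import * as fs from "fs/promises"
import * as path from "path"

// Stand-in for the memory cache in modelCache.ts; the real one is TTL-based (5 minutes).
const memoryCache = new Map<string, Record<string, unknown>>()

// Hypothetical helper: merge a preserved model into both the memory cache and the on-disk
// cache file for a router, so it survives cache refreshes and restarts.
async function writeBackPreservedModel(
	cacheDir: string,
	routerName: string,
	modelId: string,
	modelInfo: Record<string, unknown>,
): Promise<void> {
	const cached = memoryCache.get(routerName) ?? {}
	cached[modelId] = modelInfo
	memoryCache.set(routerName, cached)

	// Mirror the update to disk (the real file name/location is whatever modelCache.ts uses).
	const filePath = path.join(cacheDir, `${routerName}_models.json`)
	await fs.writeFile(filePath, JSON.stringify(cached, null, 2), "utf8")
}
```

In the actual code this would more likely belong in modelCache.ts itself, so the webview handler does not need to know about cache internals.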

	// Ollama and LM Studio settings pages still need these events. They are not fetched here.
} else {
	// Handle rejection: Post a specific error message for this provider.