
fix bugs #206

Closed

ding113 wants to merge 3 commits into dev from main

Conversation

@ding113 ding113 commented Nov 26, 2025

Summary

This PR fixes multiple bugs related to provider statistics, group settings persistence, and usage records filtering. It also updates model pricing data and bumps the version to 0.3.10.

Problem

Several issues were affecting user experience:

  1. Provider statistics mismatch (#204, "provider management statistics and usage records don't match"): When a request was retried on a different provider after an initial failure, the statistics were still attributed to the first provider, causing discrepancies between provider management statistics and usage records.

  2. Group settings not saved ([bug] group not saved #201): Provider group name and user group settings were not being persisted to the database after modification.

  3. Usage records date filtering (#198, "the usage records filter conditions are wrong"): The current day's usage records were not displayed when filtering by a date range (e.g., from 00:00 to the current time).
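The attribution logic behind issue 1 can be illustrated with a minimal sketch. All names here (`Attempt`, `attributeStats`) are assumptions for illustration, not the repository's actual code; the point is that each provider is credited with its own attempt, so a failed first provider and a successful fallback provider are counted separately.

```typescript
// Illustrative sketch only: `Attempt` and `attributeStats` are hypothetical
// names, not the project's real types. Statistics are keyed by the provider
// that handled each attempt, so failures stay with the failing provider
// instead of everything landing on the first provider tried.

interface Attempt {
  provider: string; // provider that handled this attempt
  ok: boolean;      // whether the attempt succeeded
}

type Stats = Record<string, { requests: number; failures: number }>;

function attributeStats(attempts: Attempt[]): Stats {
  const stats: Stats = {};
  for (const a of attempts) {
    stats[a.provider] ??= { requests: 0, failures: 0 };
    stats[a.provider].requests += 1;            // credit the attempting provider
    if (!a.ok) stats[a.provider].failures += 1; // failure stays with this provider
  }
  return stats;
}
```

With this shape, a failure on provider A followed by a successful fallback to provider B yields one failed attempt for A and one successful attempt for B, matching what the usage records show.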

Solution

  • Fixed provider statistics attribution during fallback/retry scenarios
  • Corrected the group settings persistence logic in the database layer
  • Fixed the date-range filtering query for usage records so the current day's data is properly included
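The date-range fix can be sketched as follows. The function and field names (`UsageRecord`, `filterByDateRange`) are assumptions, not the actual query code: the idea is that comparing against the raw "to" date excludes everything after that day's 00:00 instant, so the upper bound is normalized to the end of the selected day before comparing.

```typescript
// Hypothetical illustration of the date-range fix; `UsageRecord` and
// `filterByDateRange` are made-up names. Comparing `timestamp <= endOfDay(to)`
// instead of `timestamp < to` keeps the current day's records visible.

interface UsageRecord {
  model: string;
  timestamp: Date;
}

function endOfDay(d: Date): Date {
  const e = new Date(d);
  e.setHours(23, 59, 59, 999); // last millisecond of the selected day, local time
  return e;
}

function filterByDateRange(records: UsageRecord[], from: Date, to: Date): UsageRecord[] {
  const upper = endOfDay(to); // include the whole final day, not just its 00:00 instant
  return records.filter((r) => r.timestamp >= from && r.timestamp <= upper);
}
```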

Changes

  • CHANGELOG.md: Updated with version 0.3.10 release notes
  • VERSION: Bumped from 0.3.9 to 0.3.10
  • public/seed/litellm-prices.json: Added pricing data for new Claude models:
    • anthropic.claude-opus-4-5-20251101-v1:0 (Bedrock)
    • anthropic.claude-sonnet-4-5-20250929-v1:0 (Bedrock)
    • claude-sonnet-4-5-20250929-v1:0 (Bedrock)
    • claude-opus-4-5-20251101 (Anthropic)
    • us.anthropic.claude-opus-4-5-20251101-v1:0 (Bedrock US)
    • text-embedding-ada-002-v2 (OpenAI)
    • Enhanced capabilities for existing Vertex AI Claude models

Testing

  • Manual testing performed for provider statistics accuracy
  • Manual testing performed for group settings persistence
  • Manual testing performed for date filtering on current day
  • No breaking changes

Related Issues

Closes #204, #201, #198

@ding113 ding113 closed this Nov 26, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @ding113, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a significant set of bug fixes and minor enhancements across various components of the application. The changes aim to improve the stability, reliability, and user experience by resolving several reported issues, refining API testing mechanisms, and updating model configurations. The update also includes the addition of new AI models and improved logging practices.

Highlights

  • Bug Fixes: Addressed numerous issues including Gemini model redirect failures, incorrect model redirect data saving, provider multi-tag matching problems, error rules regex and cache refresh issues, a 'Body has already been read' error in proxy fallback, an ErrorRuleDetector race condition, duplicate auth headers in Anthropic API tests, incorrect Codex API test request body format, Pino logger timestamp configuration, and data import compatibility.
  • API Test Enhancements: Improved provider API testing with better streaming response detection and enhanced error parsing. A new configurable API test timeout via API_TEST_TIMEOUT_MS environment variable has also been added.
  • Model Configuration Updates: Updated default provider timeout to unlimited and adjusted the streaming silent period timeout from 10 seconds to 300 seconds. Several new Anthropic Claude models and an OpenAI embedding model have been added to the price list, along with updated capabilities for existing models.
  • User Experience Improvements: Enhanced the display of usage records status code colors for better visibility and clarified provider response model labeling.
  • Version Bump: The project version has been updated from 0.3.9 to 0.3.10.
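The configurable test timeout mentioned above could be read along these lines. This is a sketch under assumptions: only the API_TEST_TIMEOUT_MS variable name comes from the PR, while the helper name and the 30-second default are invented here.

```typescript
// Sketch only: `apiTestTimeoutMs` and the 30s default are assumptions; the
// real code may differ. Missing, non-numeric, or non-positive values fall
// back to the default.

function apiTestTimeoutMs(
  env: Record<string, string | undefined>,
  fallbackMs = 30_000
): number {
  const raw = Number(env.API_TEST_TIMEOUT_MS);
  return Number.isFinite(raw) && raw > 0 ? raw : fallbackMs;
}
```

In a Node app this would typically be called as `apiTestTimeoutMs(process.env)` when building the API test request.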

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@ding113 ding113 added size/S Small PR (< 200 lines) bug Something isn't working labels Nov 26, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a version bump, updates the changelog, and adds several new AI model definitions to the pricing data. The changes are extensive and seem to cover multiple bug fixes and enhancements. My review focused on the consistency and maintainability of the new data added to the litellm-prices.json file. I've identified a duplicated model entry and a potential inconsistency in model capabilities that could affect maintainability. The other changes appear to be in good order.

Comment on lines +6525 to +6549
"claude-sonnet-4-5-20250929-v1:0": {
"cache_creation_input_token_cost": 3.75e-06,
"cache_read_input_token_cost": 3e-07,
"input_cost_per_token": 3e-06,
"input_cost_per_token_above_200k_tokens": 6e-06,
"output_cost_per_token_above_200k_tokens": 2.25e-05,
"cache_creation_input_token_cost_above_200k_tokens": 7.5e-06,
"cache_read_input_token_cost_above_200k_tokens": 6e-07,
"litellm_provider": "bedrock",
"max_input_tokens": 200000,
"max_output_tokens": 64000,
"max_tokens": 64000,
"mode": "chat",
"output_cost_per_token": 1.5e-05,
"supports_assistant_prefill": true,
"supports_computer_use": true,
"supports_function_calling": true,
"supports_pdf_input": true,
"supports_prompt_caching": true,
"supports_reasoning": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"supports_vision": true,
"tool_use_system_prompt_tokens": 159
},

medium

There's an inconsistency in the properties for this model compared to its anthropic. prefixed counterpart (anthropic.claude-sonnet-4-5-20250929-v1:0). This entry is missing the search_context_cost_per_query object. If this is an oversight and the model supports this feature via the bedrock provider, please consider adding it for consistency and to ensure correct cost calculation. If the feature is not supported, this difference is fine.

Comment on lines +23181 to +23206
"us.anthropic.claude-opus-4-5-20251101-v1:0": {
"cache_creation_input_token_cost": 6.25e-06,
"cache_read_input_token_cost": 5e-07,
"input_cost_per_token": 5e-06,
"litellm_provider": "bedrock_converse",
"max_input_tokens": 200000,
"max_output_tokens": 64000,
"max_tokens": 64000,
"mode": "chat",
"output_cost_per_token": 2.5e-05,
"search_context_cost_per_query": {
"search_context_size_high": 0.01,
"search_context_size_low": 0.01,
"search_context_size_medium": 0.01
},
"supports_assistant_prefill": true,
"supports_computer_use": true,
"supports_function_calling": true,
"supports_pdf_input": true,
"supports_prompt_caching": true,
"supports_reasoning": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"supports_vision": true,
"tool_use_system_prompt_tokens": 159
},

medium

There appears to be a duplicated model definition. The entry for us.anthropic.claude-opus-4-5-20251101-v1:0 is identical to the entry for anthropic.claude-opus-4-5-20251101-v1:0 added earlier in this file. This duplication can lead to maintenance issues. If these are meant to be the same, please consider removing one. If they are for different regions with potentially different pricing or capabilities in the future, it would be good to either reflect the current differences or add a comment clarifying this.
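The duplication the review points out can be checked mechanically. Below is a small hypothetical helper (not part of the PR; the name `diffEntries` is invented) that diffs two entries of the parsed pricing JSON field by field:

```typescript
// Hypothetical helper for review purposes only. Given the parsed
// litellm-prices.json object, it returns the keys on which two model
// entries disagree; an empty result means the entries are exact duplicates.

type PriceTable = Record<string, Record<string, unknown>>;

function diffEntries(prices: PriceTable, a: string, b: string): string[] {
  const keys = new Set([
    ...Object.keys(prices[a] ?? {}),
    ...Object.keys(prices[b] ?? {}),
  ]);
  // Compare serialized values so nested objects (e.g. search_context_cost_per_query) are covered.
  return [...keys].filter(
    (k) => JSON.stringify(prices[a]?.[k]) !== JSON.stringify(prices[b]?.[k])
  );
}
```

Run against the parsed file with the `anthropic.`-prefixed and `us.anthropic.`-prefixed Opus entries, an empty result would confirm the reviewer's observation that they are identical.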

Owner Author

@ding113 ding113 left a comment


📋 Code Review Summary

This PR merges release changes from main to dev, containing version 0.3.10 updates. The changes are exclusively documentation (CHANGELOG.md), version metadata (VERSION), and static model pricing data (litellm-prices.json). No executable code changes are present.

🔍 Issues Found

  • Critical (🔴): 0 issues
  • High (🟠): 0 issues
  • Medium (🟡): 0 issues
  • Low (🟢): 0 issues

🎯 Priority Actions

No significant issues identified. The PR consists entirely of:

  1. Version bump from 0.3.9 to 0.3.10
  2. Changelog documentation for PR #199 bug fixes and features
  3. New model pricing entries (Claude Opus 4.5, Sonnet 4.5 variants, embedding model)
  4. Capability flag updates for existing models

💡 General Observations

  • The JSON data is syntactically valid
  • New model entry structures are consistent with existing patterns
  • All changes are backward-compatible static data additions

🤖 Automated review by Claude AI - focused on identifying issues for improvement


