Conversation
…ature rectifier Extend the thinking signature rectifier to detect and handle the Anthropic API error when thinking/redacted_thinking blocks have been modified from their original response. This error occurs when clients inadvertently modify these blocks in multi-turn conversations. The rectifier will now remove these blocks and retry the request, similar to how it handles other thinking-related signature errors. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
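The detect-and-strip flow described above can be sketched roughly as follows; the error-message substring, the type shapes, and the helper names here are illustrative assumptions, not the project's actual identifiers.

```typescript
// Hypothetical sketch: names and the exact error substring are assumptions.
type ContentBlock = { type: string; [key: string]: unknown };
type Message = { role: "user" | "assistant"; content: ContentBlock[] };

// Detect the API error described above: thinking/redacted_thinking blocks
// that were modified after the original response.
function isModifiedThinkingError(message: string): boolean {
  const lower = message.toLowerCase();
  return lower.includes("thinking") && lower.includes("cannot be modified");
}

// Strip thinking blocks so the request can be retried without them.
function stripThinkingBlocks(messages: Message[]): Message[] {
  return messages.map((m) => ({
    ...m,
    content: m.content.filter(
      (b) => b.type !== "thinking" && b.type !== "redacted_thinking"
    ),
  }));
}

const history: Message[] = [
  {
    role: "assistant",
    content: [
      { type: "thinking", thinking: "..." },
      { type: "text", text: "answer" },
    ],
  },
];

const cleaned = stripThinkingBlocks(history);
```

On detection the rectifier would strip the blocks and retry, mirroring how other thinking-related signature errors are handled.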
Bumps the npm_and_yarn group with 1 update in the / directory: [jspdf](https://github.com/parallax/jsPDF). Updates `jspdf` from 3.0.4 to 4.1.0 - [Release notes](https://github.com/parallax/jsPDF/releases) - [Changelog](https://github.com/parallax/jsPDF/blob/master/RELEASE.md) - [Commits](parallax/jsPDF@v3.0.4...v4.1.0) --- updated-dependencies: - dependency-name: jspdf dependency-version: 4.1.0 dependency-type: direct:production dependency-group: npm_and_yarn ... Signed-off-by: dependabot[bot] <support@github.com>
…Words (#710) * fix: hot-reload request filters via globalThis singleton pattern EventEmitter and RequestFilterEngine now use globalThis caching to ensure the same instance is shared across different Next.js worker contexts. This fixes the issue where filter changes required Docker restart. Added diagnostic logging for event subscription and propagation. * fix(redis): wait for subscriber connection ready before subscribe - ensureSubscriber now returns Promise<Redis>, waits for 'ready' event - subscribeCacheInvalidation returns null on failure instead of noop - RequestFilterEngine checks cleanup !== null before logging success - Fixes false "Subscribed" log when Redis connection fails * feat(sensitive-words): add hot-reload via Redis pub/sub Enable real-time cache invalidation for sensitive words detector, matching the pattern used by request-filter-engine and error-rule-detector. * fix(redis): harden cache invalidation subscriptions Ensure sensitive-words CRUD emits update events so hot-reload propagates across workers. Roll back failed pub/sub subscriptions, add retry/timeout coverage, and avoid sticky provider-cache subscription state. * fix(codex): bump default User-Agent fallback Update the hardcoded Codex UA used when requests lack an effective user-agent (e.g. filtered out). Keep unit tests in sync with the new default. * fix(redis): resubscribe cache invalidation after reconnect Clear cached subscription state on disconnect and resubscribe on ready so cross-worker cache invalidation survives transient Redis reconnects. Add unit coverage, avoid misleading publish logs, track detector cleanup handlers, and translate leftover Russian comments to English. * fix(sensitive-words): use globalThis singleton detector Align SensitiveWordDetector with existing __CCH_* singleton pattern to avoid duplicate instances across module reloads. Extend singleton unit tests to cover the detector. 
* chore: format code (req-fix-dda97fd) * fix: address PR review comments - pubsub.ts: use `once` instead of `on` for ready event to prevent duplicate resubscription handlers on reconnect - forwarder.ts: extract DEFAULT_CODEX_USER_AGENT constant - provider-cache.ts: wrap subscribeCacheInvalidation in try/catch - tests: use exported constant instead of hardcoded UA string * fix(redis): resubscribe across repeated reconnects Ensure pub/sub resubscribe runs on every reconnect, extend unit coverage, and keep emitRequestFiltersUpdated resilient when logger import fails. --------- Co-authored-by: John Doe <johndoe@example.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
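The `globalThis` singleton pattern these commits describe can be sketched as below; the `__CCH_EVENT_EMITTER` key and the emitter shape are assumptions based on the commit messages, not the project's actual code.

```typescript
// Minimal emitter standing in for the project's EventEmitter.
class SimpleEmitter {
  private handlers = new Map<string, Array<(payload: unknown) => void>>();
  on(event: string, fn: (payload: unknown) => void): void {
    const list = this.handlers.get(event) ?? [];
    list.push(fn);
    this.handlers.set(event, list);
  }
  emit(event: string, payload: unknown): void {
    for (const fn of this.handlers.get(event) ?? []) fn(payload);
  }
}

// Cache the instance on globalThis so every module reload / worker context
// within the same process shares one emitter instead of creating its own.
const g = globalThis as typeof globalThis & { __CCH_EVENT_EMITTER?: SimpleEmitter };

function getEventEmitter(): SimpleEmitter {
  if (!g.__CCH_EVENT_EMITTER) {
    g.__CCH_EVENT_EMITTER = new SimpleEmitter();
  }
  return g.__CCH_EVENT_EMITTER;
}
```

Because module-level state is re-created on hot reload while `globalThis` survives it, subscriptions registered by one worker context remain visible to the others, which is what makes filter changes propagate without a Docker restart.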
…tion (#705) (#716) The `extractUsageMetrics` function was missing support for OpenAI chat completion format fields (`prompt_tokens`/`completion_tokens`), causing token statistics to not be recorded for OpenAI-compatible providers. Changes: - Add `prompt_tokens` -> `input_tokens` mapping - Add `completion_tokens` -> `output_tokens` mapping - Preserve priority: Claude > Gemini > OpenAI format - Add 5 unit tests for OpenAI format handling Closes #705 Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
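A sketch of the mapping with the stated priority (Claude > Gemini > OpenAI). The function name and the OpenAI/Claude field names follow the commit message; the Gemini `usageMetadata` field names and the return shape are assumptions.

```typescript
interface UsageMetrics {
  input_tokens: number | null;
  output_tokens: number | null;
}

function extractUsageMetrics(usage: Record<string, unknown>): UsageMetrics {
  const num = (v: unknown): number | null => (typeof v === "number" ? v : null);
  // Priority: Claude (input_tokens) > Gemini (promptTokenCount) > OpenAI (prompt_tokens)
  const input =
    num(usage.input_tokens) ?? num(usage.promptTokenCount) ?? num(usage.prompt_tokens);
  const output =
    num(usage.output_tokens) ?? num(usage.candidatesTokenCount) ?? num(usage.completion_tokens);
  return { input_tokens: input, output_tokens: output };
}
```

The `??` chain is what preserves the priority: an OpenAI-format field only wins when no Claude or Gemini field is present.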
Fixes #678 - Currency display unit configuration was not applied. Root cause: - `users-page-client.tsx` hardcoded `currencyCode="USD"` - `UserLimitBadge` and `LimitStatusIndicator` had hardcoded `unit="$"` default - `big-screen/page.tsx` used hardcoded "$" in multiple places Changes: - Add `getCurrencySymbol()` helper function to currency.ts - Fetch system settings in `users-page-client.tsx` and pass to table - Pass `currencySymbol` from `user-key-table-row.tsx` to limit badges - Remove hardcoded "$" defaults from badge components - Update big-screen page to fetch settings and use dynamic symbol - Add unit tests for `getCurrencySymbol` Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
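A minimal sketch of what a `getCurrencySymbol()` helper might look like; the `CURRENCY_CONFIG` contents here are illustrative, not the project's actual table.

```typescript
const CURRENCY_CONFIG = {
  USD: { symbol: "$" },
  CNY: { symbol: "¥" },
  EUR: { symbol: "€" },
  GBP: { symbol: "£" },
} as const;

type CurrencyCode = keyof typeof CURRENCY_CONFIG;

function getCurrencySymbol(code: string): string {
  const entry = CURRENCY_CONFIG[code as CurrencyCode];
  // Fall back to "$" for unknown codes instead of throwing,
  // mirroring the previously hardcoded default.
  return entry ? entry.symbol : "$";
}
```

Components then receive the symbol as a prop (e.g. `currencySymbol={getCurrencySymbol(settings.currencyCode)}`) instead of a hardcoded `"$"`.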
…iders (#721) * feat(gemini): add Google Search web access preference for Gemini providers Add provider-level preference for Gemini API type providers to control Google Search (web access) tool injection: - inherit: Follow client request (default) - enabled: Force inject googleSearch tool into request - disabled: Force remove googleSearch tool from request Changes: - Add geminiGoogleSearchPreference field to provider schema - Add GeminiGoogleSearchPreference type and validation - Implement applyGeminiGoogleSearchOverride with audit trail - Add UI controls in provider form (Gemini Overrides section) - Add i18n translations for 5 languages (en, zh-CN, zh-TW, ja, ru) - Integrate override logic in proxy forwarder for Gemini requests - Add 22 unit tests for the override logic Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix(gemini): address code review feedback - Use explicit else-if for disabled preference check (gemini-code-assist) - Use i18n for SelectValue placeholder instead of hardcoded string (coderabbitai) - Sync overridden body back to session.request.message for log consistency (coderabbitai) - Persist Gemini special settings immediately, matching Anthropic pattern (coderabbitai) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix(gemini): use strict types for provider config and audit - Narrow preference type to "enabled" | "disabled" (exclude unreachable "inherit") - Use ProviderType and GeminiGoogleSearchPreference types instead of string Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
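The three preference modes described above can be sketched as a pure function over a Gemini request's tools array; the mode names follow the PR description, but the request-shape handling is an assumption.

```typescript
type Preference = "inherit" | "enabled" | "disabled";
type GeminiTool = Record<string, unknown>;

function applyGoogleSearchPreference(
  tools: GeminiTool[],
  preference: Preference
): GeminiTool[] {
  if (preference === "inherit") return tools; // follow the client request
  const withoutSearch = tools.filter((t) => !("googleSearch" in t));
  if (preference === "disabled") return withoutSearch; // force-remove
  return [...withoutSearch, { googleSearch: {} }]; // force-inject exactly one
}
```

Removing any existing `googleSearch` entry before injecting keeps the "enabled" path idempotent, so a client that already sent the tool does not end up with duplicates.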
* fix(api): pass the /api/actions auth session through so getUsers no longer returns empty data * fix(auth): make scoped sessions inherit allowReadOnlyAccess semantics and support explicit internal downgrade checks
* fix(session): bound active_sessions zsets by env ttl Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode) Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai> * fix(rate-limit): pass session ttl to lua cleanup Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode) Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai> * fix(session): validate SESSION_TTL env and prevent ZSET leak on invalid values - Add input validation for SESSION_TTL (reject NaN, 0, negative; default 300) - Guard against invalid TTL in Lua script to prevent clearing all sessions - Use dynamic EXPIRE based on SESSION_TTL instead of hardcoded 3600s - Add unit tests for TTL validation and dynamic expiry behavior --------- Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
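The `SESSION_TTL` validation described above (reject NaN, 0, and negative values; default 300) can be sketched as follows; the env-var name matches the commits, but the helper itself is illustrative.

```typescript
const DEFAULT_SESSION_TTL_SECONDS = 300;

function resolveSessionTtl(raw: string | undefined): number {
  if (raw === undefined || raw.trim() === "") return DEFAULT_SESSION_TTL_SECONDS;
  const parsed = Number(raw);
  // Guard against values that would make the ZSET cleanup window meaningless
  // (NaN, zero, negative) and fall back to the default instead.
  if (!Number.isFinite(parsed) || parsed <= 0) return DEFAULT_SESSION_TTL_SECONDS;
  return Math.floor(parsed);
}
```

The validated value would then be passed both to the Lua cleanup script and to the dynamic `EXPIRE`, so an invalid env value can never translate into clearing all sessions.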
* feat(providers): add endpoint status mapping * feat(providers): add endpoint pool hover * feat(providers): show vendor endpoints in list rows * feat(providers): extract vendor endpoint CRUD table * chore(i18n): add provider endpoint UI strings * fix(providers): integrate endpoint pool into provider form
…ditions (#730) * fix(provider): wrap provider create/update endpoint sync in transactions Provider create and update operations now run vendor resolution and endpoint sync inside database transactions to prevent race conditions that could leave orphaned or inconsistent endpoint rows. Key changes: - createProvider: wrap vendor + insert + endpoint seed in a single tx - updateProvider: wrap vendor + update + endpoint sync in a single tx - Add syncProviderEndpointOnProviderEdit for atomic URL/type/vendor migration with in-place update, soft-delete, and conflict handling - Vendor cleanup failures degrade to warnings instead of propagating - Add comprehensive unit and integration tests for sync edge cases Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(provider): defer endpoint circuit reset until transaction commit Avoid running endpoint circuit reset side effects inside DB transactions to prevent rollback inconsistency. Run resets only after commit and add regression tests for deferred reset behavior in helper and provider update flows. * fix(provider): distinguish noop from created-next in endpoint sync action label When ensureNextEndpointActive() returns "noop" (concurrent transaction already created an active next endpoint), the action was incorrectly labelled "kept-previous-and-created-next". Add a new "kept-previous-and-kept-next" action to ProviderEndpointSyncAction and use a three-way branch so callers and logs reflect the true outcome. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
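The "defer endpoint circuit reset until commit" idea above can be sketched generically: collect side effects during the transaction and run them only after it resolves. The transaction API here is a stand-in, not the project's actual db layer.

```typescript
// Run a "transaction" callback that may register deferred side effects;
// effects run only after the callback resolves (i.e. after commit),
// and a rejection (rollback) skips them entirely.
async function withDeferredEffects<T>(
  runTx: (defer: (effect: () => void) => void) => Promise<T>
): Promise<T> {
  const effects: Array<() => void> = [];
  const result = await runTx((effect) => effects.push(effect));
  for (const effect of effects) effect();
  return result;
}
```

This keeps side effects like circuit resets out of the transaction body, so a rollback can never leave the circuit state inconsistent with the database.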
📝 Walkthrough
Introduces the Gemini Google Search preference and the endpoint pool/endpoint management UI, extends the database schema and validation, and adds scoped-auth session support, Redis pub/sub wiring for sensitive-word events, a configurable session TTL, cost-column visibility, currency-symbol utilities, and a large set of related unit and integration tests. Changes
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~140 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
✅ Passed checks (4 passed)
Summary of Changes
Hello @ding113, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This release focuses on strengthening the application's infrastructure for managing AI providers and their endpoints. It introduces a robust system for handling multiple endpoints per vendor, enabling more granular control and improved resilience. Key changes include new UI elements for endpoint configuration, the ability to override specific Gemini API parameters, and significant under-the-hood improvements to ensure data consistency during provider lifecycle events. Additionally, it refines cache invalidation, session management, and introduces dynamic currency display for a better user experience. Highlights
  });
  endpointCandidates.push(...preferred.map((e) => ({ endpointId: e.id, baseUrl: e.url })));
} catch (error) {
  endpointSelectionError =
Strict policy doesn’t exit
When shouldEnforceStrictEndpointPool is true and endpointCandidates.length === 0, the code sets attemptCount = maxAttemptsPerProvider but does not continue/break. The function then continues and can proceed with an empty endpointCandidates array, leading to undefined endpoint selection / request base URL downstream.
Suggested change:
failedProviderIds.push(currentProvider.id);
attemptCount = maxAttemptsPerProvider;
continue;
Prompt To Fix With AI
Path: `src/app/v1/_lib/proxy/forwarder.ts`, lines 252-255. How can I resolve this? If you propose a fix, please make it concise.

```ts
// Prefer the request-scoped session injected by the adapter
// (covers non-native Next contexts such as /api/actions)
const scoped = getScopedAuthContext();
if (scoped) {
  // Key: a scoped session must follow its creation-time semantics,
  // while still allowing explicit internal downgrade checks
  const effectiveAllowReadOnlyAccess = options?.allowReadOnlyAccess ?? scoped.allowReadOnlyAccess;
  if (!effectiveAllowReadOnlyAccess && !scoped.session.key.canLoginWebUi) {
    return null;
  }
  return scoped.session;
```
Caller can widen access
In the scoped-session path, effectiveAllowReadOnlyAccess uses options?.allowReadOnlyAccess ?? scoped.allowReadOnlyAccess, which lets an inner caller override the adapter’s original allowReadOnlyAccess decision. That can effectively “upgrade” a scoped read-only session to allow usage where the adapter meant to enforce stricter semantics. Consider making the scoped context authoritative (or only allow further restriction).
Suggested change:
const effectiveAllowReadOnlyAccess = scoped.allowReadOnlyAccess && (options?.allowReadOnlyAccess ?? true);
Prompt To Fix With AI
Path: `src/lib/auth.ts`, lines 162-170. How can I resolve this? If you propose a fix, please make it concise.

```ts
}

function onReady(): void {
  cleanup();
```
Clearing subscriptions breaks resubscribe
On reconnect onReady() does subscribedChannels.clear() and then triggers resubscribeAll(sub), but resubscribeAll only subscribes channels not in subscribedChannels. Because subscribedChannels is cleared, this is fine; the real issue is subscribedChannels is also cleared on close/end without awaiting/serializing against any in-flight subscribeCacheInvalidation() calls, so you can end up with subscribedChannels missing a channel while Redis is subscribed (or vice versa), causing missed resubscribe / duplicate subscribe attempts.
A safer pattern is to (a) avoid using subscribedChannels as the source of truth, or (b) guard mutations with a single reconnect/subscription mutex and always derive desired channels from subscriptions.
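Option (b) above can be sketched as a single promise-chain mutex that serializes subscribe/resubscribe, so the desired-channel set can never race the Redis client state; the class and bookkeeping here are illustrative, not the project's actual code.

```typescript
class SubscriptionGate {
  private chain: Promise<void> = Promise.resolve();
  private desired = new Set<string>();

  // Serialize every mutation behind the same promise chain.
  private run<T>(task: () => Promise<T>): Promise<T> {
    const next = this.chain.then(task);
    this.chain = next.then(
      () => undefined,
      () => undefined
    );
    return next;
  }

  subscribe(channel: string, doSubscribe: (ch: string) => Promise<void>): Promise<void> {
    return this.run(async () => {
      if (this.desired.has(channel)) return;
      await doSubscribe(channel);
      this.desired.add(channel);
    });
  }

  // On reconnect, re-derive everything from the desired set instead of
  // clearing it, so in-flight subscribes cannot be lost.
  resubscribeAll(doSubscribe: (ch: string) => Promise<void>): Promise<void> {
    return this.run(async () => {
      for (const ch of this.desired) await doSubscribe(ch);
    });
  }
}
```

Because `desired` records intent rather than observed Redis state, a reconnect can always replay it, and concurrent `subscribe` calls are deduplicated instead of double-subscribing.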
Prompt To Fix With AI
Path: `src/lib/redis/pubsub.ts`, lines 86-89. How can I resolve this? If you propose a fix, please make it concise.

* Apply Gemini Google Search override with audit trail
*/
export function applyGeminiGoogleSearchOverrideWithAudit(
  provider: GeminiProviderOverrideConfig,
  request: Record<string, unknown>
): { request: Record<string, unknown>; audit: GeminiGoogleSearchOverrideSpecialSetting | null } {
  if (provider.providerType !== "gemini" && provider.providerType !== "gemini-cli") {
    return { request, audit: null };
  }

  const preference = provider.geminiGoogleSearchPreference;
Unknown preference is silently ignored
If geminiGoogleSearchPreference is set to an unexpected string, applyGeminiGoogleSearchOverrideWithAudit returns {audit: null} and performs no override, which makes misconfigurations hard to detect. Since this preference comes from persisted provider settings, it’s better to treat unknown values as an error/audit event (or coerce to inherit) so operators can spot bad data.
Suggested change:
} else {
  return { request, audit: {
    type: "gemini_google_search_override",
    scope: "request",
    hit: true,
    providerId: provider.id ?? null,
    providerName: provider.name ?? null,
    action: "passthrough",
    preference: "disabled",
    hadGoogleSearchInRequest: hadGoogleSearch,
  }};
}
Prompt To Fix With AI
Path: `src/lib/gemini/provider-overrides.ts`, lines 93-104. How can I resolve this? If you propose a fix, please make it concise.
Additional Comments (1)
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/repository/provider-endpoints.ts
Line: 736:742
Comment:
**Behavior change: now throws**
`ensureProviderEndpointExistsForUrl` previously returned `false` on empty/invalid URLs; it now throws. Callers that didn’t already wrap this (or rely on a best-effort seed) will now fail hard inside transactions (e.g., provider create/update). If the intent is still “best effort”, convert these back to a non-throwing result or ensure *all* call sites handle the exception and return a user-facing validation error.
How can I resolve this? If you propose a fix, please make it concise.
Code Review
This PR primarily prepares the v0.5.4 release and contains a large number of new features, optimizations, and fixes.
Main changes:
- Provider endpoint management: introduces a provider endpoint pool, allowing multiple API endpoints per provider with health probing and failover. This is the core feature of the update and touches large portions of both frontend and backend.
- Gemini parameter overrides: adds the ability to override the Gemini `googleSearch` parameter at the provider level.
- Architecture improvements:
  - Provider and endpoint create/update operations are now transactional, guaranteeing data consistency.
  - Auth session propagation was reworked with `AsyncLocalStorage`, fixing cases where `getSession` could not obtain a session in certain environments.
  - The Redis pub/sub mechanism is more reliable, handling reconnects, connection errors, and timeouts.
  - Several core services (e.g. `eventEmitter`, `requestFilterEngine`) were converted to `globalThis` singletons to ensure consistency in multi-worker environments.
- UI/UX improvements:
  - The provider form and list pages were refactored to support the new endpoint management features.
  - Multi-currency display was added to the dashboard, user management, and several other pages.
- Reliability:
  - More robust error handling and logging were added to several modules (Redis pub/sub, session tracking, endpoint sync logic).
  - A large number of unit and integration tests cover the new features and the key refactors.
Overall this is a high-quality PR: the code is well structured and thoroughly considered, and the improvements to system stability and data consistency are particularly strong.
I raised some questions about the version number and dependency changes; please confirm.
---

## v0.5.3 (2026-02-03)
@@ -1,5 +1,5 @@
 {
-  "$schema": "https://biomejs.dev/schemas/2.3.11/schema.json",
+  "$schema": "https://biomejs.dev/schemas/2.3.10/schema.json",

  "html2canvas": "^1",
  "ioredis": "^5",
- "jspdf": "^3",
+ "jspdf": "^4",
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
package.json (1)
3-3: ⚠️ Potential issue | 🟡 Minor
The version in package.json does not match the PR title. The PR is titled "release v0.5.4", but the `version` field is still "0.5.0". If this PR is the v0.5.4 release, please bump the version.
src/lib/redis/lua-scripts.ts (1)
84-112: ⚠️ Potential issue | 🟡 Minor
The hardcoded TTL in the `BATCH_CHECK_SESSION_LIMITS` definition is inconsistent with the configurable TTL in `CHECK_AND_TRACK_SESSION`, though this script is currently unused in the codebase.
`BATCH_CHECK_SESSION_LIMITS` (line 87) hardcodes `ttl = 300000`, while `CHECK_AND_TRACK_SESSION` accepts a custom `ttlMs` argument (ARGV[4]) configurable via `SESSION_TTL_MS`. If this script is ever used, the two scripts' cleanup behavior will diverge whenever `SESSION_TTL_MS` is changed.
Suggestion: if `BATCH_CHECK_SESSION_LIMITS` is intended for future use, make it accept a TTL argument; otherwise consider removing the unused script definition.
src/lib/session-tracker.ts (1)
504-525: ⚠️ Potential issue | 🟡 Minor
The "5 minutes" wording in the JSDoc and inline comments is outdated.
The doc comment on `countFromZSet` (line 508) and the inline comment (line 524) still say "5 minutes ago", but the cleanup window is now driven by the `SESSION_TTL` environment variable. Update them to an accurate description (e.g. "SESSION_TTL seconds ago"). Similarly, "no activity for 5 minutes" on line 9 at the top of the file is no longer accurate.
Suggested comment fix:
 * Steps:
- * 1. ZREMRANGEBYSCORE removes expired sessions (older than 5 minutes)
+ * 1. ZREMRANGEBYSCORE removes expired sessions (older than SESSION_TTL)
- // 1. Remove expired sessions (older than 5 minutes)
+ // 1. Remove expired sessions (based on SESSION_TTL)
tests/unit/settings/providers/provider-form-total-limit-ui.test.tsx (1)
141-191: ⚠️ Potential issue | 🟠 Major
Add the missing `geminiGoogleSearchPreference` field.
The `ProviderDisplay` type (`src/types/provider.ts`, line 212) requires `geminiGoogleSearchPreference: GeminiGoogleSearchPreference | null`, but the test object omits the field, causing a TypeScript compile error.
Suggested fix:
 anthropicMaxTokensPreference: null,
 anthropicThinkingBudgetPreference: null,
+geminiGoogleSearchPreference: null,
 tpm: null,
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Line 7: The changelog currently only contains the heading "## v0.5.3
(2026-02-03)" while the PR is titled "release v0.5.4"; either add a new "##
v0.5.4 (YYYY-MM-DD)" section to CHANGELOG.md with the release date and a short
bullet list of changes introduced in this PR, or if the PR title is incorrect,
rename the PR to match the existing "v0.5.3" entry; locate the header "## v0.5.3
(2026-02-03)" in CHANGELOG.md and add the new v0.5.4 header and entries (or
update the PR title) accordingly.
In `@messages/ru/settings/providers/strings.json`:
- Around line 114-115: The values for the keys "circuitOpen" and
"circuitHalfOpen" use an inconsistent English/Russian mix; update their
translations to match the existing Russian terminology (e.g., replace "Circuit
открыт" with "Цепь открыта" and "Circuit полуоткрыт" with "Цепь полуоткрыта") so
they are consistent with other entries like "Цепь разомкнута" and "Состояние
цепи"; ensure you only change the values for the keys "circuitOpen" and
"circuitHalfOpen" and keep JSON syntax intact.
In `@messages/zh-CN/settings/providers/form/errors.json`:
- Around line 5-7: The new keys nameRequired and urlRequired use the term "供应商"
while existing keys addFailed, updateFailed, deleteFailed use "服务商"; update the
messages for nameRequired and urlRequired to use the same term "服务商" so
terminology is consistent across the file (leave keyRequired as-is), locating
the strings by the JSON keys nameRequired and urlRequired and changing their
values to mirror the wording style of addFailed/updateFailed/deleteFailed.
In `@src/app/[locale]/settings/providers/_components/provider-endpoints-table.tsx`:
- Around line 304-307: The time string uses formatDistanceToNow without a
locale, causing English output for non-English users; update the component that
renders endpoint.lastProbedAt to pass a date-fns locale via the options (e.g.,
formatDistanceToNow(..., { addSuffix: true, locale: dateFnsLocale })). Add or
reuse a small mapping or helper (e.g., getDateFnsLocale or a switch keyed by the
current locale variable used in the route like locale) that returns the correct
date-fns locale object for zh-CN, zh-TW, en, ja, ru, import the needed locales
from date-fns/locale, and pass that mapped value to formatDistanceToNow so the
displayed relative time is localized.
In `@src/lib/redis/pubsub.ts`:
- Around line 60-133: Update error-rule-detector.ts to check the return value of
subscribeCacheInvalidation (same as in sensitive-word-detector.ts,
request-filter-engine.ts, provider-cache.ts): after awaiting
subscribeCacheInvalidation(CHANNEL_ERROR_RULES_UPDATED, handleUpdated) verify it
is not null and handle the null case (log and/or return) so callers know
subscription failed. Also harden ensureSubscriber: when the duplicated client
emits "close" or "end" mark subscriberReady = null and subscriberClient = null
(in the handlers already wired in ensureSubscriber) so a permanently lost
connection doesn't leave subscriberReady resolved with a dead sub; ensure fail()
and onError() also clear subscriberClient/subscriberReady (they already clear
subscriberReady but add subscriberClient = null) so subsequent calls to
ensureSubscriber will recreate the subscriber and trigger resubscribeAll as
intended (references: ensureSubscriber, subscriberReady, subscriberClient,
resubscribeAll, subscribeCacheInvalidation, CHANNEL_ERROR_RULES_UPDATED,
handleUpdated).
In `@tests/unit/settings/providers/provider-endpoint-hover.test.tsx`:
- Around line 210-229: The test "sorts endpoints correctly: Healthy > Unknown >
Unhealthy" asserts display names but the component (ProviderEndpointHover)
actually renders endpoint URLs; update the assertion in that test (the
expect(labels).toEqual(...)) to assert the endpoint URLs in the correct sorted
order (matching mockEndpoints returned by
providerEndpointsActionMocks.getProviderEndpointsByVendor) — i.e., replace the
human-readable labels with the corresponding URL strings for Healthy, Unknown,
and Unhealthy endpoints so the expected array matches the rendered content.
🧹 Nitpick comments (28)
src/app/v1/_lib/proxy/thinking-signature-rectifier.ts (1)
73-82: The `lower.includes("redacted_thinking")` condition is redundant.
"redacted_thinking" contains the substring "thinking", so `lower.includes("thinking")` already covers the `lower.includes("redacted_thinking")` case and the right-hand side of the `||` is never evaluated. The current code is functionally correct and keeping the check is harmless (arguably self-documenting); this is only a heads-up.
src/app/v1/_lib/proxy/thinking-signature-rectifier.test.ts (1)
92-106: Inconsistent test-description language: this case uses English while the rest use Chinese. Every other test case in the file (e.g. "应命中:...", "不应命中:...") has a Chinese description; keep them consistent.
Suggested change:
- test("should detect 'cannot be modified' error", () => {
+ test("应命中:thinking/redacted_thinking blocks cannot be modified(thinking 块被修改)", () => {
src/app/[locale]/dashboard/logs/_components/virtualized-logs-table.tsx (1)
610-654: The Cost column tooltip differs from `usage-logs-table.tsx`. The tooltip in the virtualized table (lines 639-646) shows only input/output tokens, while `usage-logs-table.tsx` (lines 380-428) also includes cache write 5m/1h, cache read, the >200k-context surcharge note, and multiplier info. If this simplification is intentional, ignore this; otherwise keep the two views consistent so users see the same level of billing detail in either view.
tests/unit/actions/providers-recluster.test.ts (1)
214-239: The transaction-context propagation test looks correct.
The mock chain on the `tx` object (update → set → where) and the `{ tx }` propagation assertion on `getOrCreateProviderVendorIdFromUrlsMock` are reasonable and confirm the operations are transactional.
Note that the "publishes cache invalidation after apply" test at lines 261-269 still uses an inline `tx` object without verifying `{ tx }` propagation. If transaction-context verification is to be unified later, consider extracting `tx` there as well and adding a similar assertion.
tests/unit/lib/utils/currency.test.ts (1)
28-35: The `codes` array can be derived from the keys of `CURRENCY_CONFIG` to avoid manual maintenance. If a new currency type is added later, this hardcoded list is easy to forget to update. Use `Object.keys(CURRENCY_CONFIG)` instead.
♻️ Suggested improvement:
 test("all CURRENCY_CONFIG entries have valid symbols", () => {
-  const codes: CurrencyCode[] = ["USD", "CNY", "EUR", "JPY", "GBP", "HKD", "TWD", "KRW", "SGD"];
+  const codes = Object.keys(CURRENCY_CONFIG) as CurrencyCode[];
   for (const code of codes) {
tests/unit/lib/session-tracker-cleanup.test.ts (1)
1-53: The test helpers largely duplicate `session-ttl-validation.test.ts`.
The `makePipeline` factory, the `pipelineCalls` array, the logger mock, and the redis mock are nearly identical across the two test files. Consider extracting them into a shared helper module (e.g. `tests/unit/lib/__helpers__/session-tracker-mocks.ts`) to reduce maintenance cost.
src/app/[locale]/settings/providers/_components/provider-endpoints-table.tsx (2)
339-345: Delete confirmation uses the browser-native `confirm()`, inconsistent with the UX elsewhere in the app. `DeleteVendorDialog` (in `provider-vendor-view.tsx`) implements a two-step confirmation with the `AlertDialog` component, while deleting an endpoint here calls `confirm()` directly, which is inconsistent and cannot be themed. Prefer `AlertDialog`, or at least a custom confirmation dialog.
389-391: Type filtering uses a hardcoded exclusion list.
`["claude-auth", "gemini-cli"]` is hardcoded as the set of non-selectable types. If similar non-user-selectable types are added later, this spot is easy to miss. Consider adding a `selectable` or `userCreatable` property to `getProviderTypeConfig` to drive the filtering logic.
src/app/[locale]/dashboard/users/users-page-client.tsx (1)
195-204: The currency fetch from system settings is reasonable, with one small redundancy.
`staleTime: 30_000` duplicates the outer `QueryClient` default (`staleTime: 30000`, line 38) and can be dropped. No functional impact; purely redundant configuration.
src/lib/emit-event.ts (1)
57-82: `emitRequestFiltersUpdated` gains logging observability, but logging consistency across the three `emit*` functions is worth considering.
`emitRequestFiltersUpdated` adds detailed info/warn logs while `emitErrorRulesUpdated` and `emitSensitiveWordsUpdated` remain silent. If this is intentional (e.g. request filters have higher debugging needs), ignore this; otherwise unify the logging strategy across all three functions.
src/lib/auth-session-storage.node.ts (1)
11-13: Add a comment at the assertion explaining why the types are incompatible.
`AsyncLocalStorage<ScopedAuthContext>`'s `run` method accepts `callback: (...args: unknown[]) => T`, while `AuthSessionStorage` requires `callback: () => T`. The incompatibility means TypeScript cannot cast directly, so the double assertion is necessary. Suggested comment:
// AsyncLocalStorage.run's callback accepts variadic args, while AuthSessionStorage's takes none,
// so we go through `as unknown` to bypass the type check
new AsyncLocalStorage<ScopedAuthContext>() as unknown as AuthSessionStorage;
src/lib/auth.ts (1)
37-45: `runWithAuthSession` degrades silently when the storage is unavailable; confirm whether a log is warranted.
When `globalThis.__cchAuthSessionStorage` is uninitialized, `fn()` executes without a scoped context. That is a reasonable fallback in non-Node.js environments (e.g. Edge Runtime), but when the storage should exist and is missing due to initialization order, it can cause hard-to-diagnose session loss.
tests/unit/settings/providers/provider-rich-list-item-endpoints.test.tsx (2)
61-68: Clean up leftover comments. The comments on lines 61-63 read like in-progress development notes; trim them:
-// Mock ProviderEndpointHover to avoid complex children rendering if needed,
-// but we want to check if it's rendered.
-// Actually, let's NOT mock it fully, or mock it to render a simple test id.
+// Mock ProviderEndpointHover with a simple stub to verify rendering
143-165: `renderWithProviders` is not auto-cleaned in `beforeEach` and relies on a manual `unmount()` call. In the current pattern, if a test throws before `unmount()` runs, the DOM container leaks. The `document.body` cleanup in `beforeEach` (lines 183-185) is a backstop, but consider `afterEach` auto-cleanup or `@testing-library/react`'s `render` to simplify. The current implementation works; this is only a suggestion.
src/app/[locale]/settings/providers/_components/provider-rich-list-item.tsx (1)
100-104: Redundant `await`.
In `queryFn`, the `await` in `async () => await getProviderVendors()` is unnecessary; return the promise directly.
Suggested simplification:
 const { data: vendors = [] } = useQuery({
   queryKey: ["provider-vendors"],
-  queryFn: async () => await getProviderVendors(),
+  queryFn: () => getProviderVendors(),
   staleTime: 60000,
 });
src/app/[locale]/settings/providers/_components/provider-endpoint-hover.tsx (1)
132-135: Stripping the `labelKey` prefix is fragile; prefer an already-scoped key.
`statusModel.labelKey` is a fully qualified key (e.g. `"settings.providers.endpointStatus.healthy"`), while the `t` returned by `useTranslations("settings.providers")` is already scoped to that namespace, hence the `.replace("settings.providers.", "")` here. If the `labelKey` prefix in `endpoint-status.ts` ever changes, this fails silently and shows the fallback text.
Consider having `EndpointStatusModel` expose a relative key directly (e.g. `"endpointStatus.healthy"`), or exporting a namespace constant from `endpoint-status.ts` to remove the hardcoded coupling.
tests/unit/settings/providers/provider-form-endpoint-pool.test.tsx (1)
11-14: Files under `src/` should be referenced via the `@/` path alias. Lines 11, 12, and 14 import files under `src/` with relative paths, while the `vi.mock` calls in the same file (lines 33, 38, 43, 57) already use the `@/` alias correctly. Use `@/` consistently. Suggested change:

-import { ProviderForm } from "../../../../src/app/[locale]/settings/providers/_components/forms/provider-form";
-import { Dialog } from "../../../../src/components/ui/dialog";
+import { ProviderForm } from "@/app/[locale]/settings/providers/_components/forms/provider-form";
+import { Dialog } from "@/components/ui/dialog";
 import enMessages from "../../../../messages/en";
-import type { ProviderEndpoint, ProviderVendor } from "../../../../src/types/provider";
+import type { ProviderEndpoint, ProviderVendor } from "@/types/provider";

As per coding guidelines: "Use path alias `@/` to reference files in `./src/` directory"

src/lib/sensitive-word-detector.ts (1)
36-38: Async initialization is triggered from the constructor; add a comment explaining it.
The Promise returned by `setupEventListener()` is implicitly discarded in the constructor. The internal try-catch protection is thorough (no unhandled rejection can escape), but a one-line comment or an explicit `void` marker would make the fire-and-forget intent clearer. Suggested change:

 constructor() {
-  this.setupEventListener();
+  // Fire-and-forget: errors are caught internally
+  void this.setupEventListener();
 }

src/app/[locale]/settings/providers/_components/forms/provider-form/index.tsx (3)
702-702: A named export already exists; the default export is redundant and violates the coding guidelines. Line 624 already has `export function ProviderForm`, making this `export default` superfluous. Per the coding guidelines, prefer named exports. Suggested change:

-export default ProviderForm;

As per coding guidelines, "Prefer named exports over default exports".
127-137: The `queryKey` fallback never triggers a request when `enabled: false`, but readability suffers.
When `endpointPoolQueryKey` is `null`, `enabled` is `false` and `queryKey` falls back to `["provider-endpoints", "unresolved", "provider-form"]`. Functionally no request is made, but this "placeholder" key may confuse maintainers. Consider adding a comment explaining the intent.
94-98: The vendor query lacks a `staleTime` setting.
The `["provider-vendors"]` query sets no `staleTime`, so component re-renders may refetch frequently. Consider a reasonable `staleTime` based on how often vendor data changes.

src/lib/redis/pubsub.ts (1)
20-58: `resubscribeAll`'s per-channel subscription could be batched.
It currently calls `sub.subscribe(channel)` once per channel to resubscribe. With many channels, ioredis's variadic form (`sub.subscribe(channel1, channel2, ...)`) would cut round trips. Given the current channel count is small, this is only an optimization note.

tests/unit/settings/providers/provider-endpoints-table.test.tsx (1)
106-127: `createRoot` plus manual DOM management is used instead of `@testing-library/react`.
Rendering and unmounting are currently managed by hand via `createRoot` + `act`. If the project already uses `@testing-library/react`, consider `render` + `cleanup` for better tooling support (`screen.getByText`, `waitFor`, etc.) and fewer manual `flushTicks` calls. As a low-integration unit test, though, the current approach is acceptable.
src/lib/gemini/provider-overrides.ts (1)
95-138: `applyGeminiGoogleSearchOverrideWithAudit` duplicates the type check and preference logic internally (optional optimization).
The `WithAudit` variant repeats the same provider-type and preference pre-checks as `applyGeminiGoogleSearchOverride` at lines 99-108, then calls the latter at line 124 (which runs those checks again). This is correct and readable, but if the pre-check logic ever changes, both places must be updated in sync. A shared `resolvePreference` internal function could deduplicate them; given the logic is simple and currently maintainable, this does not block the merge.

src/repository/provider.ts (1)
86-145: The `returning()` column selection is repeated 5 times in this file; extract a shared constant.
The `.returning()`/`.select()` column definitions in `createProvider`, `findProviderList`, `findAllProvidersFresh`, `findProviderById`, and `updateProvider` are nearly identical (~50 fields). Every new field (such as this PR's `geminiGoogleSearchPreference`) must be synchronized across 5 places, which is easy to miss. Extract a `providerColumns` constant object:

const providerColumns = {
  id: providers.id,
  name: providers.name,
  // ... remaining fields
  geminiGoogleSearchPreference: providers.geminiGoogleSearchPreference,
  createdAt: providers.createdAt,
  updatedAt: providers.updatedAt,
  deletedAt: providers.deletedAt,
} as const;

then reuse `.select(providerColumns)` / `.returning(providerColumns)` throughout.

Also applies to: 168-222, 245-299, 326-380, 531-588
tests/unit/settings/providers/provider-form-total-limit-ui.test.tsx (2)
18-23: The `hasOwn` helper is needlessly roundabout.
It calls `Object.hasOwn` through an unsafe type assertion, `(Object as unknown as { hasOwn: ... }).hasOwn(obj, prop)`. If the test environment lacks `Object.hasOwn` (ES2022), this still throws at runtime. Recommended alternative:

-function hasOwn(obj: object, prop: PropertyKey): boolean {
-  return (Object as unknown as { hasOwn: (obj: object, prop: PropertyKey) => boolean }).hasOwn(
-    obj,
-    prop
-  );
-}
+function hasOwn(obj: object, prop: PropertyKey): boolean {
+  return Object.prototype.hasOwnProperty.call(obj, prop);
+}
11-14: Import paths mix relative paths and the `@/` alias. Lines 11-14 import components and types via `../../../../src/...` relative paths, while the `vi.mock()` calls at lines 42, 47, 52, and 59 use the `@/` alias. If the test environment has the alias configured (the mock calls suggest it does), use the `@/` prefix consistently. As per coding guidelines, "Use path alias `@/` to reference files in `./src/` directory".

src/repository/provider-endpoints.ts (1)
1049-1068: The type assertion behind savepoint detection is fragile; add a comment documenting its scope. Lines 1049-1051 assert `tx` to a type that may include a `transaction` method in order to detect savepoint support. In practice, when passed in via `db.transaction()` or an external `options.tx`, Drizzle's `TransactionExecutor` does support `.transaction()` (as a savepoint), so the `else` branch (lines 1064-1067) should theoretically never fire.
However, the `QueryExecutor` type definition (lines 22-25) deliberately omits the `transaction` method, which is why runtime detection is needed here. If someone later passes an executor that genuinely lacks savepoint support, the unique violation at line 1066 will propagate straight to the caller and abort the whole transaction. Add a short comment on the `QueryExecutor` type or at this call site explaining this design trade-off.
## v0.5.3 (2026-02-03)
The PR title says v0.5.4, but the CHANGELOG only adds a v0.5.3 entry.
The current PR title is "release v0.5.4", yet no v0.5.4 change record appears in the CHANGELOG. Please confirm whether a v0.5.4 entry should be added, or whether the PR title is wrong.
🤖 Prompt for AI Agents
In `@CHANGELOG.md` at line 7, The changelog currently only contains the heading
"## v0.5.3 (2026-02-03)" while the PR is titled "release v0.5.4"; either add a
new "## v0.5.4 (YYYY-MM-DD)" section to CHANGELOG.md with the release date and a
short bullet list of changes introduced in this PR, or if the PR title is
incorrect, rename the PR to match the existing "v0.5.3" entry; locate the header
"## v0.5.3 (2026-02-03)" in CHANGELOG.md and add the new v0.5.4 header and
entries (or update the PR title) accordingly.
function ensureSubscriber(baseClient: Redis): Promise<Redis> {
  if (subscriberReady) return subscriberReady;

  subscriberReady = new Promise<Redis>((resolve, reject) => {
    const sub = baseClient.duplicate();
    let timeoutId: ReturnType<typeof setTimeout> | null = null;

    function cleanup(): void {
      sub.off("ready", onReady);
      sub.off("error", onError);

      if (timeoutId) {
        clearTimeout(timeoutId);
        timeoutId = null;
      }
    }

    function fail(error: Error): void {
      cleanup();
      subscriberReady = null;
      try {
        sub.disconnect();
      } catch {
        // ignore
      }
      reject(error);
    }

    function onReady(): void {
      cleanup();
      subscriberClient = sub;
      subscribedChannels.clear();

      sub.on("error", (error) =>
        logger.warn("[RedisPubSub] Subscriber connection error", { error })
      );
      sub.on("close", () => subscribedChannels.clear());
      sub.on("end", () => subscribedChannels.clear());
      sub.on("ready", () => void resubscribeAll(sub));

      sub.on("message", (channel: string) => {
        const callbacks = subscriptions.get(channel);
        if (!callbacks || callbacks.size === 0) return;
        for (const cb of callbacks) {
          try {
            cb();
          } catch (error) {
            logger.error("[RedisPubSub] Callback error", { channel, error });
          }
        }
      });

      logger.info("[RedisPubSub] Subscriber connection ready");
      resolve(sub);
    }

    function onError(error: Error): void {
      logger.warn("[RedisPubSub] Subscriber connection error", { error });
      fail(error);
    }

    sub.once("ready", onReady);
    sub.once("error", onError);

    // Timeout 10 seconds
    timeoutId = setTimeout(() => {
      if (sub.status !== "ready") {
        fail(new Error("Redis subscriber connection timeout"));
      }
    }, 10000);
  });

  return subscriberReady;
}
The uncaptured return value in error-rule-detector.ts needs to be addressed.
At line 116 of src/lib/error-rule-detector.ts, the return value of subscribeCacheInvalidation is not captured:
await subscribeCacheInvalidation(CHANNEL_ERROR_RULES_UPDATED, handleUpdated);
The other three callers (sensitive-word-detector.ts, request-filter-engine.ts, provider-cache.ts) all correctly check whether the return value is null. Although subscribeCacheInvalidation logs failures internally, this call site hides the error, so the caller cannot tell whether the subscription failed.
There is also a potential architectural issue: if, after ensureSubscriber successfully establishes a connection, the Redis connection drops unexpectedly and cannot auto-reconnect (e.g. the Redis service is stopped), subscriberReady stays resolved and holds a dead sub object. Subsequent ensureSubscriber calls return that resolved Promise directly, handing callers a dead connection that fails on sub.subscribe(). The resubscribeAll mechanism does fire on a successful reconnect, but there is no retry path for permanent connection failure.
🤖 Prompt for AI Agents
In `@src/lib/redis/pubsub.ts` around lines 60 - 133, Update error-rule-detector.ts
to check the return value of subscribeCacheInvalidation (same as in
sensitive-word-detector.ts, request-filter-engine.ts, provider-cache.ts): after
awaiting subscribeCacheInvalidation(CHANNEL_ERROR_RULES_UPDATED, handleUpdated)
verify it is not null and handle the null case (log and/or return) so callers
know subscription failed. Also harden ensureSubscriber: when the duplicated
client emits "close" or "end" mark subscriberReady = null and subscriberClient =
null (in the handlers already wired in ensureSubscriber) so a permanently lost
connection doesn't leave subscriberReady resolved with a dead sub; ensure fail()
and onError() also clear subscriberClient/subscriberReady (they already clear
subscriberReady but add subscriberClient = null) so subsequent calls to
ensureSubscriber will recreate the subscriber and trigger resubscribeAll as
intended (references: ensureSubscriber, subscriberReady, subscriberClient,
resubscribeAll, subscribeCacheInvalidation, CHANNEL_ERROR_RULES_UPDATED,
handleUpdated).
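A minimal sketch of the permanent-loss hardening described above. `FakeConn` stands in for an ioredis `duplicate()` client, and the cache is simplified to a plain value rather than the module's Promise:

```typescript
import { EventEmitter } from "node:events";

// Stand-in for an ioredis subscriber connection.
class FakeConn extends EventEmitter {}

let cachedSubscriber: FakeConn | null = null;
let created = 0;

function ensureSubscriber(): FakeConn {
  if (cachedSubscriber) return cachedSubscriber;
  const conn = new FakeConn();
  created += 1;
  // The hardening: a permanently lost connection clears the cache, so the
  // next caller creates a fresh subscriber instead of receiving a dead one.
  const reset = (): void => {
    if (cachedSubscriber === conn) cachedSubscriber = null;
  };
  conn.on("close", reset);
  conn.on("end", reset);
  cachedSubscriber = conn;
  return conn;
}

const first = ensureSubscriber();
first.emit("end"); // simulate permanent connection loss
const second = ensureSubscriber();
console.log(created, first !== second); // 2 true
```

In the real module the same reset would null out both `subscriberReady` and `subscriberClient` inside the `"close"`/`"end"` handlers.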
Code Review Summary
This is a large release PR consolidating 13 merged PRs with critical bug fixes and new features. After comprehensive analysis through 6 review perspectives, I identified 1 Critical issue that must be addressed.
PR Size: XL
- Lines changed: 11,326 (10,249 additions + 1,077 deletions)
- Files changed: 103
Recommendation for future releases: Consider splitting XL PRs into smaller batches:
- Batch 1: Critical bug fixes (session tracker, provider endpoints, hot-reload)
- Batch 2: New features (Gemini Google Search, endpoint pool UI, cost column toggle)
- Batch 3: Minor improvements and dependency updates
This improves reviewability and reduces risk of introducing regressions.
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Error Handling | 1 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
Critical Issues (Must Fix)
1. [ERROR-SILENT] Empty catch blocks in cache invalidation events
File: src/lib/emit-event.ts
Lines: 16-18, 25-27, 39-41, 53-55, 62-64
Confidence: 95/100
Problem:
All three emit functions (emitErrorRulesUpdated, emitSensitiveWordsUpdated, emitRequestFiltersUpdated) contain a total of five empty catch blocks that swallow errors without logging. This violates the "No Silent Failures" principle.
Current code:
try {
const { eventEmitter } = await import("@/lib/event-emitter");
eventEmitter.emitErrorRulesUpdated();
} catch {
  // ignore import errors  <-- NO LOGGING!
}

Why this is critical:
- Cache invalidation failures go unnoticed in production
- Multi-instance deployments can have stale caches indefinitely
- Debugging cache inconsistency becomes impossible without logs
- Operators have zero visibility into event propagation failures
Suggested fix:
try {
const { eventEmitter } = await import("@/lib/event-emitter");
eventEmitter.emitErrorRulesUpdated();
} catch (error) {
logger.warn("[emit-event] Failed to emit local event (graceful degradation)", {
event: "errorRulesUpdated",
error: error instanceof Error ? error.message : String(error),
});
}
try {
const { CHANNEL_ERROR_RULES_UPDATED, publishCacheInvalidation } = await import(
"@/lib/redis/pubsub"
);
await publishCacheInvalidation(CHANNEL_ERROR_RULES_UPDATED);
} catch (error) {
logger.error("[emit-event] Failed to publish cache invalidation to Redis", {
channel: "error_rules",
error: error instanceof Error ? error.message : String(error),
});
}Note: Use logger.warn for local EventEmitter failures (non-critical) and logger.error for Redis pub/sub failures (critical for multi-instance consistency).
Review Coverage
- Logic and correctness - Clean
- Security (OWASP Top 10) - Clean
- Error handling - 1 Critical issue found
- Type safety - Clean
- Documentation accuracy - Clean
- Test coverage - Excellent (29 test files added/modified)
- Code clarity - Good
Positive Observations
Strong test coverage:
- 29 test files covering new features
- Integration tests for race conditions (provider-endpoint-sync-race.test.ts)
- Unit tests for session cleanup, TTL validation, and hot-reload patterns
Good architectural decisions:
- Transaction wrapping for provider endpoint sync prevents race conditions
- AsyncLocalStorage for session propagation is the correct pattern
- Redis pub/sub + EventEmitter dual-channel approach is solid
Code quality:
- Comprehensive error handling in most areas (except emit-event.ts)
- Good use of TypeScript types
- Clear comments explaining complex logic
Recommendation
APPROVE with required changes:
- Add logging to all catch blocks in src/lib/emit-event.ts (Critical)
- After the fix, this PR is ready to merge
The core functionality is solid, test coverage is excellent, and the architectural decisions are sound. The single critical issue is straightforward to fix and does not require code restructuring.
Automated review by Claude AI
pipeline.zadd(providerZSetKey, now, sessionId);
// Use dynamic TTL based on session TTL (at least 1h to cover active sessions)
pipeline.expire(providerZSetKey, Math.max(3600, ttlSeconds));
[HIGH] [LOGIC-BUG] refreshSession() only refreshes TTL for provider ZSET; global/key/user ZSETs can expire or lose TTL under long-lived sessions
Evidence: src/lib/session-tracker.ts:189 only sets TTL for provider:${providerId}:active_sessions:
pipeline.zadd("global:active_sessions", now, sessionId);
pipeline.zadd(`key:${keyId}:active_sessions`, now, sessionId);
pipeline.zadd(providerZSetKey, now, sessionId);
// Use dynamic TTL based on session TTL (at least 1h to cover active sessions)
pipeline.expire(providerZSetKey, Math.max(3600, ttlSeconds));
if (userId !== undefined) {
pipeline.zadd(`user:${userId}:active_sessions`, now, sessionId);
}

Why this is a problem: With SESSION_TTL > 3600 (or when a key sees no new sessions for > 1h), global:active_sessions / key:*:active_sessions / user:*:active_sessions can still expire at their previous TTL and then be recreated by ZADD without an EXPIRE, causing inaccurate session counts and potential Redis key leakage (no TTL).
Suggested fix:
const ttlSeconds = SessionTracker.SESSION_TTL_SECONDS;
const zsetTtlSeconds = Math.max(3600, ttlSeconds);
pipeline.zadd("global:active_sessions", now, sessionId);
pipeline.expire("global:active_sessions", zsetTtlSeconds);
pipeline.zadd(`key:${keyId}:active_sessions`, now, sessionId);
pipeline.expire(`key:${keyId}:active_sessions`, zsetTtlSeconds);
pipeline.zadd(providerZSetKey, now, sessionId);
pipeline.expire(providerZSetKey, zsetTtlSeconds);
if (userId !== undefined) {
pipeline.zadd(`user:${userId}:active_sessions`, now, sessionId);
pipeline.expire(`user:${userId}:active_sessions`, zsetTtlSeconds);
}(Optional) Extend tests/unit/lib/session-tracker-cleanup.test.ts to assert expire is called for global:/key:/user: when SESSION_TTL=7200.
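The optional test idea can be sketched with a fake pipeline that records commands. `FakePipeline` and `refreshSessionSketch` are illustrative stand-ins, not the real SessionTracker API; only the key names mirror the snippet above:

```typescript
// A fake pipeline records (command, key) pairs so we can assert that every
// ZSET gets an EXPIRE alongside its ZADD, not just the provider ZSET.
class FakePipeline {
  cmds: Array<[string, string]> = [];
  zadd(key: string, _score: number, _member: string): void {
    this.cmds.push(["zadd", key]);
  }
  expire(key: string, _ttl: number): void {
    this.cmds.push(["expire", key]);
  }
}

function refreshSessionSketch(
  p: FakePipeline,
  keyId: string,
  providerId: string,
  userId: string | undefined,
  ttlSeconds: number
): void {
  const now = Date.now();
  const zsetTtlSeconds = Math.max(3600, ttlSeconds);
  const keys = [
    "global:active_sessions",
    `key:${keyId}:active_sessions`,
    `provider:${providerId}:active_sessions`,
    ...(userId !== undefined ? [`user:${userId}:active_sessions`] : []),
  ];
  for (const k of keys) {
    p.zadd(k, now, "session-1");
    p.expire(k, zsetTtlSeconds); // TTL refreshed for every ZSET
  }
}

const p = new FakePipeline();
refreshSessionSketch(p, "k1", "p1", "u1", 7200);
const zadds = p.cmds.filter(([c]) => c === "zadd").map(([, k]) => k);
const expires = p.cmds.filter(([c]) => c === "expire").map(([, k]) => k);
console.log(zadds.length === 4 && expires.length === 4); // true
```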
<EndpointLatencySparkline endpointId={endpoint.id} limit={12} />
{endpoint.lastProbedAt ? (
  <span className="text-muted-foreground text-[10px] whitespace-nowrap">
    {formatDistanceToNow(new Date(endpoint.lastProbedAt), { addSuffix: true })}
[MEDIUM] [STANDARD-VIOLATION] Relative time display uses date-fns default (English), bypassing i18n
Evidence: src/app/[locale]/settings/providers/_components/provider-endpoints-table.tsx:306
{formatDistanceToNow(new Date(endpoint.lastProbedAt), { addSuffix: true })}Why this is a problem: formatDistanceToNow defaults to en-US without an explicit locale, so non-English UIs will render English strings (for example, "about 1 hour ago"), violating CLAUDE.md:
i18n Required - All user-facing strings must use i18n (5 languages supported). Never hardcode display text
Suggested fix (use existing locale-aware helper):
import { useLocale } from "next-intl";
import { formatDateDistance } from "@/lib/utils/date-format";
const locale = useLocale();
{formatDateDistance(endpoint.lastProbedAt, new Date(), locale, { addSuffix: true })}(Alternative: pass locale: getDateFnsLocale(locale) to formatDistanceToNow.)
Code Review Summary
This PR is an XL release bundle that spans provider endpoint pooling (DB + UI), Redis hot-reload/pubsub, session TTL/rate-limit behavior, Gemini request overrides, and a dependency bump. The breadth makes regression risk high and makes it hard to reason about behavior changes (especially around Redis/session lifecycle) as a single unit.
PR Size: XL
- Lines changed: 11326
- Files changed: 103
Split suggestions (recommended for reviewability and safer rollback):
- Provider endpoint pools UI + i18n message updates + UI tests
- Provider endpoint sync/transaction changes + migration + repository tests
- Redis pub/sub hot-reload + globalThis singleton refactors + unit tests
- Session TTL + Lua/rate-limit/session-tracker adjustments + unit tests
- Gemini Google Search override feature (schema/types/forwarder/tests)
- Dependency bump(s) (jspdf) isolated
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 1 | 0 | 1 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
Critical Issues (Must Fix)
src/lib/session-tracker.ts:189—refreshSession()refreshesprovider:*:active_sessionsTTL but notglobal:/key:/user:ZSET TTLs; under long-lived sessions (notably whenSESSION_TTL > 3600) those keys can expire early and then be recreated without TTL, leading to inaccurate counts and potential Redis key leakage.
High Priority Issues (Should Fix)
src/app/[locale]/settings/providers/_components/provider-endpoints-table.tsx:306—formatDistanceToNow(...)is used without an explicit locale, so relative time strings will render in English in non-English locales (violates i18n requirement).
Review Coverage
- Logic and correctness
- Security (OWASP Top 10)
- Error handling
- Type safety
- Documentation accuracy
- Test coverage
- Code clarity
Automated review by Codex AI
- PR detected: #731 (release v0.5.4); size labeled size/XL.
- Inline review comments posted:
  - src/lib/session-tracker.ts:189 — TTL refresh only applied to provider:*:active_sessions, not global:/key:/user: ZSETs (risk: early expiry + recreate-without-TTL leak under long-lived sessions).
  - src/app/[locale]/settings/providers/_components/provider-endpoints-table.tsx:306 — formatDistanceToNow(...) used without locale (relative time strings will render in English; i18n violation).
- Summary review submitted via gh pr review --comment.
- fix(auth): prevent scoped session access widening via ?? -> && guard - fix(i18n): standardize zh-CN provider terminology to "服务商" - fix(i18n): use consistent Russian translations for circuit status - fix(i18n): replace raw formatDistanceToNow with locale-aware RelativeTime - fix(gemini): log warning for unknown google search preference values - fix(error-rules): check subscribeCacheInvalidation return value - fix(test): correct endpoint hover sort test to assert URLs not labels Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧪 Test Results
Overall: ✅ All tests passed
Additional Comments (1)
Path: src/lib/auth-session-storage.node.ts (line 11:15)

**Missing module export**

This module only initializes `globalThis.__cchAuthSessionStorage` but exports nothing, so importing it for side effects is the only way it can be used. If any code expects to `import { authSessionStorage } ...` (or similar) this will be `undefined` at runtime. Either export the initialized storage, or ensure all call sites only ever side-effect import this file and read via `globalThis.__cchAuthSessionStorage` (and remove any typed imports expecting a value).
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/unit/settings/providers/provider-endpoint-hover.test.tsx`:
- Around line 107-193: The mockEndpoints array contains fields that don't exist
on ProviderEndpoint (remove lastOk and lastLatencyMs) and is missing required
fields (add lastProbeStatusCode, lastProbeErrorType, lastProbeErrorMessage) and
uses strings for createdAt/updatedAt instead of Date objects; update the
mockEndpoints entries (the objects in the mockEndpoints constant) to remove
lastOk/lastLatencyMs, add the three missing lastProbe* properties with
appropriate values (null or numbers/strings matching the type), and convert
createdAt and updatedAt to Date instances (e.g., new Date("2024-01-01")),
applying the same fixes to all five endpoint objects so they match the
ProviderEndpoint type.
🧹 Nitpick comments (7)
tests/unit/settings/providers/provider-endpoint-hover.test.tsx (1)
62-84:renderWithProviders中容器未在测试失败时自动清理
renderWithProviders返回的unmount需要手动调用。如果测试中途抛出异常,unmount()不会执行,容器会残留在 DOM 中。虽然beforeEach中有 DOM 清理逻辑(Line 102-104),但建议考虑使用afterEach统一清理,或配合 vitest 的 cleanup 机制,确保测试隔离性。src/lib/auth.ts (2)
37-45:runWithAuthSession降级逻辑合理,但静默降级可能掩盖配置问题。当
globalThis.__cchAuthSessionStorage为undefined时,函数直接调用fn()而不注入会话上下文。这意味着后续getScopedAuthSession()会返回null,getSession()将回退到 cookie/header 路径。这个降级行为在客户端或 Edge Runtime 中是合理的,但如果在 Node.js 服务端(预期 storage 已初始化)出现
undefined,则属于配置遗漏,静默跳过可能导致难以排查的鉴权问题。建议至少在process.env.NODE_ENV === "development"时打印一条console.warn。可选:在开发环境添加降级警告
export function runWithAuthSession<T>( session: AuthSession, fn: () => T, options?: { allowReadOnlyAccess?: boolean } ): T { const storage = globalThis.__cchAuthSessionStorage; - if (!storage) return fn(); + if (!storage) { + if (process.env.NODE_ENV === "development") { + console.warn("[auth] __cchAuthSessionStorage is not initialized; scoped session will not be available."); + } + return fn(); + } return storage.run({ session, allowReadOnlyAccess: options?.allowReadOnlyAccess ?? false }, fn); }
162-172:options?.allowReadOnlyAccess ?? true的默认值与validateKey中的?? false不对称,逻辑正确但容易误读。第 167 行
options?.allowReadOnlyAccess ?? true使得getSession()无参数调用时,scoped 路径默认"继承 scoped context 的 allowReadOnlyAccess",而非 scoped 路径(validateKey第 66 行)默认为false(要求完全权限)。经过分析,AND 语义保证了"不允许提权"——两方必须同时同意才放行,逻辑正确。但这个不对称的默认值在维护时容易引起混淆。建议补充一行注释说明
?? true的语义意图。建议:添加注释说明默认值意图
// 关键:scoped 会话必须遵循其"创建时语义",仅允许内部显式降权(不允许提权) + // options?.allowReadOnlyAccess 未传时默认 true,表示"不额外收紧",以继承 scoped 创建时的权限; + // 与 validateKey 中默认 false 不同,因为 scoped 路径已在入口处完成了首次鉴权。 const effectiveAllowReadOnlyAccess = scoped.allowReadOnlyAccess && (options?.allowReadOnlyAccess ?? true);src/app/[locale]/settings/providers/_components/provider-endpoints-table.tsx (4)
116-128:useMemo回调中变量名t遮蔽了外层的useTranslations返回值。Line 118 的
typeOrder.map((t, i) => ...)中t与 Line 95 的const t = useTranslations(...)同名,虽然作用域不同不会导致 bug,但降低了可读性且容易在后续维护中引发混淆。建议重命名 map 参数
- const typeIndexMap = new Map(typeOrder.map((t, i) => [t, i])); + const typeIndexMap = new Map(typeOrder.map((type, i) => [type, i]));
338-344: 删除确认使用了浏览器原生confirm()对话框。原生
confirm()会阻塞主线程,且样式无法定制、不符合项目 UI 规范。项目已引入Dialog/AlertDialog组件,建议使用AlertDialog替代以保持 UI 一致性并提升可访问性。当前功能可用,可作为后续优化。
496-504:url输入框采用了混合受控/非受控模式。Line 503 通过
onChange更新urlstate(用于UrlPreview预览),但未设置value={url},表单提交时又从FormData读取值。虽然功能正确(两者值始终一致),但这种半受控模式在维护时容易产生困惑。建议统一为完全受控(设置value={url})或在提交时也从 state 读取。统一为受控模式
<Input id="url" name="url" placeholder={t("endpointUrlPlaceholder")} required + value={url} onChange={(e) => setUrl(e.target.value)} />
88-169:ProviderEndpointsTable整体结构清晰。查询键构建、条件数据获取、排序逻辑和加载/空状态处理均合理。一个小建议:
useQuery未设置staleTime,默认为 0,每次组件挂载或窗口聚焦都会重新请求。如果端点数据变更频率不高,可考虑设置适当的staleTime(如EndpointLatencySparkline中使用的30_000)以减少不必要的请求。
- Export authSessionStorage from auth-session-storage.node.ts to prevent undefined on named imports; remove duplicate declare global block - Fix mockEndpoints in provider-endpoint-hover test: remove nonexistent lastOk/lastLatencyMs fields, add missing lastProbe* fields, use Date objects for createdAt/updatedAt Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -0,0 +1 @@
ALTER TABLE "providers" ADD COLUMN "gemini_google_search_preference" varchar(20);
\ No newline at end of file
Missing trailing newline
This migration file is committed without a trailing newline (\ No newline at end of file). Some tooling (formatters, certain migration runners / diffs) assumes POSIX newlines and can produce noisy diffs or fail strict checks. Add a final newline to the file to avoid these issues.
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
if (cbs.size === 0) {
  subscriptions.delete(channel);
  subscribedChannels.delete(channel);
  if (subscriberClient) {
    void subscriberClient.unsubscribe(channel);
  }
}
Unsubscribe can desync state
In the cleanup function, subscribedChannels.delete(channel) happens before unsubscribe(channel) completes (and unsubscribe is not awaited). If the unsubscribe fails or is delayed while another subscribeCacheInvalidation() races in, the in-memory subscribedChannels can become out-of-sync with the actual Redis subscription state, leading to missed resubscribeAll() behavior or duplicate subscribe attempts. Consider serializing subscribe/unsubscribe mutations (or making subscriptions the single source of truth and deriving desired channels from it).
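The second suggestion (making `subscriptions` the single source of truth and deriving the desired channel set from it) can be sketched as follows; the helper names are illustrative, not the module's API:

```typescript
// Sketch: subscriptions is the single source of truth. The desired channel
// set is derived on demand instead of mutating a separate subscribedChannels
// set before the unsubscribe command settles.
const subscriptions = new Map<string, Set<() => void>>();

function desiredChannels(): string[] {
  // Only channels that still have at least one callback should be subscribed.
  return [...subscriptions.entries()]
    .filter(([, cbs]) => cbs.size > 0)
    .map(([channel]) => channel);
}

function addCallback(channel: string, cb: () => void): void {
  let cbs = subscriptions.get(channel);
  if (!cbs) {
    cbs = new Set();
    subscriptions.set(channel, cbs);
  }
  cbs.add(cb);
}

function removeCallback(channel: string, cb: () => void): void {
  const cbs = subscriptions.get(channel);
  if (!cbs) return;
  cbs.delete(cb);
  if (cbs.size === 0) subscriptions.delete(channel);
}

const noop = (): void => {};
addCallback("error_rules", noop);
addCallback("sensitive_words", noop);
removeCallback("error_rules", noop);
console.log(desiredChannels().join(",")); // sensitive_words
```

A reconcile step would then diff `desiredChannels()` against what Redis is actually subscribed to and issue the subscribe/unsubscribe commands, so in-memory state can never race the commands themselves.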
} else {
  logger.warn("applyGeminiGoogleSearchOverrideWithAudit: unknown preference value", {
    preference,
    providerId: provider.id,
  });
  return { request, audit: null };
}
Bad preference loses audit
When geminiGoogleSearchPreference contains an unexpected persisted value, this path logs a warning but returns { audit: null } and leaves the request unchanged. That makes misconfigurations hard to diagnose because downstream logs/audits won’t show that the override was skipped due to invalid data. Prefer emitting an audit event (or coercing to inherit) so operators can spot and remediate bad stored values.
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
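A hedged sketch of the coerce-plus-audit idea; the `GeminiSearchAudit` shape and the `resolvePreference` name are illustrative, not the project's actual API:

```typescript
type Preference = "force_on" | "force_off" | "inherit";

// Hypothetical audit shape: even an invalid persisted value leaves a trace.
interface GeminiSearchAudit {
  action: "applied" | "skipped_invalid_preference";
  rawPreference: string;
}

function resolvePreference(raw: string): { preference: Preference; audit: GeminiSearchAudit } {
  if (raw === "force_on" || raw === "force_off" || raw === "inherit") {
    return { preference: raw, audit: { action: "applied", rawPreference: raw } };
  }
  // Coerce to "inherit" but record the bad value so operators can spot it.
  return {
    preference: "inherit",
    audit: { action: "skipped_invalid_preference", rawPreference: raw },
  };
}

console.log(resolvePreference("banana").audit.action); // skipped_invalid_preference
```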
if (endpointCandidates.length === 0) {
  if (shouldEnforceStrictEndpointPool) {
    logger.warn(
      "ProxyForwarder: Strict endpoint policy blocked legacy provider.url fallback",
      {
        providerId: currentProvider.id,
        vendorId: providerVendorId,
        providerType: currentProvider.providerType,
        requestPath,
        reason: "strict_blocked_legacy_fallback",
        strictBlockCause: endpointSelectionError
          ? "selector_error"
          : "no_endpoint_candidates",
        selectorError: endpointSelectionError?.message,
      }
    );
    failedProviderIds.push(currentProvider.id);
    attemptCount = maxAttemptsPerProvider;
  } else {
    endpointCandidates.push({ endpointId: null, baseUrl: currentProvider.url });
  }
}
**Strict branch still falls through**

In the strict-endpoint-policy case (`shouldEnforceStrictEndpointPool` and `endpointCandidates.length === 0`), the code sets `attemptCount = maxAttemptsPerProvider` but then continues executing the provider attempt logic. This leaves `endpointCandidates` empty and later accesses `activeEndpoint.endpointId`/`baseUrl`, which will crash. The strict branch should `continue` to the next provider (or otherwise short-circuit) after marking this provider as failed.

This also appears in the vendor-type circuit-open branch, where `attemptCount` is set to `maxAttemptsPerProvider` but execution still continues with the normal attempt-loop setup.
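A minimal sketch of the suggested control flow, reduced to a self-contained function. The names (`selectCandidates`, `loadCandidates`, `strict`) are assumptions mirroring the review comment, not the actual forwarder code; the point is the `continue` that prevents the attempt loop from running with an empty candidate list.

```typescript
type Provider = { id: string; url: string };
type EndpointCandidate = { endpointId: string | null; baseUrl: string };

// Hypothetical sketch: under the strict endpoint policy, a provider with no
// endpoint candidates is marked failed and skipped entirely, so later code
// never dereferences a nonexistent activeEndpoint.
function selectCandidates(
  providers: Provider[],
  loadCandidates: (p: Provider) => EndpointCandidate[],
  strict: boolean,
  failedProviderIds: string[]
): EndpointCandidate[][] {
  const selected: EndpointCandidate[][] = [];
  for (const provider of providers) {
    const candidates = loadCandidates(provider);
    if (candidates.length === 0) {
      if (strict) {
        failedProviderIds.push(provider.id);
        continue; // short-circuit: do not fall through to the attempt loop
      }
      // Legacy fallback only when the strict policy is off.
      candidates.push({ endpointId: null, baseUrl: provider.url });
    }
    // The normal attempt loop would run here; candidates is guaranteed non-empty.
    selected.push(candidates);
  }
  return selected;
}
```

The same pattern (mark failed, then `continue`) would apply to the vendor-type circuit-open branch mentioned above.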
🧪 Test Results
Overall result: ✅ All tests passed
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/unit/settings/providers/provider-endpoint-hover.test.tsx`:
- Around line 37-58: The test's loadMessages function currently hardcodes
endpointStatus strings (including an incorrect "View Details" without the
{count} placeholder) causing mismatch with the app messages; update loadMessages
to reuse the real translations by assigning endpointStatus =
enMessages.settings.providers.endpointStatus (or spread that object into
settings.providers) so viewDetails and other keys come from enMessages and
include the correct "{count}" placeholder used by the component when calling
t("endpointStatus.viewDetails", { count }).
🧹 Nitpick comments (2)

src/lib/auth-session-storage.node.ts (1)

6-9: Add a comment at the double assertion explaining why it is necessary.
`AsyncLocalStorage<ScopedAuthContext>` and the `AuthSessionStorage` interface are structurally compatible at runtime (both expose matching `run` and `getStore` method signatures), but TypeScript cannot infer this structural compatibility directly, so `as unknown as` is needed to bridge the types. Consider adding a short comment here (e.g. `// AsyncLocalStorage is structurally compatible with AuthSessionStorage, but TS cannot infer it`) so future maintainers understand why the assertion exists. Confirmed that the type declaration for `globalThis.__cchAuthSessionStorage` is already correctly defined in the global type augmentation in `src/lib/auth.ts`.

tests/unit/settings/providers/provider-endpoint-hover.test.tsx (1)

227-235: The ordering assertion relies on the internal CSS class selector `span.truncate`, which is fragile. If the `EndpointRow` component is refactored internally (e.g. its class names or DOM structure change), the selector will silently return an empty array and the `toEqual([...])` assertion will fail with an unhelpful error message. Consider adding a `data-testid` attribute to the endpoint rows, or at least asserting `labels.length > 0` before the equality check.

Suggestion: add a length assertion as a precondition:

```diff
 const labels = Array.from(
   document.querySelectorAll("[data-testid='tooltip-content'] span.truncate")
 ).map((el) => el.textContent);
+expect(labels).toHaveLength(3);
 expect(labels).toEqual([
   "https://api.anthropic.com/v1",
   "https://api.anthropic.com/v3",
   "https://api.anthropic.com/v2",
 ]);
```
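The double-assertion nitpick above can be illustrated with a minimal sketch. The `AuthSessionStorage` interface shape and the `ScopedAuthContext` type here are assumptions; in the reviewed code the shapes reportedly differ enough that TypeScript cannot bridge them without `as unknown as`, which is exactly why the nitpick asks for an explanatory comment at the assertion site.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Assumed shapes for illustration only.
type ScopedAuthContext = { userId: string };

interface AuthSessionStorage {
  run<R>(store: ScopedAuthContext, fn: () => R): R;
  getStore(): ScopedAuthContext | undefined;
}

// AsyncLocalStorage<ScopedAuthContext> is structurally compatible with
// AuthSessionStorage at runtime (matching run/getStore signatures), but TS
// cannot always infer this, hence the commented double assertion.
const authSessionStorage =
  new AsyncLocalStorage<ScopedAuthContext>() as unknown as AuthSessionStorage;
```

A quick usage check: inside `run()`, `getStore()` returns the scoped context; outside, it returns `undefined`, which is the property the auth-session propagation fix depends on.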
```typescript
function loadMessages() {
  const endpointStatus = {
    viewDetails: "View Details",
    activeEndpoints: "Active Endpoints",
    noEndpoints: "No Endpoints",
    healthy: "Healthy",
    unhealthy: "Unhealthy",
    unknown: "Unknown",
    circuitOpen: "Circuit Open",
    circuitHalfOpen: "Circuit Half-Open",
  };

  return {
    settings: {
      ...enMessages.settings,
      providers: {
        ...(enMessages.settings.providers || {}),
        endpointStatus,
      },
    },
  };
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Find messages directory and check for en.json
fd -t f "en\.json" --exec cat {} \; 2>/dev/null | head -200
```

Repository: ding113/claude-code-hub
Length of output: 4528

🏁 Script executed:

```shell
#!/bin/bash
# Search for en.json more broadly and inspect structure
find . -name "en.json" -o -name "*en*" -type f | grep -i messages | head -20
```

Length of output: 1026

🏁 Script executed:

```shell
#!/bin/bash
# Check the test file to understand the structure
cat tests/unit/settings/providers/provider-endpoint-hover.test.tsx | head -50
```

Length of output: 1753

🏁 Script executed:

```shell
#!/bin/bash
# Find how messages are exported
find messages -type f \( -name "index.*" -o -name "*.json" \) | sort
```

Length of output: 13544

🏁 Script executed:

```shell
#!/bin/bash
# Search for endpointStatus usage in the codebase
rg "endpointStatus" --type ts --type tsx --type json -A 2 -B 2
```

Length of output: 94

🏁 Script executed:

```shell
#!/bin/bash
# Check the actual messages/en structure
ls -la messages/en/ 2>/dev/null && echo "---" && cat messages/en/*.json | head -100
```

Length of output: 4454

🏁 Script executed:

```shell
#!/bin/bash
# Search for endpointStatus in JSON files
rg "endpointStatus" --type json -A 5 -B 2
```

Length of output: 3207

🏁 Script executed:

```shell
#!/bin/bash
# Check the messages/en/settings/providers structure
find messages/en/settings/providers -type f -name "*.json" | xargs -I {} bash -c 'echo "=== {} ===" && cat {} | jq . 2>/dev/null || cat {}'
```

Length of output: 50379

🏁 Script executed:

```shell
#!/bin/bash
# Check how the component uses the messages
cat src/app/\[locale\]/settings/providers/_components/provider-endpoint-hover.tsx 2>/dev/null | head -100
```

Length of output: 3874

🏁 Script executed:

```shell
#!/bin/bash
# Check exact viewDetails string in messages file
cat messages/en/settings/providers/strings.json | jq '.endpointStatus.viewDetails'
```

Length of output: 91

🏁 Script executed:

```shell
#!/bin/bash
# Check how the component uses the viewDetails translation
rg "endpointStatus.viewDetails" src/
```

Length of output: 210
Reuse the `endpointStatus` translations from enMessages directly, and fix the missing `{count}` placeholder in `viewDetails`.
The `endpointStatus` definitions already exist in `messages/en/settings/providers/strings.json`; hardcoding these strings in the test creates a maintenance burden. More importantly, the test mock sets `viewDetails` to "View Details", while the actual message file contains "View Details ({count})", and the component interpolates a count via `t("endpointStatus.viewDetails", { count })`. This mismatch lets the test messages drift out of sync with the component's real translations.
Suggestion: reuse `enMessages.settings.providers.endpointStatus` directly so the test messages stay consistent with production.
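A minimal sketch of the suggested `loadMessages` fix: reuse the real `endpointStatus` translations instead of hardcoding them. Here `enMessages` is stubbed inline so the example is self-contained; in the test it would be the imported English message bundle, and the key point is that `viewDetails` keeps its `{count}` placeholder.

```typescript
// Stub standing in for the imported English message bundle (assumption:
// the real bundle comes from messages/en and includes the {count} placeholder).
const enMessages = {
  settings: {
    providers: {
      endpointStatus: {
        viewDetails: "View Details ({count})",
        activeEndpoints: "Active Endpoints",
        noEndpoints: "No Endpoints",
      },
    },
  },
};

function loadMessages() {
  return {
    settings: {
      ...enMessages.settings,
      providers: {
        ...(enMessages.settings.providers || {}),
        // Reuse the production translations so the mock cannot drift
        // (e.g. dropping the {count} placeholder in viewDetails).
        endpointStatus: enMessages.settings.providers.endpointStatus,
      },
    },
  };
}
```

With this, `t("endpointStatus.viewDetails", { count })` in the component resolves against the same string the app ships, placeholder included.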
Summary
Release v0.5.4 consolidates 13 merged PRs from the dev branch, delivering critical bug fixes, new features, and infrastructure improvements. This release focuses on provider endpoint management, session tracking stability, UI enhancements, and hot-reload capabilities.
Related Issues & PRs
Fixes:
Related:
Key Changes
Critical Fixes
Session Tracker Memory Leak (#718)
- `SESSION_TTL` (was hardcoded to 5 minutes)
- `EXPIRE` on hot write path

Provider Endpoint Race Conditions (#730)
- `syncProviderEndpointOnProviderEdit()` for atomic endpoint migration

Hot-Reload Cache Invalidation (#710)
- `globalThis` singleton pattern for cross-worker consistency

Authentication Session Propagation (#720)
- `getSession()` returning null in /api/actions endpoints

New Features
Gemini Google Search Control (#721)
- `inherit` (default), `enabled` (force inject), `disabled` (force remove)
- `gemini_google_search_preference` column

Vendor Endpoint Pool UI (#719)
Cost Column Toggle (#715)
Other Improvements
OpenAI Format Support (#716)
- `prompt_tokens`/`completion_tokens` extraction to usage metrics

Thinking Signature Error Detection
Dependency Updates
Database Migration
Migration `0062_aromatic_taskmaster.sql` adds:
Migration is backward compatible - the column is nullable, and existing providers are unaffected.
Breaking Changes
None. All changes are backward compatible:
Testing
Automated Tests
Pre-commit Checklist
- `bun run build` - Production build succeeds
- `bun run lint` - Biome check passes
- `bun run typecheck` - TypeScript check passes
- `bun run test` - All tests pass

Merged PRs (13 total)
Deployment Notes
- `AUTO_MIGRATE=true` (default)
- `SESSION_TTL` is set appropriately (default: 300 seconds)

Checklist
Description enhanced by Claude AI
Greptile Overview
Greptile Summary
- `providers.gemini_google_search_preference` and wires it through provider CRUD and proxy request overrides.

Confidence Score: 3/5
Important Files Changed
Sequence Diagram