feat: add /v1/models endpoint returning an aggregated list of models available to the user #517
Conversation
Returns the models available to a user based on their provider permissions and client format:

- GET /v1/models - aggregated model list (format auto-detected)
- GET /v1/responses/models - codex type only
- GET /v1/chat/completions/models - openai-compatible type only
- GET /v1beta/models - Gemini format support

Model fetch strategy:

- Prefer the allowedModels list configured on the provider
- If not configured, query the upstream API in real time

Technical implementation:

- Add selectProviderByType() to decide the provider independently per type
- Extract extractApiKeyFromHeaders() for reuse outside the Guard flow
- Unify the upstream request configuration (UPSTREAM_CONFIGS) to remove duplicated code
- Support header/query format overrides (x-cch-api-type, api_type)
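For orientation, a minimal client-side sketch of calling the aggregated endpoint; the base URL, the Bearer auth scheme, and the override value are assumptions rather than details taken from this PR:

```ts
// Hypothetical client call against the new endpoint. The base URL, the
// Bearer auth scheme, and the override value are assumptions.
async function listAvailableModels(baseUrl: string, apiKey: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/v1/models`, {
    headers: {
      Authorization: `Bearer ${apiKey}`,
      // Optional override described above; the value shown is illustrative.
      "x-cch-api-type": "openai",
    },
  });
  if (!res.ok) {
    throw new Error(`Failed to list models: ${res.status}`);
  }
  // When the format resolves to OpenAI, the body is an OpenAI-style
  // { object: "list", data: [...] } payload.
  return res.json();
}
```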
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds the model-list feature and related routes: a server-side implementation for model aggregation and formatting, an API-key extraction utility, a per-type provider selector, and a CHANGELOG update. No existing business logic is removed; the changes are additions plus route and documentation updates.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Summary of Changes

Hello @NieiR, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a set of new …

Highlights
Code Review
This PR introduces a new /v1/models endpoint that aggregates the list of available models, which is a great feature. The implementation is well structured, in particular the use of UPSTREAM_CONFIGS to unify upstream API calls and of factory functions to create the handlers. The code is generally clear and follows the existing patterns.

My review focuses on a few key areas for improvement (see the sketch after this list):

- Performance: the model aggregation logic can be significantly optimized by running provider selection and model fetching in parallel.
- Robustness: upstream API responses could be parsed more defensively to prevent potential runtime errors caused by unexpected data shapes.
- Diagnostics: more detailed decision-context logging would help future debugging.

Overall this is a solid contribution. Addressing these points will make the new feature faster and more resilient.
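To make the performance and robustness suggestions concrete, here is a hedged sketch of parallel fetching with defensive parsing; fetchModelsForProvider, Provider, and ModelInfo are placeholder names, not the PR's actual symbols:

```ts
// Sketch only: parallel fetch with defensive parsing. The names below are placeholders.
interface Provider {
  id: string;
  name: string;
}

interface ModelInfo {
  id: string;
  ownedBy: string;
}

async function fetchModelsForProvider(provider: Provider): Promise<unknown> {
  // Placeholder: the real code would call the provider's upstream models API.
  return { data: [{ id: `model-from-${provider.name}` }] };
}

async function aggregateModels(providers: Provider[]): Promise<ModelInfo[]> {
  // Promise.allSettled lets one slow or failing upstream not block the others.
  const results = await Promise.allSettled(providers.map((p) => fetchModelsForProvider(p)));

  const models: ModelInfo[] = [];
  results.forEach((result, i) => {
    if (result.status !== "fulfilled") {
      console.warn(`models: upstream fetch failed for ${providers[i].name}`, result.reason);
      return;
    }
    // Defensive parsing: never assume the upstream payload shape.
    const data = (result.value as { data?: unknown })?.data;
    if (!Array.isArray(data)) return;
    for (const item of data) {
      if (item && typeof item === "object" && typeof (item as { id?: unknown }).id === "string") {
        models.push({ id: (item as { id: string }).id, ownedBy: providers[i].name });
      }
    }
  });
  return models;
}
```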
Additional Comments (1)
- src/app/v1/_lib/models/available-models.ts, line 166 (link)
  logic: API credentials exposed in the URL query string. Query parameters appear in server logs, proxy logs, and browser history. Move authentication to headers, using the x-goog-api-key header instead of a URL parameter.
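A sketch of the suggested fix, assuming the model list is fetched from the Generative Language API; the PR's actual request wiring may differ:

```ts
// Suggested pattern: send the Gemini key via the x-goog-api-key header
// rather than a ?key=... query parameter, so it never lands in URL logs.
async function fetchGeminiModels(apiKey: string): Promise<unknown> {
  const res = await fetch("https://generativelanguage.googleapis.com/v1beta/models", {
    headers: { "x-goog-api-key": apiKey },
  });
  if (!res.ok) {
    throw new Error(`Gemini models request failed: ${res.status}`);
  }
  return res.json();
}
```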
6 files reviewed, 1 comment
Actionable comments posted: 8
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (6)
- CHANGELOG.md
- src/app/v1/[...route]/route.ts
- src/app/v1/_lib/models/available-models.ts
- src/app/v1/_lib/proxy/auth-guard.ts
- src/app/v1/_lib/proxy/provider-selector.ts
- src/app/v1beta/[...route]/route.ts
🧰 Additional context used
🧬 Code graph analysis (4)
- src/app/v1/[...route]/route.ts (1)
  - src/app/v1/_lib/models/available-models.ts (3)
    - handleAvailableModels (466-501)
    - handleCodexModels (450-453)
    - handleOpenAICompatibleModels (458-461)
- src/app/v1/_lib/models/available-models.ts (3)
  - src/app/v1/_lib/proxy/auth-guard.ts (1)
    - extractApiKeyFromHeaders (205-232)
  - src/app/v1/_lib/proxy/format-mapper.ts (1)
    - ClientFormat (29-29)
  - src/app/v1/_lib/proxy/provider-selector.ts (1)
    - ProxyProviderResolver (211-1124)
- src/app/v1/_lib/proxy/provider-selector.ts (3)
  - src/types/provider.ts (1)
    - Provider (20-113)
  - src/types/message.ts (1)
    - ProviderChainItem (8-172)
  - src/repository/provider.ts (1)
    - findAllProviders (179-237)
- src/app/v1beta/[...route]/route.ts (1)
  - src/app/v1/_lib/models/available-models.ts (1)
    - handleAvailableModels (466-501)
🪛 LanguageTool
CHANGELOG.md
[uncategorized] ~18-~18: Did you mean "不全" ("incomplete")? (The flagged word is 补全, "back-fill", in the Chinese changelog.)
Context: ...ithub.com/Privnode-HQ) - Session 详情记录增强,补全请求和响应的完整 payload (#495) - 排行榜输出速率计算优化,修...
(BU)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: pr-review
- GitHub Check: Greptile Review
- GitHub Check: pr-description
- GitHub Check: 🌐 API Tests
- GitHub Check: pr-label
- GitHub Check: Docker Build Test
🔇 Additional comments (8)
src/app/v1/_lib/proxy/provider-selector.ts (2)
1020-1027: Group filtering behavior is inconsistent with pickRandomProvider. When no provider in the user's group matches, the pickRandomProvider method (lines 652-675) returns provider: null and logs an error, whereas this code silently falls back to all visible providers. This may make the group-isolation policy inconsistent. Please confirm whether this is intentional, or whether it should keep the same strict group isolation as pickRandomProvider.
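For comparison, a minimal sketch of the stricter behaviour the question refers to, mirroring pickRandomProvider's null-on-no-match contract; the Provider shape and the checkProviderGroupMatch stand-in below are illustrative, not the PR's code:

```ts
// Sketch: refuse to widen the candidate set when the user's group matches
// no provider; names and shapes here are illustrative.
interface Provider {
  id: string;
  name: string;
  groupTag?: string;
}

function checkProviderGroupMatch(provider: Provider, userGroup: string | null): boolean {
  // Stand-in for the helper exported by provider-selector.ts.
  return userGroup === null || provider.groupTag === userGroup;
}

function selectWithinGroup(visible: Provider[], userGroup: string | null): Provider | null {
  const inGroup = visible.filter((p) => checkProviderGroupMatch(p, userGroup));
  if (inGroup.length === 0) {
    console.error("models: no provider matches the user's group; not falling back");
    return null; // strict isolation, mirroring pickRandomProvider
  }
  return inGroup[Math.floor(Math.random() * inGroup.length)];
}
```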
1126-1127: LGTM! Exporting the checkProviderGroupMatch helper makes it easy to reuse the group-matching logic from external modules such as available-models.ts.

src/app/v1beta/[...route]/route.ts (1)
15-16: LGTM! The new /models endpoint reuses the v1 handleAvailableModels handler, and the route is registered in the correct order (before the wildcard route).

CHANGELOG.md (1)
7-31: LGTM! The changelog fully records the new features, optimizations, and fixes in v0.3.40. The static-analysis hint about "补全" is a false positive; the wording is correct there.
src/app/v1/[...route]/route.ts (1)
32-36: LGTM! Four new model-list endpoints with a clear route structure (see the registration sketch below):

- /models - aggregates all available models
- /responses/models - codex type only
- /chat/completions/models and /chat/models - openai-compatible type only

The routes are registered in the correct order, before the wildcard route.
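For orientation only, a sketch of why the registration order matters; it assumes a Hono-style router and uses placeholder handlers, since the actual route.ts wiring is not shown here:

```ts
// Order matters: the specific model-list routes must be registered before the
// catch-all proxy route, or the wildcard would swallow them. Hono is an
// assumption about the underlying router; the handlers are placeholders.
import { Hono, type Context } from "hono";

const handleAvailableModels = (c: Context) => c.json({ object: "list", data: [] });
const handleCodexModels = (c: Context) => c.json({ object: "list", data: [] });
const handleOpenAICompatibleModels = (c: Context) => c.json({ object: "list", data: [] });
const proxyHandler = (c: Context) => c.text("proxied");

const app = new Hono().basePath("/v1");

app.get("/models", handleAvailableModels);
app.get("/responses/models", handleCodexModels);
app.get("/chat/completions/models", handleOpenAICompatibleModels);
app.get("/chat/models", handleOpenAICompatibleModels);

// The wildcard proxy route comes last.
app.all("/*", proxyHandler);
```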
src/app/v1/_lib/models/available-models.ts (3)
258-275: LGTM! The exhaustive-check pattern ensures every ClientFormat variant is handled, and the mapping logic is clear.
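A generic sketch of the exhaustive-check pattern being praised; the ClientFormat values and the mapping shown are hypothetical, not the project's actual union or table:

```ts
// Hypothetical union and mapping; the real ClientFormat lives in
// format-mapper.ts and the real mapping may differ.
type ClientFormat = "openai" | "claude" | "gemini";

function getProviderTypesForFormat(format: ClientFormat): string[] {
  switch (format) {
    case "openai":
      return ["openai-compatible", "codex"];
    case "claude":
      return ["claude"];
    case "gemini":
      return ["gemini"];
    default: {
      // Exhaustive check: adding a ClientFormat member without handling it
      // above turns this assignment into a compile error.
      const unreachable: never = format;
      return unreachable;
    }
  }
}
```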
450-461: LGTM! Using a factory function to create handlers with a fixed providerType keeps the code concise and easy to extend.
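A small sketch of that factory pattern; the handler signature (a Request in, a Response out) and the ProviderType values are assumptions:

```ts
// Sketch of the factory pattern: pin the provider type when creating a handler.
type ProviderType = "codex" | "openai-compatible";

type ModelsHandler = (req: Request) => Promise<Response>;

function createFixedTypeModelsHandler(providerType: ProviderType): ModelsHandler {
  return async (_req) => {
    // The real implementation would select a provider of this type and
    // fetch/format its model list; here we just echo the pinned type.
    return Response.json({ object: "list", data: [], providerType });
  };
}

const handleCodexModels = createFixedTypeModelsHandler("codex");
const handleOpenAICompatibleModels = createFixedTypeModelsHandler("openai-compatible");
```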
466-501: LGTM! The main handler is clearly structured, supports multiple response formats (OpenAI/Anthropic/Gemini), and the format-detection and override mechanism is well designed.
…tching

- Redact Gemini API key from debug logs to prevent credential exposure
- Use Promise.all for parallel model fetching to reduce latency
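A hedged sketch of the redaction idea from this commit; the logger call and field names are placeholders:

```ts
// Sketch: log only a short fingerprint of the key, never the raw credential.
function redactKey(key: string | undefined): string {
  if (!key) return "<none>";
  return key.length <= 8 ? "***" : `${key.slice(0, 4)}…***`;
}

function logUpstreamRequest(url: string, apiKey?: string): void {
  // The URL is logged without the key; the key itself is redacted.
  console.debug("models: upstream request", { url, apiKey: redactKey(apiKey) });
}
```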
Response to Claude AI Review

Thanks for the detailed review! Replies to the raised issues, one by one:

Critical Issues
1. [ERROR-SILENT] fetchModelsWithConfig catch block
2. [ERROR-SILENT] Empty list returned when all providers fail
3. [ERROR-NO-USER-FEEDBACK] catch block

High Priority
4. [TEST-MISSING-CRITICAL] Missing tests

Medium Priority
5. [TYPE-WEAK-INVARIANT] Comment encoding issue
Add unit tests for the model-list feature introduced in PR ding113#517, covering the following core logic:

- inferOwner: model owner inference (Anthropic/OpenAI/Google/DeepSeek/Alibaba)
- getProviderTypesForFormat: client-format to provider-type mapping
- formatOpenAIResponse: OpenAI-format response
- formatAnthropicResponse: Anthropic-format response
- formatGeminiResponse: Gemini-format response

23 test cases in total, all passing.

Related to ding113#517
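A minimal sketch of what one of these cases might look like with bun:test; the import path and the expected owner strings are assumptions based on the commit message, and inferOwner's real signature may differ:

```ts
import { describe, expect, it } from "bun:test";
// Hypothetical import path; the real helpers live in
// src/app/v1/_lib/models/available-models.ts and may not be exported this way.
import { inferOwner } from "@/app/v1/_lib/models/available-models";

describe("inferOwner", () => {
  it("maps well-known model id prefixes to their owners", () => {
    // Expected owners are assumptions based on the commit message above.
    expect(inferOwner("claude-sonnet-4-5")).toBe("anthropic");
    expect(inferOwner("gpt-4o")).toBe("openai");
    expect(inferOwner("gemini-2.0-flash")).toBe("google");
  });
});
```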
Summary

Adds the /v1/models family of endpoints, returning the list of models available to a user based on their provider permissions and client format.

New endpoints

- GET /v1/models
- GET /v1/responses/models
- GET /v1/chat/completions/models
- GET /v1beta/models

Problem

Clients typically need to query the available models through a /v1/models endpoint, but the system did not support this before. Users could not:

Related Issues:
Solution

Implements a complete /v1/models endpoint family, supporting multiple client formats and per-type provider selection.

Model fetch strategy

- Prefer the provider's configured allowedModels list
- If not configured, query the upstream API in real time

Difference from #482

- models-list format
- selectProviderByType() decides the provider independently per type
- /v1beta/models
- Format override via header/query (x-cch-api-type, api_type)

Technical implementation (see the config sketch below)

- selectProviderByType() decides the provider independently per provider type
- extractApiKeyFromHeaders() extracted for reuse outside the Guard flow
- Unified upstream request configuration (UPSTREAM_CONFIGS) to eliminate duplicated code
- Header/query format overrides supported (x-cch-api-type, api_type)
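A rough sketch of what a unified config table like UPSTREAM_CONFIGS could look like; the paths, header names, and type keys are illustrative, not lifted from the PR:

```ts
// Illustrative only: one place that describes how to list models per
// provider type, so each handler doesn't duplicate the request wiring.
interface UpstreamConfig {
  /** Path appended to the provider's base URL to list models. */
  modelsPath: string;
  /** Builds the auth headers from the provider's credential. */
  buildHeaders: (apiKey: string) => Record<string, string>;
}

const UPSTREAM_CONFIGS: Record<string, UpstreamConfig> = {
  "openai-compatible": {
    modelsPath: "/v1/models",
    buildHeaders: (key) => ({ Authorization: `Bearer ${key}` }),
  },
  gemini: {
    modelsPath: "/v1beta/models",
    buildHeaders: (key) => ({ "x-goog-api-key": key }),
  },
};

async function fetchUpstreamModels(type: string, baseUrl: string, apiKey: string): Promise<unknown> {
  const config = UPSTREAM_CONFIGS[type];
  if (!config) throw new Error(`No upstream config for provider type: ${type}`);
  const res = await fetch(`${baseUrl}${config.modelsPath}`, {
    headers: config.buildHeaders(apiKey),
  });
  return res.json();
}
```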
Changes

Core Changes

- src/app/v1/_lib/models/available-models.ts (new, 501 lines):
  - handleAvailableModels - aggregated model-list handling
  - handleCodexModels - returns codex-type models only
  - handleOpenAICompatibleModels - returns openai-compatible models only
  - UPSTREAM_CONFIGS - unified upstream API request configuration
- src/app/v1/_lib/proxy/provider-selector.ts (+128): selectProviderByType() - independent decision per providerType
- src/app/v1/_lib/proxy/auth-guard.ts (+37): extractApiKeyFromHeaders() - extracted as a standalone function for reuse

Supporting Changes

- src/app/v1/[...route]/route.ts (+11): register 4 new endpoints
- src/app/v1beta/[...route]/route.ts (+4): register the Gemini /v1beta/models endpoint
- CHANGELOG.md (+27): update the v0.3.40 changelog

Breaking Changes

None. Purely additive; 0 deletions.
Testing

Automated Tests

- bun run lint passes
- bun run typecheck passes

Manual Testing

- /v1/models auto-detects the format (OpenAI/Claude/Gemini)
- /v1/responses/models returns only codex models
- /v1/chat/completions/models returns only openai-compatible models
- /v1beta/models returns the Gemini format
- allowedModels configuration takes precedence

Checklist

- dev
- main

Description enhanced by Claude AI
Summary by CodeRabbit
Release Notes

- New features: /v1/models family of endpoints returning the models available to each user.
- Documentation: CHANGELOG updated.