Conversation
…ni v1beta

- Add OpenAI GPT-5.2 model configuration with reasoningEffort options
- Add GPT-5.2-small variant using medium reasoning effort
- Fix Gemini baseURL to use /v1beta endpoint
- Update i18n strings to reflect different baseURLs per provider

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
📝 Walkthrough

This PR updates the OpenCode configuration documentation and examples. Across the message files for all five languages, it clarifies the provider-to-baseURL mapping (cchClaude/openai use v1, cchGemini uses v1beta) and updates the GPT-5.2 and Gemini configurations with new model definitions and reasoning parameters.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks: 2 passed, 1 failed (warning)
Summary of Changes

Hello @ding113, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request focuses on enhancing the documentation and configuration examples for AI models within the application. It introduces new OpenAI GPT-5.2 model configurations, including a 'small' variant with specific reasoning efforts, and rectifies the base URL for the Gemini API to align with its '/v1beta' endpoint. Additionally, it ensures that these updates are accurately reflected in the internationalized usage documentation across multiple languages, improving clarity and correctness for users interacting with various AI providers.
Code Review
This pull request successfully updates the OpenCode configuration example to include the new GPT-5.2 models and corrects the Gemini baseURL. The corresponding internationalization strings and tests have also been updated. My review includes a critical fix for a test assertion that appears to be incorrect. Additionally, I've provided several suggestions to enhance the clarity of the configuration example and documentation text, which should improve the user experience.
    expect(text).toContain('"npm": "@ai-sdk/anthropic"');
    expect(text).toContain('"npm": "@ai-sdk/openai"');
    expect(text).toContain('"npm": "@ai-sdk/google"');
    expect(text).not.toContain('"npm": "@ai-sdk/openai"');
This test assertion appears to be incorrect. The UsageDocContent component generates a configuration that includes "npm": "@ai-sdk/openai" for the cchGPT provider. Therefore, this not.toContain assertion will fail. It seems the original toContain was correct.
Suggested change:

    - expect(text).not.toContain('"npm": "@ai-sdk/openai"');
    + expect(text).toContain('"npm": "@ai-sdk/openai"');
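To illustrate why the assertion flips, here is a self-contained Vitest sketch that builds a provider config the way the review says page.tsx does; the simplified config object below is an assumption for demonstration, not the repository's actual code:

```ts
import { describe, expect, it } from "vitest";

describe("serialized OpenCode config", () => {
  it("contains @ai-sdk/openai when the cchGPT provider declares it", () => {
    // Simplified stand-in for the config object built in page.tsx (assumption).
    const config = {
      provider: {
        cchClaude: { npm: "@ai-sdk/anthropic" },
        cchGPT: { npm: "@ai-sdk/openai" },
        cchGemini: { npm: "@ai-sdk/google" },
      },
    };
    const text = JSON.stringify(config, null, 2);
    expect(text).toContain('"npm": "@ai-sdk/openai"'); // passes
    // expect(text).not.toContain('"npm": "@ai-sdk/openai"'); // would fail against this config
  });
});
```

As long as the cchGPT provider keeps its npm entry, only the `toContain` form of the assertion can pass.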
| "Create an API key in the cch console and set the CCH_API_KEY environment variable", | ||
| "All providers use ${resolvedOrigin}/v1 as baseURL (cch v1 API base URL)", | ||
| "When selecting models, use provider_id/model_id (e.g. cchClaude/claude-sonnet-4-5-20250929)" | ||
| "cchClaude/openai use ${resolvedOrigin}/v1; cchGemini uses ${resolvedOrigin}/v1beta", |
The phrasing cchClaude/openai use... could be slightly confusing, as it might be misinterpreted as a single provider named cchClaude/openai. To improve clarity, consider rephrasing to explicitly state that these are two separate providers.
| "cchClaude/openai use ${resolvedOrigin}/v1; cchGemini uses ${resolvedOrigin}/v1beta", | |
| "cchClaude and openai use ${resolvedOrigin}/v1; cchGemini uses ${resolvedOrigin}/v1beta", |
| "cch の管理画面で API Key を作成し、環境変数 CCH_API_KEY を設定してください", | ||
| "3 つの provider すべてで baseURL は ${resolvedOrigin}/v1(cch の v1 API ベース URL)", | ||
| "モデル選択は provider_id/model_id 形式(例:cchClaude/claude-sonnet-4-5-20250929)" | ||
| "cchClaude/openai は ${resolvedOrigin}/v1、cchGemini は ${resolvedOrigin}/v1beta を baseURL に使用します", |
The phrasing cchClaude/openai は... could be slightly confusing, as it might be misinterpreted as a single provider named cchClaude/openai. To improve clarity, consider rephrasing to explicitly state that these are two separate providers.
| "cchClaude/openai は ${resolvedOrigin}/v1、cchGemini は ${resolvedOrigin}/v1beta を baseURL に使用します", | |
| "cchClaude と openai は ${resolvedOrigin}/v1、cchGemini は ${resolvedOrigin}/v1beta を baseURL に使用します", |
| "Создайте API key в панели cch и задайте переменную окружения CCH_API_KEY", | ||
| "Все provider используют ${resolvedOrigin}/v1 как baseURL (базовый URL cch v1 API)", | ||
| "При выборе модели используйте provider_id/model_id (например, cchClaude/claude-sonnet-4-5-20250929)" | ||
| "cchClaude/openai используют ${resolvedOrigin}/v1; cchGemini использует ${resolvedOrigin}/v1beta", |
The phrasing cchClaude/openai используют... could be slightly confusing, as it might be misinterpreted as a single provider named cchClaude/openai. To improve clarity, consider rephrasing to explicitly state that these are two separate providers.
| "cchClaude/openai используют ${resolvedOrigin}/v1; cchGemini использует ${resolvedOrigin}/v1beta", | |
| "cchClaude и openai используют ${resolvedOrigin}/v1; cchGemini использует ${resolvedOrigin}/v1beta", |
| "请先在 cch 后台创建 API Key,并设置环境变量 CCH_API_KEY", | ||
| "三个 provider 的 baseURL 都使用 ${resolvedOrigin}/v1(cch v1 API 地址)", | ||
| "模型选择时使用 provider_id/model_id 格式(例如 cchClaude/claude-sonnet-4-5-20250929)" | ||
| "cchClaude/openai 使用 ${resolvedOrigin}/v1,cchGemini 使用 ${resolvedOrigin}/v1beta", |
The phrasing cchClaude/openai 使用... could be slightly confusing, as it might be misinterpreted as a single provider named cchClaude/openai. To improve clarity, consider rephrasing to explicitly state that these are two separate providers.
| "cchClaude/openai 使用 ${resolvedOrigin}/v1,cchGemini 使用 ${resolvedOrigin}/v1beta", | |
| "cchClaude 和 openai 使用 ${resolvedOrigin}/v1,cchGemini 使用 ${resolvedOrigin}/v1beta", |
| "請先在 cch 後台創建 API Key,並設置環境變量 CCH_API_KEY", | ||
| "三個 provider 的 baseURL 都使用 ${resolvedOrigin}/v1(cch v1 API 地址)", | ||
| "模型選擇時使用 provider_id/model_id 格式(例如 cchClaude/claude-sonnet-4-5-20250929)" | ||
| "cchClaude/openai 使用 ${resolvedOrigin}/v1,cchGemini 使用 ${resolvedOrigin}/v1beta", |
The phrasing cchClaude/openai 使用... could be slightly confusing, as it might be misinterpreted as a single provider named cchClaude/openai. To improve clarity, consider rephrasing to explicitly state that these are two separate providers.
| "cchClaude/openai 使用 ${resolvedOrigin}/v1,cchGemini 使用 ${resolvedOrigin}/v1beta", | |
| "cchClaude 和 openai 使用 ${resolvedOrigin}/v1,cchGemini 使用 ${resolvedOrigin}/v1beta", |
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @tests/unit/usage-doc/opencode-usage-doc.test.tsx:
- Line 73: The test in tests/unit/usage-doc/opencode-usage-doc.test.tsx asserts
the serialized page content does NOT contain '"npm": "@ai-sdk/openai"', but the
cchGPT provider in page.tsx (the cchGPT configuration at/near the cchGPT
provider object) includes npm: "@ai-sdk/openai", so either remove that npm entry
from the cchGPT provider configuration or update the test assertion to expect
the string to be present (change expect(text).not.toContain('"npm":
"@ai-sdk/openai"') to expect(text).toContain('"npm": "@ai-sdk/openai"')). Locate
the cchGPT provider object in page.tsx and the failing assertion in
opencode-usage-doc.test.tsx and make the change so implementation and test
match.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (7)
- messages/en/usage.json
- messages/ja/usage.json
- messages/ru/usage.json
- messages/zh-CN/usage.json
- messages/zh-TW/usage.json
- src/app/[locale]/usage-doc/page.tsx
- tests/unit/usage-doc/opencode-usage-doc.test.tsx
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{js,ts,tsx,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Never use emoji characters in any code, comments, or string literals
Files:
- src/app/[locale]/usage-doc/page.tsx
- tests/unit/usage-doc/opencode-usage-doc.test.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js,jsx}: All user-facing strings must use i18n (5 languages supported: zh-CN, zh-TW, en, ja, ru). Never hardcode display text
Use path alias @/ to reference files in the ./src/ directory
Format code with Biome using: double quotes, trailing commas, 2-space indent, 100 character line width
Files:
- src/app/[locale]/usage-doc/page.tsx
- tests/unit/usage-doc/opencode-usage-doc.test.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Prefer named exports over default exports
Files:
- src/app/[locale]/usage-doc/page.tsx
- tests/unit/usage-doc/opencode-usage-doc.test.tsx
**/*.test.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
All new features must have unit test coverage of at least 80%
Files:
tests/unit/usage-doc/opencode-usage-doc.test.tsx
tests/**/*.test.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use Vitest for unit testing and happy-dom for DOM testing
Files:
tests/unit/usage-doc/opencode-usage-doc.test.tsx
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: pr-description
- GitHub Check: 🌐 API Tests
- GitHub Check: pr-review
- GitHub Check: check-codex-status
- GitHub Check: Docker Build Test
- GitHub Check: dev-build-deploy
🔇 Additional comments (11)
messages/zh-TW/usage.json (1)
542-543: LGTM! The i18n update correctly reflects the PR goals:

- The baseURL is now distinguished per provider (cchClaude/openai use /v1, cchGemini uses /v1beta)
- The model selection example has been updated to the new provider_id/model_id format

messages/ja/usage.json (1)

546-547: LGTM! The Japanese localization update is consistent with the other language versions and correctly reflects the per-provider baseURL distinction and the new model selection format.

messages/zh-CN/usage.json (1)

542-543: LGTM! The Simplified Chinese i18n update is correct and consistent with the other language versions.

messages/ru/usage.json (1)

546-547: LGTM! The Russian localization update is consistent with the other language versions.

tests/unit/usage-doc/opencode-usage-doc.test.tsx (2)

80-89: LGTM! The GPT-5.2 configuration tests are thorough. The new assertions cover the key fields of the GPT-5.2 model configuration:

- the model and small_model fields
- both reasoningEffort levels (xhigh and medium)
- the store: false and setCacheKey: true options
- the reasoning.encrypted_content field

93-93: LGTM! The v1beta baseURL assertion correctly verifies that the Gemini provider uses the /v1beta endpoint, complementing the /v1 assertion at line 69 and ensuring each provider gets the correct baseURL.

src/app/[locale]/usage-doc/page.tsx (4)

1083-1084: LGTM! The new top-level model and small_model fields match the keys defined in the models section below; the configuration structure is correct.

1105-1106: LGTM! Adding store: false and setCacheKey: true to the cchGPT provider options is a reasonable configuration.

1128-1139: LGTM! Changing the cchGemini baseURL from /v1 to /v1beta is correct and matches the actual Google Gemini API endpoint path. The i18n strings have been updated accordingly.

1109-1125: The reasoningEffort values in the GPT-5.2 model configuration need no adjustment. According to OpenAI documentation, the valid values for GPT-5.2 include "none", "low", "medium", "high", and "xhigh"; both "xhigh" and "medium" as used in the code are valid. Likely an incorrect or invalid review comment. (A hedged sketch of this provider block follows the comment list below.)

messages/en/usage.json (1)

544-548: LGTM! All language files are updated correctly. The i18n strings now clearly state that different providers use different baseURLs:

- cchClaude/openai use /v1
- cchGemini uses /v1beta

The example has also been updated to include openai/gpt-5.2. All 5 language files (en, ja, ru, zh-CN, zh-TW) are updated consistently, and the Chinese translations provided are accurate and professional.
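As referenced above, here is a minimal sketch of what the cchGPT provider block described in this review might look like, written as a TypeScript object literal in the style of page.tsx. Only the field names mentioned in the review comments are grounded; the overall shape, the placeholder origin, and how per-model options nest are assumptions, not the repository's verbatim code.

```ts
// Illustrative sketch only; mirrors the fields named in the review, not the exact page.tsx code.
const resolvedOrigin = "https://cch.example.com"; // placeholder origin (assumption)

const cchGPT = {
  npm: "@ai-sdk/openai",
  name: "GPT via cch",
  options: {
    baseURL: `${resolvedOrigin}/v1`, // cchClaude/openai use /v1; cchGemini uses /v1beta
    store: false,
    setCacheKey: true,
    include: ["reasoning.encrypted_content"],
  },
  models: {
    "gpt-5.2": {
      options: { reasoningEffort: "xhigh" },
    },
    "gpt-5.2-small": {
      options: { reasoningEffort: "medium" },
    },
  },
};
```

The review only confirms which values appear (xhigh, medium, store, setCacheKey, reasoning.encrypted_content), not the exact schema in which they sit.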
    expect(text).toContain('"npm": "@ai-sdk/anthropic"');
    expect(text).toContain('"npm": "@ai-sdk/openai"');
    expect(text).toContain('"npm": "@ai-sdk/google"');
    expect(text).not.toContain('"npm": "@ai-sdk/openai"');
🧩 Analysis chain

🏁 Scripts executed against ding113/claude-code-hub:

    rg -n "@ai-sdk/openai" --type=ts --type=tsx -g '!node_modules/**'
    head -n 95 tests/unit/usage-doc/opencode-usage-doc.test.tsx | tail -n 30
    git log --oneline -n 20 -- tests/unit/usage-doc/opencode-usage-doc.test.tsx
    find . -name "page.tsx" -path "*/usage-doc/*" -type f
    rg -n "ai-sdk/openai" -t=ts -t=jsx -t=js -g '!node_modules' -g '!*.test.*'
    fd -e tsx -e ts "usage-doc" | grep -E "(page|config)" | head -20
    cat -n ./src/app/[locale]/usage-doc/page.tsx | head -200
    grep -n "openai\|@ai-sdk" ./src/app/[locale]/usage-doc/page.tsx
    sed -n '1080,1110p' ./src/app/[locale]/usage-doc/page.tsx
    sed -n '1095,1135p' ./src/app/[locale]/usage-doc/page.tsx
    git diff ./src/app/[locale]/usage-doc/page.tsx
    git status ./src/app/[locale]/usage-doc/page.tsx
    cat -n tests/unit/usage-doc/opencode-usage-doc.test.tsx
    sed -n '1050,1200p' ./src/app/[locale]/usage-doc/page.tsx | grep -n "CodeBlock\|config\|JSON\|stringify"
    grep -n "JSON.stringify\|<CodeBlock" ./src/app/[locale]/usage-doc/page.tsx | head -20
    sed -n '1070,1160p' ./src/app/[locale]/usage-doc/page.tsx
The @ai-sdk/openai entry in the config does not match the test

The assertion on line 73, expect(text).not.toContain('"npm": "@ai-sdk/openai"'), contradicts the implementation. The cchGPT provider configuration at line 1100 of page.tsx explicitly includes npm: "@ai-sdk/openai", and once that configuration is serialized with JSON.stringify() the output will contain the string "npm": "@ai-sdk/openai", so this assertion will fail.

The implementation and the test need to be aligned: either remove @ai-sdk/openai from the cchGPT provider, or change the test assertion to toContain.
🤖 Prompt for AI Agents
In @tests/unit/usage-doc/opencode-usage-doc.test.tsx at line 73, The test in
tests/unit/usage-doc/opencode-usage-doc.test.tsx asserts the serialized page
content does NOT contain '"npm": "@ai-sdk/openai"', but the cchGPT provider in
page.tsx (the cchGPT configuration at/near the cchGPT provider object) includes
npm: "@ai-sdk/openai", so either remove that npm entry from the cchGPT provider
configuration or update the test assertion to expect the string to be present
(change expect(text).not.toContain('"npm": "@ai-sdk/openai"') to
expect(text).toContain('"npm": "@ai-sdk/openai"')). Locate the cchGPT provider
object in page.tsx and the failing assertion in opencode-usage-doc.test.tsx and
make the change so implementation and test match.
| model: "openai/gpt-5.2", | ||
| small_model: "openai/gpt-5.2-small", |
The model references use openai/gpt-5.2 and openai/gpt-5.2-small, expecting a provider named openai. However, the GPT provider is named cchGPT at line 1099. This mismatch will cause OpenCode to fail when trying to resolve these models because it won't find a provider with ID openai.
These model references should either:

1. Be changed to cchGPT/gpt-5.2 and cchGPT/gpt-5.2-small to match the provider name, OR
2. The provider at line 1099 should be renamed from cchGPT to openai
Based on the i18n examples (which use openai/gpt-5.2) and the test expectations, option 2 (renaming provider to openai) appears to be the intended fix.
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/app/[locale]/usage-doc/page.tsx
Line: 1083:1084
Comment:
The model references use `openai/gpt-5.2` and `openai/gpt-5.2-small`, expecting a provider named `openai`. However, the GPT provider is named `cchGPT` at line 1099. This mismatch will cause OpenCode to fail when trying to resolve these models because it won't find a provider with ID `openai`.
These model references should either:
1. Be changed to `cchGPT/gpt-5.2` and `cchGPT/gpt-5.2-small` to match the provider name, OR
2. The provider at line 1099 should be renamed from `cchGPT` to `openai`
Based on the i18n examples (which use `openai/gpt-5.2`) and the test expectations, option 2 (renaming provider to `openai`) appears to be the intended fix.
How can I resolve this? If you propose a fix, please make it concise.
Additional Comments (1)
The provider is named `cchGPT` but the model references at lines 1083-1084 use `openai/gpt-5.2` format. This creates a critical mismatch because OpenCode expects `provider_id/model_id` format. If models are referenced as `openai/gpt-5.2`, the provider must be named `openai`, not `cchGPT`.

Additionally, the test explicitly expects that `"npm": "@ai-sdk/openai"` should NOT be present in the output (line 73 of tests/unit/usage-doc/opencode-usage-doc.test.tsx), but this provider definition includes it at line 1100, which will cause the test to fail.

The provider name should be changed from `cchGPT:` to `openai:` to match the model references.

Prompt To Fix With AI

This is a comment left during a code review.
Path: src/app/[locale]/usage-doc/page.tsx
Line: 1099:1099
Comment: see the comment above.
How can I resolve this? If you propose a fix, please make it concise.
Code Review Summary
This PR has 2 CRITICAL issues that will cause test failures and runtime errors. The code must not be merged until these are resolved.
PR Size: S
- Lines changed: 56 (43 additions, 13 deletions)
- Files changed: 7
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 2 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 1 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
Critical Issues (Must Fix)
1. Test-Code Mismatch at src/app/[locale]/usage-doc/page.tsx:1098
[TEST-MISSING-CRITICAL] The test file expects "npm": "@ai-sdk/openai" to NOT appear in output (line 72), but the code includes it at line 1098. This violates CLAUDE.md Rule #4: "bun run test - Run Vitest tests" must pass before committing.
Evidence:
- Test: expect(text).not.toContain('"npm": "@ai-sdk/openai"'); (opencode-usage-doc.test.tsx:72)
- Code: npm: "@ai-sdk/openai" (page.tsx:1098)
Fix: Either use "@ai-sdk/openai-compatible" or remove the npm field entirely.
2. Provider Name Mismatch at src/app/[locale]/usage-doc/page.tsx:1082-1083,1097
[LOGIC-BUG] The top-level model references use "openai/gpt-5.2" and "openai/gpt-5.2-small" (lines 1082-1083), but the provider is named cchGPT (line 1097), not openai. OpenCode's provider_id/model_id pattern requires these to match. This will cause model resolution failures at runtime.
Evidence:
- Model refs: model: "openai/gpt-5.2", small_model: "openai/gpt-5.2-small"
- Provider name: cchGPT: { ... }
- i18n documentation states: "openai/gpt-5.2 or cchClaude/..." (indicating "openai" should be the provider name)
Fix: Rename the provider from cchGPT to openai:
    provider: {
      // ... cchClaude config ...
      openai: { // Changed from cchGPT
        npm: "@ai-sdk/openai-compatible", // Or remove entirely
        name: "GPT via cch",
        options: { /* ... */ },
        models: { /* ... */ }
      },
      // ... cchGemini config ...
    }

Review Coverage
- Logic and correctness - 2 Critical bugs found
- Security (OWASP Top 10) - Clean
- Error handling - Clean
- Type safety - Clean
- Documentation accuracy - Inconsistent with code
- Test coverage - Test exists but contradicts code
- Code clarity - Good
Additional Notes
The Gemini v1beta endpoint change (line 1112) is correct. The i18n translations across all 5 languages are properly synchronized. The GPT-5.2 configuration structure with reasoningEffort options is well-formed. However, the provider naming inconsistency must be resolved before merge.
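To make the provider_id/model_id rule discussed above concrete, here is a hedged sketch of how the top-level references and a renamed provider key could line up, written as a TypeScript object literal in the style of page.tsx; everything beyond the field names quoted in the reviews is an assumption, not the repository's actual code.

```ts
// Sketch under the assumption that renaming the provider (option 2) is the chosen fix.
const openCodeConfig = {
  // Top-level references use provider_id/model_id and must point at an existing provider key.
  model: "openai/gpt-5.2",
  small_model: "openai/gpt-5.2-small",
  provider: {
    openai: {
      // Renamed from cchGPT so the references above resolve.
      name: "GPT via cch",
      options: {}, // baseURL, store, setCacheKey, include omitted for brevity
      models: {
        "gpt-5.2": {},
        "gpt-5.2-small": {},
      },
    },
    // cchClaude and cchGemini providers omitted for brevity.
  },
};
```

If option 1 were chosen instead, the top-level references would become cchGPT/gpt-5.2 and cchGPT/gpt-5.2-small while the provider key stays cchGPT.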
Automated review by Claude AI
- Identified open PR #598 (feat(codex): add session ID auto-completion for Codex requests).
- Applied PR size label: size/XL.
- Posted 2 inline review comments:
  - src/app/v1/_lib/codex/session-completer.ts:202 — CodexSessionIdCompleter isn't wired into any production path (only referenced by tests), so the feature won't run.
  - drizzle/0054_shiny_quentin_quire.sql:1 — DB flag enable_codex_session_id_completion is added but not represented/used in code (schema/types/runtime), making it ineffective and risking drift.
- Submitted the required PR review summary via gh pr review --comment.
* PR: fix Edge Runtime `process.once` build warning from the AsyncTaskManager import chain (#589)
* fix: skip async task manager init on edge
* fix: avoid static async task manager import
* test: cover edge runtime task scheduling
* chore: document edge runtime process.once fix
* chore: record edge runtime warning baseline
* fix: drop NEXT_PHASE and lazy-init async task manager
* test: isolate NEXT_RUNTIME in cloud price sync tests
* docs: stabilize edge process.once repro baseline
* docs: make rollback instructions hashless
* docs: add grep checklist for edge warning audit
* chore: run regression gate and align docs
* test: cover edge runtime guard on register
* Update src/lib/async-task-manager.ts: add the NEXT_PHASE === "phase-production-build" check (Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>)
* chore: format code (fix-edge-runtime-process-once-bee7e19)
* PR: split i18n settings and add a translation quality gate (#588)
* refactor(i18n): split settings json into smaller files
* refactor(i18n): load settings from split module
* refactor(i18n): remove legacy settings.json
* chore(i18n): update sync-settings-keys for split layout
* test(i18n): add split settings guards
* chore: align biome schema version
* chore(i18n): document messages loading contract
* chore(i18n): add settings split verification notes
* chore: format code (refactor-i18n-split-settings-3f48fec)
* chore: fix i18n request formatting
* chore: format code (refactor-i18n-split-settings-a1eff62)
* fix: replace settings placeholder translations
* chore: verify settings sync script is idempotent
* test: run i18n settings split guards
* test: add audit for zh-CN placeholder settings strings
* chore: apply biome formatting
* chore: document manual i18n settings verification
* fix: translate all providers filter in ja
* fix: translate all providers filter in zh-TW
* fix: translate providers section copy in zh-TW
* fix: translate providers section copy in ja
* feat: extend placeholder audit output
* feat: add allowlist for placeholder audit
* docs: define i18n translation quality rules
* chore: add i18n audit fail commands
* docs: add i18n PR checklist
* chore: format i18n audit tests
* fix: translate dashboard placeholders
* fix: translate myUsage placeholders
* fix: enforce locale-specific parentheses
* fix: start translating provider form strings
* fix: translate provider form strings
* fix: translate provider guide content
* test: add ja dashboard parentheses guard
* test: add zh-TW dashboard parentheses guard
* test: add zh-TW myUsage parentheses guard
* chore: translate ja provider form strings
* chore: translate zh-TW provider form strings
* chore: translate ja providers guide
* chore: translate zh-TW providers guide
* chore: refine zh-TW dashboard strings
* chore: translate ja providers strings
* chore: translate zh-TW providers strings
* chore: refine zh-TW api test strings
* chore: translate zh-TW settings small modules
* chore: translate ja settings small modules
* chore: clear i18n placeholders in settings
* chore: format code (refactor-i18n-split-settings-2437d19)
* test: fix biome formatting in i18n test
* chore: verify Biome lint gate (I18NE-030)
* chore: add messages emoji audit script (I18NE-010)
* fix: remove emoji from messages warnings (I18NE-040)
* test: add messages no-emoji audit gate (I18NE-050)
* docs: add zh-CN i18n docs (I18NE-020)
* docs: add messages no-emoji rule (I18NE-060)
* chore: run full regression checks (I18NE-070)
* docs: add i18n PR evidence template (I18NE-080)
* fix: make messages no-emoji audit path-sep safe
* docs: add bun alias for messages no-emoji audit
* fix: detect keycap and flag emoji sequences in i18n message audits
* fix(provider): allow removing custom whitelisted models (#592) (#593)
* fix(rectifier): detect 'signature: Field required' error and trigger rectifier (#594)
  - Extend detectThinkingSignatureRectifierTrigger to match 'signature: Field required'
  - Add Rule 72 for friendly error message when signature field is missing
  - Add comprehensive test cases for the new detection logic
  This fixes an issue where switching from non-Anthropic to Anthropic channels with thinking blocks missing signature fields would fail without proper handling.
* feat(users): increase provider group length to 200 (#591), close #590
* feat(usage-doc): update OpenCode config example with GPT-5.2 and Gemini v1beta (#597)
  - Add OpenAI GPT-5.2 model configuration with reasoningEffort options
  - Add GPT-5.2-small variant using medium reasoning effort
  - Fix Gemini baseURL to use /v1beta endpoint
  - Update i18n strings to reflect different baseURLs per provider
* feat: auto-complete Codex session identifiers (#599)
* fix: Codex session completion must not inject metadata (#601)
  * feat: auto-complete Codex session identifiers
  * fix: avoid Codex metadata injection

Co-authored-by: YangQing-Lin <56943790+YangQing-Lin@users.noreply.github.com>
Co-authored-by: Hwwwww-dev <47653238+Hwwwww-dev@users.noreply.github.com>
Summary
- reasoningEffort options
- /v1beta endpoint

Test plan
Greptile Overview
Greptile Summary
This PR updates the OpenCode configuration documentation to add GPT-5.2 model support with reasoning effort options and fix the Gemini baseURL to use the v1beta endpoint. The i18n strings across all 5 languages were updated to reflect that different providers use different baseURL endpoints.
Changes Made
- gpt-5.2 and gpt-5.2-small models with reasoningEffort settings ("xhigh" for the full model, "medium" for the small variant)
- store: false, setCacheKey: true, and include: ["reasoning.encrypted_content"] options
- Gemini baseURL changed from /v1 to /v1beta
The implementation has a critical logic error that will cause:
"npm": "@ai-sdk/openai"NOT to appear in output, but the code includes itopenai/gpt-5.2but provider is namedcchGPT, causing model resolution to failRoot cause: The GPT provider at line 1099 of
page.tsxis namedcchGPT, but the model references at lines 1083-1084 useopenai/gpt-5.2format. OpenCode'sprovider_id/model_idpattern requires these to match. The provider should be renamed fromcchGPTtoopenai.Architecture Fit
This change fits into the existing usage documentation pattern of providing complete configuration examples for CLI tools. The addition of GPT-5.2 reasoning models and the Gemini v1beta endpoint correction align with keeping documentation current with evolving API capabilities.
Confidence Score: 1/5
Important Files Changed
File Analysis
Provider is named cchGPT but the model references use the openai/ prefix, causing resolution failure and test failures (src/app/[locale]/usage-doc/page.tsx).

Sequence Diagram
    sequenceDiagram
        participant User
        participant Browser
        participant UsageDocPage
        participant UsageDocContent
        participant OpenCodeConfig
        participant i18n
        User->>Browser: Navigate to /usage-doc
        Browser->>UsageDocPage: Render page
        UsageDocPage->>i18n: Load translations (en/ja/ru/zh-CN/zh-TW)
        i18n-->>UsageDocPage: Return usage.json messages
        UsageDocPage->>UsageDocContent: Pass origin and translations
        UsageDocContent->>OpenCodeConfig: Build config JSON
        Note over OpenCodeConfig: provider: cchClaude (v1)<br/>provider: cchGPT (v1) ❌<br/>provider: cchGemini (v1beta)<br/>model: "openai/gpt-5.2" ❌<br/>small_model: "openai/gpt-5.2-small" ❌
        OpenCodeConfig-->>UsageDocContent: Config JSON string
        UsageDocContent->>Browser: Render documentation with config example
        Browser-->>User: Display usage documentation
        Note over OpenCodeConfig: ISSUE: Provider "cchGPT" should be "openai"<br/>to match model references