Refine release workflow and logging #36
Conversation
* device/stealth: commit local changes in submodule
* Docs: fix test README package name (#18)
* docs: reference @openhax/codex in test README
* Delete spec/issue-11-docs-package.md
* Add fork-aware prompt_cache_key derivation (#19)
* Refactor: Eliminate code duplication and improve maintainability
  - Create shared clone utility (lib/utils/clone.ts) to eliminate 3+ duplicate implementations
  - Create InputItemUtils (lib/utils/input-item-utils.ts) for centralized text extraction
  - Centralize magic numbers in constants with SESSION_CONFIG, CONVERSATION_CONFIG, PERFORMANCE_CONFIG
  - Add ESLint cognitive complexity rules (max: 15) to prevent future issues
  - Refactor large functions to use shared utilities, reducing complexity
  - Update all modules to use centralized utilities and constants
  - Remove dead code and unused imports
  - All 123 tests pass, no regressions introduced
  - Code quality improved from B+ to A- with better maintainability
* testing!
* Address review: plugin config errors, compaction gating, cache key fallback
* Style and metrics cleanup for auth, cache, and sessions
* linting and formatting
* Finalize PR 20 review: shared clone utils and tests
* Fix CI workflow YAML syntax and quoting
* update lint rules
* allow lint warnings without masking errors
* separate linting and formatting CI
* separated formatting workflow from CI YAML and gave it permissions to edit workflow files
* opencode can respond to all PR comments
* Fix test/README.md documentation: update test counts and config file paths
  - Update stale test counts to reflect actual numbers:
    * auth.test.ts: 16 → 27 tests
    * config.test.ts: 13 → 16 tests
    * request-transformer.test.ts: 30 → 123 tests
    * logger.test.ts: 5 → 7 tests
    * response-handler.test.ts: unchanged at 10 tests
  - Fix broken configuration file paths:
    * config/minimal-opencode.json (was config/minimal-opencode.json)
    * config/full-opencode.json (was config/full-opencode.json)
  Both configuration files exist in the config/ directory at repository root.
* 0.1.0
* 0.2.0
* docs: update AGENTS.md for gpt-5.1-codex-max support
  - Update overview to reflect new gpt-5.1-codex-max model as default
  - Add note about xhigh reasoning effort exclusivity to gpt-5.1-codex-max
  - Document expanded model lineup matching Codex CLI
* chore: add v3.3.0 changelog entry for gpt-5.1-codex-max
  - Document new Codex Max support with xhigh reasoning
  - Note configuration changes and sample updates
  - Record automatic reasoning effort downgrade fix for compatibility
* docs: update README for gpt-5.1-codex-max integration
  - Add gpt-5.1-codex-max configuration with xhigh reasoning support
  - Update model count from 20 to 21 variants
  - Expand model comparison table with Codex Max as flagship default
  - Add note about xhigh reasoning exclusivity and auto-downgrade behavior
* config: add gpt-5.1-codex-max to full-opencode.json
  - Add flagship Codex Max model with 400k context and 128k output limits
  - Configure with medium reasoning effort as default
  - Include encrypted_content for stateless operation
  - Set store: false for ChatGPT backend compatibility
* config: update minimal-opencode.json default to gpt-5.1-codex-max
  - Change default model from gpt-5.1-codex to gpt-5.1-codex-max
  - Align minimal config with new flagship Codex Max model
  - Provide users with best-in-class default experience
* docs: update CONFIG_FIELDS.md for gpt-5.1-codex-max
  - Add gpt-5.1-codex-max example configuration
  - Document xhigh reasoning effort exclusivity and auto-clamping
  - Remove outdated duplicate key example section
  - Clean up reasoning effort notes with new xhigh behavior
* docs: add persistent logging note to TESTING.md
  - Document new per-request JSON logging and rolling log files
  - Note environment variables for enabling live console output
  - Help developers debug with comprehensive logging capabilities
* feat: implement persistent rolling logging in logger.ts
  - Add rolling log file under ~/.opencode/logs/codex-plugin/
  - Write structured JSON entries with timestamps for all log levels
  - Maintain per-request stage files for detailed debugging
  - Improve error handling and log forwarding to OpenCode app
  - Separate console logging controls from file logging
* feat: add gpt-5.1-codex-max support to request transformer
  - Add model normalization for all codex-max variants
  - Implement xhigh reasoning effort with auto-downgrade for non-max models
  - Add Codex Max specific reasoning effort validation and normalization
  - Ensure compatibility with existing model configurations
* types: add xhigh reasoning effort to TypeScript interfaces
  - Add xhigh to ConfigOptions.reasoningEffort union type
  - Add xhigh to ReasoningConfig.effort union type
  - Enable type-safe usage of extra high reasoning for gpt-5.1-codex-max
* test: add gpt-5.1-codex-max to test-all-models.sh
  - Add test case for new flagship Codex Max model
  - Verify medium reasoning effort with auto summary and medium verbosity
  - Ensure comprehensive testing coverage for all model variants
* test: fix codex-fetcher test headers mock
  - Add default Authorization header to createCodexHeaders mock
  - Prevent test failures due to missing required headers
  - Ensure consistent test environment across all test runs
* test: update logger tests for persistent rolling logging
  - Add tests for rolling log file functionality
  - Update test structure to handle module caching properly
  - Test console logging behavior with environment variables
  - Verify error handling for file write failures
  - Ensure appendFileSync is called for all log entries
* test: add appendFileSync mock to plugin-config tests
  - Add missing appendFileSync mock to prevent test failures
  - Ensure all file system operations are properly mocked
  - Maintain test isolation and consistency
* test: add appendFileSync mock to prompts-codex tests
  - Add appendFileSync mock to prevent test failures from logger changes
  - Clear all mocks properly in beforeEach setup
  - Ensure test isolation and consistency across test runs
* test: add comprehensive fs mocks to prompts-opencode-codex tests
  - Add existsSync, appendFileSync, writeFileSync, mkdirSync mocks
  - Clear all mocks in beforeEach for proper test isolation
  - Prevent test failures from logger persistent logging changes
  - Ensure consistent test environment across all test files
* test: add comprehensive gpt-5.1-codex-max test coverage
  - Add model normalization tests for all codex-max variants
  - Test xhigh reasoning effort behavior for codex-max vs other models
  - Verify reasoning effort downgrade logic (minimal/none → low, xhigh → high)
  - Add integration tests for transformRequestBody with xhigh reasoning
  - Ensure complete test coverage for new Codex Max functionality
* docs: add specification files for gpt-5.1-codex-max and persistent logging
  - Add comprehensive spec for Codex Max integration with xhigh reasoning
  - Document persistent logging requirements and implementation plan
  - Track requirements, references, and change logs for both features
* fix failing tests
* fix: implement cache isolation to resolve OAuth plugin conflicts
  Resolves Issue #25 - plugin fails with confusing errors if started with the other OAuth plugin's cache files.
  **Root Cause**: Both opencode-openai-codex-auth and @openhax/codex plugins used identical cache file names in ~/.opencode/cache/, causing conflicts when switching between plugins.
  **Solution**:
  1. **Cache Isolation** (lib/utils/cache-config.ts):
     - Added PLUGIN_PREFIX = "openhax-codex" for unique cache namespace
     - Updated cache files to use plugin-specific prefixes:
       - openhax-codex-instructions.md (was codex-instructions.md)
       - openhax-codex-opencode-prompt.txt (was opencode-codex.txt)
       - Corresponding metadata files with -meta.json suffix
  2. **Migration Logic** (lib/prompts/opencode-codex.ts):
     - migrateLegacyCache(): automatically detects and migrates old cache files
     - validateCacheFormat(): detects incompatible cache formats from other plugins
     - Enhanced error messages with actionable guidance for cache conflicts
  3. **Test Updates**:
     - Updated all test files to use new cache file names
     - All 123 tests passing ✅
  **User Experience**:
  - Seamless migration: users switching plugins get automatic cache migration
  - Clear error messages: when cache conflicts occur, users get actionable guidance
  - No data loss: existing cache content is preserved during migration
  Files modified:
  - lib/utils/cache-config.ts - cache isolation configuration
  - lib/prompts/opencode-codex.ts - migration and validation logic
  - test/prompts-opencode-codex.test.ts - updated cache file paths
  - test/prompts-codex.test.ts - updated cache file paths
  - spec/issue-25-oauth-cache-conflicts.md - implementation spec
* fixed minor type error
* test: remove redundant env reset and header mock
* Reduce console logging to debug flag
* fix: filter ENOENT errors from cache logging to reduce noise
  - Add ENOENT filtering in getOpenCodeCodexPrompt cache read error handling
  - Add ENOENT filtering in getCachedPromptPrefix error handling
  - Prevents noisy error logs for expected first-run scenarios
  - Preserves visibility into genuine I/O and parsing problems
  - Addresses CodeRabbit review feedback on PR #28
* Use openhax/codex as plugin identifier
* Add fallback sources for OpenCode codex prompt
* Refresh OpenCode prompt cache metadata
* Refactor metrics response helpers and fix JWT decoding
* Tighten session tail slice to user/assistant only
* Improve compaction resilience and cache metrics safety
* Add release process documentation and open issues triage guide
  - Add comprehensive RELEASE_PROCESS.md with step-by-step release workflow
  - Add open-issues-triage.md for systematic issue management
  - Both documents support better project governance and maintenance
* Strengthen tests for cache keys and gpt-5.1 cases
* Soften first-session cache warnings and sync transformed body
* Preseed session prompt cache keys
* Memoize config loading and keep bridge prompts stable
* Code cleanup: removed redundancy, improved tests
  Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
* Fixed test shallow-copy issue with deep copy
  Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
* chore(codex-max): memoize config loading, stabilize bridge prompts, and prep for Codex Max release; update request transforms and auth/session utilities; add lint-warnings spec
* stabilize oauth server tests by completing mocks
* fix: improve token refresh error handling and add debug logging
  - Add debug logging to token refresh process in session-manager
  - Improve error handling in codex-fetcher for 401 responses
  - Fix fetch helper error handling for failed token refresh
  - Add comprehensive test coverage for token refresh scenarios
  - Add refresh-access-token spec documentation
* Fix test to assert on returned auth object instead of in-place mutation
  - Update test/fetch-helpers.test.ts to properly validate refreshAndUpdateToken return value
  - Add type guard for OAuth auth type checking
  - Aligns test expectations with the function's design of returning an updated auth object
  - All 396 tests pass with no TypeScript errors
* test: add negative test for host-provided prompt_cache_key; fix: ensure explicit Content-Type headers in OAuth server responses
  - Add test to verify host-provided prompt_cache_key is preserved over session cache key
  - Update OAuth server send helper to always include default Content-Type: text/plain; charset=utf-8
  - Change headers parameter type to http.OutgoingHttpHeaders for stronger typing
  - Preserve existing HTML response Content-Type override behavior
* fix: clone auth refresh and tighten console logging
* chore: guard disk logging and strengthen clones/roles
* fix: simplify role validation in formatRole function
  - Remove hardcoded role whitelist in formatRole()
  - Return normalized role directly without validation
  - Add PR review documentation for CodeRabbit feedback
* Fix all CodeRabbit review issues from PR #29
  ## Critical Bug Fixes
  - Fix content-type header bug in fetch-helpers.ts - preserve original content-type for non-JSON responses
  - Fix cache fallback bug in codex.ts - wrap getLatestReleaseTag() in try/catch to ensure fallback chain works
  ## Test Improvements
  - Remove unused mocks in cache-warming.test.ts areCachesWarm tests
  - Fix mock leakage in index.test.ts by resetting sessionManager instance mocks
  - Add missing compactionDecision test case in codex-fetcher.test.ts
  - Remove redundant test case in codex-fetcher.test.ts
  ## Code Quality
  - Harden logger against JSON.stringify failures with try/catch fallback
  - Remove unused error parameter from logToConsole function
  - Update type signatures to match new function signatures
  ## Documentation
  - Add comprehensive PR analysis document in spec/pr-29-review-analysis.md
  All tests pass (398 passed, 2 skipped) with 82.73% coverage.
* 📝 Add docstrings to `release/review-comments`
  Docstrings generation was requested by @riatzukiza in #34 (comment). The following files were modified:
  - `lib/auth/server.ts`
  - `lib/logger.ts`
  - `lib/request/fetch-helpers.ts`
* Enhance compaction test coverage and fix linter warning
  ## Test Improvements
  - Enhance compaction decision test in codex-fetcher.test.ts to validate the full flow:
    - Verify recordSessionResponseFromHandledResponse is called with the compacted response
    - Verify the fetcher returns the compacted response with correct status/body
    - Ensure complete end-to-end compaction flow validation
  ## Code Quality
  - Fix linter warning in lib/auth/server.ts by prefixing unused parameter with underscore
  - Update corresponding type definition in lib/types.ts to match
  All tests continue to pass (398 passed, 2 skipped).
* Replace unsafe any cast with type-safe client access in logger
  ## Type Safety Improvements
  - Add OpencodeApp type with proper notify/toast method signatures
  - Add OpencodeClientWithApp intersection type for type-safe app access
  - Create isOpencodeClientWithApp type guard function
  - Replace (loggerClient as any)?.app with type-safe guarded access
  - Update emit function to use the type guard for loggerClient.app access
  ## Benefits
  - Eliminates unsafe any type casting
  - Provides compile-time type checking for app property access
  - Maintains backward compatibility with existing OpencodeClient interface
  - Follows TypeScript best practices for type guards
  All tests continue to pass (398 passed, 2 skipped).
* Fix type safety in logger module
  - Replace unsafe type casting with proper optional chaining
  - Update notifyToast to use correct Opencode SDK API structure
  - Use client.tui.showToast with proper body object format
  - Remove unnecessary type guard function
  - All tests pass and TypeScript compilation succeeds
* Clarify waitForCode state validation docs

---

Co-authored-by: opencode-agent[bot] <opencode-agent[bot]@users.noreply.github.com>
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary by CodeRabbit
Walkthrough

Adds toast-aware logging with wrapped messages and tests; refactors request transformation into modular utilities (prompt-cache, compaction, input-filters, tooling, model-config); implements deterministic prompt-cache keys, compaction helpers, and tool normalization; and hardens Codex prompt fetching with an ETag/304-aware fetch and resilient cache fallbacks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Host
    participant RequestTransformer
    participant InputFilters
    participant CompactionHelpers
    participant PromptCache
    participant Tooling
    participant FetchHelpers
    Host->>RequestTransformer: transformRequestBody(body, options)
    RequestTransformer->>InputFilters: filterOpenCodeSystemPrompts(body.input)
    InputFilters-->>RequestTransformer: filtered input
    RequestTransformer->>InputFilters: addCodexBridgeMessage(input, hasTools, session)
    InputFilters-->>RequestTransformer: input with bridge?
    RequestTransformer->>CompactionHelpers: applyCompactionIfNeeded(body, options.compaction)
    alt compaction triggered
        CompactionHelpers-->>RequestTransformer: compactionDecision + new input
    else no compaction
        CompactionHelpers-->>RequestTransformer: undefined
    end
    RequestTransformer->>PromptCache: ensurePromptCacheKey(body)
    PromptCache-->>RequestTransformer: cacheKeyResult
    RequestTransformer->>Tooling: normalizeToolsForCodexBody(body, skipTransforms)
    Tooling-->>RequestTransformer: tools normalized / cleaned
    RequestTransformer->>FetchHelpers: transformRequestForCodex(bodyWithCacheKey)
    FetchHelpers-->>Host: transformed request result
```

```mermaid
sequenceDiagram
    actor App
    participant Logger
    participant Toast as UI.Toast
    participant Console
    App->>Logger: logWarn(message)
    Logger->>Logger: detect supportsToast
    alt supports toast
        Logger->>Logger: wrapToastMessage(message)
        Logger->>Toast: showToast(wrappedMessage)
        Toast-->>Logger: ok
        Note over Console: console.warn & app.log NOT forwarded for warn
    else no toast
        Logger->>Console: console.warn(message)
        Logger->>Logger: forward to app.log
    end
```
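The warn-routing branch in the second diagram can be sketched as a small pure function. This is a minimal illustration of the routing rule only; the sink names (`showToast`, `consoleWarn`, `appLog`) are illustrative stand-ins, not the plugin's actual API.

```typescript
// Sketch of warn-level routing: when a TUI toast channel is available,
// the warning goes to the toast and skips console/app logging; otherwise
// it falls back to both. All names here are illustrative assumptions.
type Sink = (message: string) => void;

interface LogSinks {
  showToast?: Sink; // present only when a TUI client is attached
  consoleWarn: Sink;
  appLog: Sink;
}

function emitWarn(sinks: LogSinks, message: string): "toast" | "console" {
  if (sinks.showToast) {
    try {
      sinks.showToast(message);
      return "toast"; // consoleWarn and appLog are intentionally NOT called
    } catch {
      // If the UI path fails, fall through so the warning is never lost.
    }
  }
  sinks.consoleWarn(message);
  sinks.appLog(message);
  return "console";
}
```

The `try`/`catch` mirrors the review's observation that a failing toast path should still surface the warning somewhere visible.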
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning); ✅ Passed checks (2 passed)
📜 Recent review details
- Configuration used: Path: .coderabbit.yaml
- Review profile: ASSERTIVE
- Plan: Pro
- 📒 Files selected for processing (4)
- 🧰 Additional context used: 🧬 Code graph analysis (2): lib/prompts/codex.ts (5), lib/request/request-transformer.ts (3)
- 🔇 Additional comments (8)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/logger.ts (1)
26-39: Warn-level toast routing is correct; align `showToast` typing with usage

Functionally, the new emit flow looks good:

- All levels still hit the rolling log when logging/debug is enabled.
- When a TUI client is available, `warn` entries skip `app.log` and console, and instead route to `notifyToast`, matching the new tests.
- Errors always toast, and `notifyToast` falls back to `logToConsole("warn", ...)` on failures, so you still get visibility if the UI path blows up.
- `wrapToastMessage` behaves as expected for normal, space-delimited text and keeps lines within the intended width.

A couple of minor refinements to consider:

1. Type shape for `showToast`. `OpencodeClientWithTui` currently declares:

   ```ts
   showToast?: (args: { message: string; variant?: "success" | "error" | "warning" | "info" }) => void;
   ```

   but `notifyToast` calls:

   ```ts
   loggerClient.tui.showToast({ body: { title: "...", message: wrappedMessage, variant } });
   ```

   To keep TypeScript honest and in sync with the actual SDK, it would be safer to define the argument type in terms of the real client type, e.g. via `Parameters<OpencodeClient["tui"]["showToast"]>[0]` (if available) or by updating `OpencodeClientWithTui` to match the concrete `{ body: { ... } }` shape you're using.

2. Edge case: extremely long single tokens. `wrapToastMessage` splits on whitespace and wraps by word, which is perfect for normal sentences. For extraordinarily long, unbroken strings (no spaces), a single "word" can still exceed `maxWidth`. If you expect such payloads (e.g., long IDs, stack traces), you might want to add a fallback that hard-splits words longer than `maxWidth` into chunks.

Neither point is a blocker, but addressing (1) in particular will prevent subtle type drift between this helper and `@opencode-ai/sdk`.

Also applies to: 127-165, 179-224
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (10)

- `.github/workflows/staging-release-prep.yml` is excluded by none and included by none
- `eslint.config.mjs` is excluded by none and included by none
- `index.ts` is excluded by none and included by none
- `package-lock.json` is excluded by `!**/package-lock.json` and included by none
- `package.json` is excluded by none and included by none
- `spec/complexity-reduction.md` is excluded by none and included by none
- `spec/log-warning-toasts.md` is excluded by none and included by none
- `spec/release-pr-flow.md` is excluded by none and included by none
- `spec/request-transformer-refactor.md` is excluded by none and included by none
- `spec/toast-word-wrap.md` is excluded by none and included by none
📒 Files selected for processing (13)

- `lib/logger.ts` (5 hunks)
- `lib/prompts/codex.ts` (2 hunks)
- `lib/prompts/opencode-codex.ts` (2 hunks)
- `lib/request/compaction-helpers.ts` (1 hunks)
- `lib/request/fetch-helpers.ts` (6 hunks)
- `lib/request/input-filters.ts` (1 hunks)
- `lib/request/model-config.ts` (1 hunks)
- `lib/request/prompt-cache.ts` (1 hunks)
- `lib/request/request-transformer.ts` (2 hunks)
- `lib/request/tool-normalizer.ts` (1 hunks)
- `lib/request/tooling.ts` (1 hunks)
- `test/logger.test.ts` (2 hunks)
- `test/session-manager.test.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (12)

lib/request/tooling.ts (2)
- lib/types.ts (1): `RequestBody` (147-169)
- lib/request/tool-normalizer.ts (1): `normalizeToolsForResponses` (139-148)

lib/prompts/opencode-codex.ts (3)
- lib/logger.ts (2): `logError` (116-118), `logWarn` (112-114)
- lib/utils/cache-config.ts (2): `CACHE_TTL_MS` (11-11), `PLUGIN_PREFIX` (26-26)
- lib/cache/session-cache.ts (1): `openCodePromptCache` (75-75)

test/logger.test.ts (1)
- lib/logger.ts (3): `configureLogger` (62-83), `logWarn` (112-114), `flushRollingLogsForTest` (120-125)

lib/prompts/codex.ts (5)
- lib/types.ts (1): `CacheMetadata` (223-228)
- lib/utils/file-system-utils.ts (3): `safeReadFile` (56-62), `safeWriteFile` (42-48), `fileExistsAndNotEmpty` (69-76)
- lib/cache/session-cache.ts (2): `getCodexCacheKey` (83-85), `codexInstructionsCache` (74-74)
- lib/utils/cache-config.ts (1): `CACHE_TTL_MS` (11-11)
- lib/logger.ts (2): `logError` (116-118), `logWarn` (112-114)

test/session-manager.test.ts (1)
- lib/types.ts (1): `SessionContext` (199-205)

lib/request/compaction-helpers.ts (5)
- lib/utils/clone.ts (1): `cloneInputItems` (31-42)
- lib/compaction/compaction-executor.ts (1): `CompactionDecision` (5-15)
- lib/utils/input-item-utils.ts (1): `countConversationTurns` (116-118)
- lib/compaction/codex-compaction.ts (4): `approximateTokenCount` (26-35), `serializeConversation` (55-85), `buildCompactionPromptItems` (87-99), `collectSystemMessages` (101-106)
- lib/types.ts (1): `RequestBody` (147-169)

lib/logger.ts (1)
- lib/constants.ts (1): `PLUGIN_NAME` (7-7)

lib/request/input-filters.ts (7)
- lib/utils/input-item-utils.ts (1): `extractTextFromItem` (16-27)
- lib/prompts/opencode-codex.ts (1): `getOpenCodeCodexPrompt` (190-261)
- lib/prompts/codex-opencode-bridge.ts (1): `CODEX_OPENCODE_BRIDGE` (11-125)
- lib/types.ts (1): `SessionContext` (199-205)
- lib/cache/prompt-fingerprinting.ts (4): `hasBridgePromptInConversation` (27-46), `cacheBridgeDecision` (109-117), `getCachedBridgeDecision` (88-101), `generateContentHash` (15-17)
- lib/logger.ts (1): `logDebug` (104-106)
- lib/prompts/codex.ts (1): `TOOL_REMAP_MESSAGE` (232-294)

lib/request/prompt-cache.ts (2)
- lib/types.ts (1): `RequestBody` (147-169)
- lib/logger.ts (3): `logDebug` (104-106), `logInfo` (108-110), `logWarn` (112-114)

lib/request/model-config.ts (1)
- lib/types.ts (3): `UserConfig` (40-47), `ConfigOptions` (52-57), `ReasoningConfig` (62-65)

lib/request/request-transformer.ts (3)
- lib/request/compaction-helpers.ts (2): `CompactionOptions` (19-23), `applyCompactionIfNeeded` (82-106)
- lib/types.ts (1): `RequestBody` (147-169)
- lib/compaction/compaction-executor.ts (1): `CompactionDecision` (5-15)

lib/request/fetch-helpers.ts (6)
- lib/types.ts (3): `PluginConfig` (6-35), `RequestBody` (147-169), `SessionContext` (199-205)
- lib/session/session-manager.ts (1): `SessionManager` (174-522)
- lib/compaction/codex-compaction.ts (1): `detectCompactionCommand` (37-53)
- lib/logger.ts (1): `logRequest` (85-102)
- lib/constants.ts (1): `LOG_STAGES` (56-61)
- lib/request/request-transformer.ts (1): `transformRequestBody` (74-134)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Mutation Tests
🔇 Additional comments (8)

lib/prompts/codex.ts (1)

41-121: Caching helpers and GitHub fetch flow look consistent

The new metadata/session helpers and the `fetchInstructionsFromGithub`/`fetchInstructionsWithFallback` wiring are internally consistent: ETag handling, session cache hydration, and disk metadata updates all line up with the `CacheMetadata` shape and the `fileExistsAndNotEmpty` checks. I don't see correctness problems in these helpers.

lib/request/compaction-helpers.ts (1)
25-105: Compaction decision and body mutation logic look sound

The compaction flow is coherent:

- `maybeBuildCompactionPrompt` cleanly separates command-triggered vs auto-triggered compaction and guards on `settings.enabled`.
- Auto mode correctly checks both approximate tokens and a minimum turn count before firing.
- `applyCompactionIfNeeded` replaces `body.input` with the compaction prompt and removes `tools`/`tool_choice`/`parallel_tool_calls`, which is appropriate for a pure-text compaction request.

The extra `settings.enabled` check inside `maybeBuildCompactionPrompt` is slightly redundant given the guard in `applyCompactionIfNeeded`, but it's harmless and defensive. No blockers here.

test/session-manager.test.ts (1)
155-165: Metrics snapshot test rename is safe

Renaming `context` to `_context` here is purely cosmetic and still ensures `applyRequest` is called so the session manager has state before `getMetrics()`. No behavioral change and no issues.

test/logger.test.ts (1)
132-184: New warn→toast and wrapping tests give good coverage

These two tests nicely exercise the new behavior:

- With a TUI client, `logWarn` drives `tui.showToast` and avoids both `app.log` and `console.warn`.
- Long messages are wrapped into multiple lines with a hard cap at 72 characters per line, with no regressions to app/console logging.

This gives solid regression protection around the new toast routing and formatting.
lib/prompts/opencode-codex.ts (1)
87-181: Refactored OpenCode prompt caching and fetch fallbacks look solid

The new layering around `readCachedPrompt`, `cacheIsFresh`, and `fetchPromptFromUrl` is consistent and resilient:

- Disk meta/content reads are centralized with clear error handling.
- TTL checks use `lastChecked` as intended, and you validate cache shape to avoid mixing foreign plugin caches.
- The 304 path refreshes `lastChecked` and the session cache without touching the content, and the multi-URL fetch loop correctly falls back to cached content (updating meta) when both URLs fail.

Overall, this is a clear improvement in structure and robustness over inlined logic; I don't see correctness issues here.
Also applies to: 209-256
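The 304-and-fallback behavior described above can be captured in one pure decision function. This is a simplified sketch, not the plugin's real code: the cache shape and the rule that the real fetch sends `If-None-Match: <etag>` are assumptions drawn from the review text.

```typescript
// Given a response outcome, decide what to store: a 304 keeps the cached
// content and only bumps the freshness timestamp; a 2xx replaces content and
// ETag; any failure falls back to the cache when one exists. Illustrative only.
interface CachedPrompt {
  content: string;
  etag?: string;
  lastChecked: number;
}

function updateCacheFromResponse(
  cached: CachedPrompt | undefined,
  status: number,
  body: string | null,
  etag: string | null,
  now: number,
): CachedPrompt {
  if (status === 304 && cached) {
    return { ...cached, lastChecked: now }; // content untouched
  }
  if (status >= 200 && status < 300 && body !== null) {
    return { content: body, etag: etag ?? undefined, lastChecked: now };
  }
  if (cached) {
    return { ...cached, lastChecked: now }; // resilient fallback to cache
  }
  throw new Error(`prompt fetch failed with status ${status} and no cache`);
}
```

Keeping this logic pure makes the ETag flow trivially testable without mocking the network.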
lib/request/prompt-cache.ts (1)
6-203: No blocker; prompt cache key derivation correctly separates concerns.

Verification confirms the design is sound. The code intentionally coordinates between `ensurePromptCacheKey()` and `SessionManager`:

- `ensurePromptCacheKey()` generates or normalizes keys during stateless request transformation (e.g., `"cache_meta-conv-123"` or `"cache_<hash>"`).
- `SessionManager` derives its own keys (e.g., `"conv-123"` or `"conv-prefix-fork::prefix::..."`), explicitly marked "Keep in sync" per session-manager.ts:153–154.
- When active, `SessionManager.applyRequest()` overwrites `body.prompt_cache_key` with its own value, ensuring session-managed flows take precedence.
- Tests confirm both paths work: fallback keys match `/^cache_/`, and session keys use the expected format.
- Both systems respect existing host-provided keys and avoid redundant overwrites (fetch-helpers.ts).

The key formats are naturally distinct and the coordination is documented, so there is no hidden coupling.
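A deterministic fallback key of the `"cache_<hash>"` form can be derived by hashing stable request fields. This sketch is an assumption for illustration: the real module's choice of hashed fields and hash length are not specified in the review, only the `cache_` prefix and host-key precedence.

```typescript
import { createHash } from "node:crypto";

// Hypothetical fallback derivation: hash stable request parts so identical
// prompts map to the same key. Which fields are hashed is an assumption.
function deriveFallbackCacheKey(model: string, firstUserText: string): string {
  const digest = createHash("sha256")
    .update(model)
    .update("\u0000") // separator so ("ab","c") !== ("a","bc")
    .update(firstUserText)
    .digest("hex")
    .slice(0, 16);
  return `cache_${digest}`;
}

// A host-provided key always wins; only derive one when absent.
function ensurePromptCacheKey(
  body: { prompt_cache_key?: string; model: string },
  firstUserText: string,
): string {
  if (body.prompt_cache_key) return body.prompt_cache_key;
  body.prompt_cache_key = deriveFallbackCacheKey(body.model, firstUserText);
  return body.prompt_cache_key;
}
```

Truncating the digest keeps keys short while a 64-bit prefix still makes collisions between distinct conversations unlikely in practice.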
lib/request/request-transformer.ts (2)
16-23: Re-export surface looks good; keep this as the stable entrypoint

Re-exporting input-filter and model-config helpers here keeps `request-transformer` as the main integration surface while letting their implementations live in dedicated modules. This is a reasonable way to preserve import paths through the refactor; just ensure any external docs reference this file as the canonical entrypoint rather than the new internal modules.
74-81: Return type integration is complete and correct; no action needed

The API change from `Promise<RequestBody>` to `Promise<TransformResult>` has been properly integrated throughout the codebase:

- Function implementation (lines 74–133) correctly returns `{ body, compactionDecision }`.
- In-place mutation semantics preserved: the body is still mutated during transformation (lines 97–132), then returned within the result object.
- Production code updated: `fetch-helpers.ts` (lines 189, 207, 210) correctly accesses `transformResult.body` and its properties.
- Tests already adapted: both `request-transformer.test.ts` and `request-transformer-tools-normalization.test.ts` extract `.body` from the result, confirming the new contract is in place.

All call sites are consistent with the new return shape. The function maintains its core semantics while cleanly separating the transformed body from the compaction decision.
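The new contract can be illustrated with a toy transformer. Everything here except the `{ body, compactionDecision }` field names (which come from the review text) is a hypothetical simplification, not the plugin's actual signature.

```typescript
// Toy version of the new return shape: the transformer hands back the
// (still mutated-in-place) body plus an optional compaction decision,
// instead of the bare body. The decision payload here is made up.
interface TransformResult<B> {
  body: B;
  compactionDecision?: { triggered: boolean; reason: string };
}

function transform<B extends { input: string[] }>(body: B, compact: boolean): TransformResult<B> {
  if (compact) {
    body.input = ["<compaction prompt>"]; // body is mutated, then wrapped
    return { body, compactionDecision: { triggered: true, reason: "auto" } };
  }
  return { body }; // no decision when compaction did not fire
}

// Call sites destructure the result rather than using the return directly:
// const { body: transformed, compactionDecision } = transform(req, shouldCompact);
```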
…en into max-width chunks before wrapping, ensuring no line exceeds maxWidth even with very long words. Tests not run (not requested).
Actionable comments posted: 3
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (4)

- `.github/workflows/pr-auto-base.yml` is excluded by none and included by none
- `.github/workflows/staging-release-prep.yml` is excluded by none and included by none
- `spec/merge-conflict-resolution.md` is excluded by none and included by none
- `spec/release-automerge-and-tags.md` is excluded by none and included by none
📒 Files selected for processing (2)

- `lib/request/fetch-helpers.ts` (6 hunks)
- `test/logger.test.ts` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)

test/logger.test.ts (1)
- lib/logger.ts (3): `configureLogger` (62-83), `logWarn` (112-114), `flushRollingLogsForTest` (120-125)

lib/request/fetch-helpers.ts (3)
- lib/types.ts (3): `PluginConfig` (6-35), `RequestBody` (147-169), `SessionContext` (199-205)
- lib/session/session-manager.ts (1): `SessionManager` (174-522)
- lib/request/request-transformer.ts (1): `transformRequestBody` (74-134)
🔇 Additional comments (2)

test/logger.test.ts (2)

1-1: No blockers: type-only `OpencodeClient` import fits usage

Using a type-only import here is appropriate: it gives you type safety for the mocked `client` without impacting runtime or test isolation.

132-157: Toast routing behavior is well covered

This test cleanly exercises the "TUI present" path: it verifies that `logWarn` prefers `tui.showToast` and suppresses both `app.log` and `console.warn`. The setup matches `configureLogger`'s contract and looks stable.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)

- `lib/request/compaction-helpers.ts` (1 hunks)
- `lib/request/fetch-helpers.ts` (6 hunks)
- `lib/request/model-config.ts` (1 hunks)
- `lib/request/request-transformer.ts` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)

lib/request/model-config.ts (1)
- lib/types.ts (3): `UserConfig` (40-47), `ConfigOptions` (52-57), `ReasoningConfig` (62-65)

lib/request/fetch-helpers.ts (5)
- lib/types.ts (3): `PluginConfig` (6-35), `RequestBody` (147-169), `SessionContext` (199-205)
- lib/compaction/codex-compaction.ts (1): `detectCompactionCommand` (37-53)
- lib/logger.ts (1): `logRequest` (85-102)
- lib/constants.ts (1): `LOG_STAGES` (56-61)
- lib/request/request-transformer.ts (1): `transformRequestBody` (78-141)

lib/request/compaction-helpers.ts (6)
- lib/utils/clone.ts (1): `cloneInputItems` (31-42)
- lib/compaction/compaction-executor.ts (1): `CompactionDecision` (5-15)
- lib/utils/input-item-utils.ts (1): `countConversationTurns` (116-118)
- lib/compaction/codex-compaction.ts (4): `approximateTokenCount` (26-35), `serializeConversation` (55-85), `buildCompactionPromptItems` (87-99), `collectSystemMessages` (101-106)
- lib/types.ts (1): `RequestBody` (147-169)
- lib/request/input-filters.ts (1): `filterInput` (16-46)

lib/request/request-transformer.ts (3)
- lib/request/compaction-helpers.ts (2): `CompactionOptions` (20-25), `applyCompactionIfNeeded` (84-109)
- lib/types.ts (1): `RequestBody` (147-169)
- lib/compaction/compaction-executor.ts (1): `CompactionDecision` (5-15)
🔇 Additional comments (26)
lib/request/fetch-helpers.ts (10)
102-108: LGTM: Compaction settings helper is clean.The centralization of compaction settings derivation is straightforward and provides sensible defaults.
110-120: LGTM: Prompt cache key logic is sound. The function correctly prioritizes host-provided keys and safely applies session cache keys when appropriate.
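The priority order described here can be sketched as follows. Names and the body shape are assumptions for illustration, not the actual `fetch-helpers.ts` code:

```typescript
// Hypothetical sketch of the priority: a host-provided prompt_cache_key
// always wins, and a session-derived key applies only when the host set none.
interface CacheKeyedBody {
  prompt_cache_key?: string;
}

function applyPromptCacheKeySketch(
  body: CacheKeyedBody,
  sessionKey?: string,
): CacheKeyedBody {
  if (body.prompt_cache_key) return body; // host key takes precedence
  if (sessionKey) return { ...body, prompt_cache_key: sessionKey };
  return body; // nothing to apply
}
```

The early return on a host key is what makes the session key a fallback rather than an override.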
122-131: LGTM: Compaction history application is properly guarded. The early returns for disabled settings and manual commands are appropriate.
165-176: Past review concern addressed: duplicate logic removed. The refactoring successfully eliminates the redundant prompt cache key checks and compaction history applications. The flow now uses `buildCompactionSettings`, `applyPromptCacheKey`, and `applyCompactionHistory` as single sources of truth.
271-277: LGTM: Safe JSON parser is straightforward. The try-catch wrapper provides the expected fallback behavior for error parsing.
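The try-catch wrapper pattern amounts to a few lines; this sketch assumes the helper's exact name and signature, which are not confirmed by the review:

```typescript
// Malformed bodies yield undefined instead of throwing (helper name assumed).
function safeParseJsonSketch(text: string): unknown {
  try {
    return JSON.parse(text);
  } catch {
    return undefined;
  }
}
```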
279-297: LGTM: Rate limit parsing is robust. The header parsing safely extracts primary and secondary bucket metadata and correctly returns undefined when no rate limit headers are present.
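The "undefined when absent" contract can be illustrated with a single-bucket sketch. The header name and bucket shape here are assumptions, not the actual Codex headers:

```typescript
// Sketch: return undefined when no rate-limit headers are present at all,
// and a parsed bucket otherwise (header name and shape assumed).
function parseRateLimitSketch(
  headers: Record<string, string>,
): { remaining: number } | undefined {
  const raw = headers["x-ratelimit-remaining"];
  if (raw === undefined) return undefined; // no rate-limit headers at all
  const remaining = Number(raw);
  return Number.isFinite(remaining) ? { remaining } : undefined;
}
```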
300-302: LGTM: Usage limit detection is appropriate. The regex pattern correctly identifies the expected usage and rate limit error codes.
304-324: LGTM: Friendly message builder is well-crafted. The function gracefully handles various reset timestamp formats and produces clear, actionable guidance for users.
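Tolerant timestamp handling of this kind typically normalizes each accepted format and degrades gracefully on garbage. A sketch, where the accepted formats (epoch seconds or an ISO-8601 string) are an assumption about the real builder:

```typescript
// Sketch: normalize a reset timestamp; anything unparseable degrades to a
// generic hint instead of failing the whole error message.
function formatResetSketch(reset: string | number | undefined): string {
  if (reset === undefined) return "later";
  const ms = typeof reset === "number" ? reset * 1000 : Date.parse(reset);
  if (Number.isNaN(ms)) return "later";
  return new Date(ms).toISOString();
}
```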
326-354: LGTM: Error enrichment is comprehensive. The function properly parses errors, extracts rate limits, identifies usage limit errors, and builds friendly messages. The check `if (!parsed)` treats scalar JSON (e.g., `0`, `false`) as non-JSON, which is acceptable for typical Codex error responses that should be objects.
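The scalar-JSON caveat comes down to truthiness: `"0"` and `"false"` are valid JSON that parse to falsy values, so a plain truthiness check discards them.

```typescript
// JSON.parse succeeds on these inputs, but the results are falsy,
// so an `if (!parsed)` guard treats them the same as a parse failure.
const parsedZero = JSON.parse("0");      // 0
const parsedFalse = JSON.parse("false"); // false
const treatedAsNonJson = !parsedZero && !parsedFalse;
```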
362-382: LGTM: Error response handling correctly applies enrichment. The function appropriately uses `enrichErrorBody`, logs the enriched payload, and sets the content-type header only when JSON enrichment occurred.
lib/request/compaction-helpers.ts (5)
14-18: LGTM: `CompactionSettings` interface is clear. The interface appropriately defines compaction feature toggles and thresholds.
20-25: LGTM: `CompactionOptions` interface is comprehensive. The interface correctly bundles compaction settings, command context, original input, and the `preserveIds` flag.
27-36: LGTM: Last user message removal is correctly implemented. The function safely clones the input and removes the last user message from the conversation history.
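The clone-then-trim step might look like the sketch below. The item shape and function name are hypothetical; the real code clones via `cloneInputItems` from `lib/utils/clone.ts`:

```typescript
// Sketch of "drop the last user message": clone first so the caller's
// array is never mutated, then remove only the most recent user turn.
type InputItem = { role: "user" | "assistant" | "system"; content: string };

function removeLastUserMessageSketch(input: InputItem[]): InputItem[] {
  const clone = input.map((item) => ({ ...item })); // shallow clone
  for (let i = clone.length - 1; i >= 0; i--) {
    if (clone[i].role === "user") {
      clone.splice(i, 1);
      break;
    }
  }
  return clone;
}
```

Iterating from the end guarantees earlier user turns in the history survive untouched.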
38-82: LGTM: Compaction prompt builder handles both trigger modes correctly. The function appropriately detects command and auto-triggered compaction, serializes the conversation, and produces the expected prompt items and decision metadata.
84-109: Past review concern addressed: ID filtering now applied to compaction output. Line 103 correctly applies `filterInput(compactionBuild.items, { preserveIds })` to the compaction output before replacing `body.input`, ensuring that the `preserveIds` contract is respected in the compaction path. Tool-related fields are appropriately stripped.
lib/request/model-config.ts (7)
3-43: Past review concern addressed: Unknown models are now preserved. Line 42 correctly returns `sanitized` instead of forcing all non-matching models to the fallback. This preserves valid caller-selected models (like `gpt-4o-mini`) that don't match the GPT-5 heuristics.
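A before/after sketch of the behavior change, with the heuristic, fallback constant, and function names invented for illustration:

```typescript
// Old path: any model failing the GPT-5 heuristic was coerced to the
// fallback, clobbering valid caller-selected models like gpt-4o-mini.
const FALLBACK_MODEL = "gpt-5"; // hypothetical fallback constant

function resolveModelOld(model: string): string {
  const sanitized = model.trim();
  return /^gpt-5/.test(sanitized) ? sanitized : FALLBACK_MODEL;
}

// Fixed path: unknown-but-valid models pass through unchanged.
function resolveModelFixed(model: string): string {
  return model.trim();
}
```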
45-53: LGTM: Model config merging is straightforward. The function correctly merges global and model-specific configurations with appropriate precedence.
55-96: LGTM: Model classification logic is thorough. The classification function comprehensively detects GPT-5.1, Codex variants, and lightweight models through multiple naming patterns.
98-109: LGTM: Default effort assignments are sensible. The function appropriately assigns reasoning effort defaults based on model characteristics and capabilities.
111-119: LGTM: Effort request validation is appropriate. The function correctly downgrades "xhigh" to "high" for models that don't support the maximum effort level.
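The downgrade rule is a one-liner in spirit; this sketch assumes the effort union type and a `supportsXhigh` capability flag, neither confirmed by the review:

```typescript
// Sketch: "xhigh" falls back to "high" on models without maximum-effort
// support; all other values pass through unchanged.
type ReasoningEffort = "low" | "medium" | "high" | "xhigh";

function validateEffortSketch(
  effort: ReasoningEffort,
  supportsXhigh: boolean,
): ReasoningEffort {
  return effort === "xhigh" && !supportsXhigh ? "high" : effort;
}
```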
121-155: LGTM: Effort normalization handles model-specific constraints. The function appropriately adjusts reasoning effort to align with each model family's capabilities and supported levels.
157-170: LGTM: Reasoning config derivation pipeline is well-structured. The function correctly orchestrates model classification, effort defaulting, request validation, and normalization to produce the final reasoning configuration.
lib/request/request-transformer.ts (4)
25-32: Past review concern addressed: `TransformRequestOptions` is now documented. The JSDoc on line 27 clarifies that `preserveIds` is enforced after compaction builds, directly addressing the past review's request to document the semantics.
39-76: LGTM: Input transformation correctly filters IDs and applies mode-specific transforms. The function appropriately guards against skipped transforms, applies ID filtering with the `preserveIds` flag, logs defensively, and correctly branches between CODEX mode and tool remap flows.
78-141: Past review concern fully addressed: Compaction now respects `preserveIds`. Line 92 correctly spreads `preserveIds` into the compaction options, ensuring that `filterInput` is applied to compaction output in `applyCompactionIfNeeded` (compaction-helpers.ts line 103). Non-compaction flows are filtered in `transformInputForCodex` (line 56). Both paths now honor the `preserveIds` contract.
1-23: LGTM: Module imports and re-exports are well-organized. The import structure correctly brings in modular helpers, and the re-exports appropriately expose public APIs from input-filters and model-config.
Actionable comments posted: 3
♻️ Duplicate comments (1)
lib/prompts/codex.ts (1)
66-74: Empty string fallback could cache invalid instructions. If `safeReadFile` returns null (file doesn't exist or read error), line 71 returns an empty string that gets cached in the session. Callers expect valid instructions, not empty content. Although this function appears to be called only when `cacheFileExists` is true, defensive coding suggests either:
- Asserting the file exists, or
- Throwing an error if content is empty

Consider applying this diff:

```diff
 function readCachedInstructions(
   cacheFilePath: string,
   etag?: string | undefined,
   tag?: string | undefined,
 ): string {
-  const fileContent = safeReadFile(cacheFilePath) || "";
+  const fileContent = safeReadFile(cacheFilePath);
+  if (!fileContent) {
+    throw new Error(`Cache file exists but could not be read: ${cacheFilePath}`);
+  }
   cacheSessionEntry(fileContent, etag, tag);
   return fileContent;
 }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
- `lib/prompts/codex.ts` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
lib/prompts/codex.ts (5)
- lib/types.ts (1): `CacheMetadata` (223-228)
- lib/utils/file-system-utils.ts (3): `safeReadFile` (56-62), `safeWriteFile` (42-48), `fileExistsAndNotEmpty` (69-76)
- lib/cache/session-cache.ts (2): `getCodexCacheKey` (83-85), `codexInstructionsCache` (74-74)
- lib/utils/cache-config.ts (1): `CACHE_TTL_MS` (11-11)
- lib/logger.ts (2): `logError` (116-118), `logWarn` (112-114)
🔇 Additional comments (3)
lib/prompts/codex.ts (3)
41-50: LGTM! Defensive metadata parsing. The error handling is appropriate, returning null on read or parse failures.
52-64: LGTM! Clean helper utilities. Both `loadSessionFromMetadata` and `cacheIsFresh` have clear responsibilities and correct logic.
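A freshness helper of this kind is usually just an age comparison against the TTL. A minimal sketch in the spirit of `cacheIsFresh`, assuming the metadata records a `fetchedAt` epoch-ms timestamp; the TTL value here is invented for illustration (the real constant is `CACHE_TTL_MS` in `lib/utils/cache-config.ts`):

```typescript
// Sketch: an entry is fresh while its age is under the TTL bound.
const CACHE_TTL_MS_SKETCH = 24 * 60 * 60 * 1000; // value assumed

function cacheIsFreshSketch(fetchedAt: number, now: number = Date.now()): boolean {
  return now - fetchedAt < CACHE_TTL_MS_SKETCH;
}
```

Passing `now` explicitly keeps the helper deterministic and easy to test.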
199-207: Previous review concern properly addressed. The fallback logic now correctly prefers the existing disk cache over the bundled fallback when GitHub API lookup fails, improving offline resilience.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary
Testing