Conversation
* docs: reference @openhax/codex in test README
* Delete spec/issue-11-docs-package.md
- Create shared clone utility (lib/utils/clone.ts) to eliminate 3+ duplicate implementations
- Create InputItemUtils (lib/utils/input-item-utils.ts) for centralized text extraction
- Centralize magic numbers in constants with SESSION_CONFIG, CONVERSATION_CONFIG, PERFORMANCE_CONFIG
- Add ESLint cognitive complexity rules (max: 15) to prevent future issues
- Refactor large functions to use shared utilities, reducing complexity
- Update all modules to use centralized utilities and constants
- Remove dead code and unused imports
- All 123 tests pass, no regressions introduced

Code quality improved from B+ to A- with better maintainability.
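The shared clone utility itself is not shown in this log; as a hedged sketch of what such a helper often looks like (the function name `deepClone` and the fallback strategy are assumptions, not the actual lib/utils/clone.ts contents):

```typescript
// Hypothetical sketch of a shared clone utility like lib/utils/clone.ts.
// Prefers the native structuredClone when the runtime provides it, and
// falls back to JSON round-tripping for plain data on older runtimes.
export function deepClone<T>(value: T): T {
  if (typeof structuredClone === "function") {
    return structuredClone(value);
  }
  // JSON fallback: safe for plain objects/arrays, drops functions and undefined.
  return JSON.parse(JSON.stringify(value)) as T;
}
```

Centralizing this in one module is what lets the other refactors below replace their ad-hoc copies with a single import.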
… edit workflow files
…paths

- Update stale test counts to reflect actual numbers:
  * auth.test.ts: 16 → 27 tests
  * config.test.ts: 13 → 16 tests
  * request-transformer.test.ts: 30 → 123 tests
  * logger.test.ts: 5 → 7 tests
  * response-handler.test.ts: unchanged at 10 tests
- Fix broken configuration file paths:
  * config/minimal-opencode.json (was config/minimal-opencode.json)
  * config/full-opencode.json (was config/full-opencode.json)

Both configuration files exist in the config/ directory at repository root.
Device/stealth
- Update overview to reflect new gpt-5.1-codex-max model as default
- Add note about xhigh reasoning effort exclusivity to gpt-5.1-codex-max
- Document expanded model lineup matching Codex CLI

- Document new Codex Max support with xhigh reasoning
- Note configuration changes and sample updates
- Record automatic reasoning effort downgrade fix for compatibility

- Add gpt-5.1-codex-max configuration with xhigh reasoning support
- Update model count from 20 to 21 variants
- Expand model comparison table with Codex Max as flagship default
- Add note about xhigh reasoning exclusivity and auto-downgrade behavior
- Add flagship Codex Max model with 400k context and 128k output limits
- Configure with medium reasoning effort as default
- Include encrypted_content for stateless operation
- Set store: false for ChatGPT backend compatibility
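As a rough illustration of the settings this commit describes, a model entry along these lines would combine them; the exact key names and nesting are assumptions here, since the real opencode.json schema and the actual committed config are not shown in this log:

```json
{
  "gpt-5.1-codex-max": {
    "options": {
      "reasoningEffort": "medium",
      "include": ["reasoning.encrypted_content"],
      "store": false
    },
    "limit": {
      "context": 400000,
      "output": 128000
    }
  }
}
```

The `store: false` plus encrypted reasoning content pairing is what makes stateless operation against the ChatGPT backend possible, since no server-side conversation state is retained between requests.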
- Change default model from gpt-5.1-codex to gpt-5.1-codex-max
- Align minimal config with new flagship Codex Max model
- Provide users with best-in-class default experience

- Add gpt-5.1-codex-max example configuration
- Document xhigh reasoning effort exclusivity and auto-clamping
- Remove outdated duplicate key example section
- Clean up reasoning effort notes with new xhigh behavior

- Document new per-request JSON logging and rolling log files
- Note environment variables for enabling live console output
- Help developers debug with comprehensive logging capabilities
- Add rolling log file under ~/.opencode/logs/codex-plugin/
- Write structured JSON entries with timestamps for all log levels
- Maintain per-request stage files for detailed debugging
- Improve error handling and log forwarding to OpenCode app
- Separate console logging controls from file logging
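The logger changes above can be sketched roughly as follows; this is a hypothetical simplification (the function name `writeLogEntry`, the one-file-per-day rolling scheme, and the entry fields are assumptions), not the real lib/logger.ts:

```typescript
import { appendFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Hypothetical sketch of a rolling JSON file logger like the one described
// above; the actual implementation in lib/logger.ts is not shown in this log.
const DEFAULT_LOG_DIR = join(homedir(), ".opencode", "logs", "codex-plugin");

export function writeLogEntry(
  level: string,
  message: string,
  dir: string = DEFAULT_LOG_DIR,
): string {
  mkdirSync(dir, { recursive: true });
  // One file per day is a simple "rolling" scheme: old days stop growing.
  const file = join(dir, `${new Date().toISOString().slice(0, 10)}.log`);
  const entry = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    message,
  });
  appendFileSync(file, entry + "\n");
  return entry;
}
```

Keeping file logging in its own path like this is what allows the console controls mentioned above to be toggled independently via environment variables.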
- Add model normalization for all codex-max variants
- Implement xhigh reasoning effort with auto-downgrade for non-max models
- Add Codex Max specific reasoning effort validation and normalization
- Ensure compatibility with existing model configurations
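The auto-downgrade rule above can be sketched as a small normalization step; the names (`normalizeReasoningEffort`, `isCodexMaxModel`) and the downgrade target of `high` are assumptions for illustration, not the transformer's actual API:

```typescript
// Hypothetical sketch of the xhigh auto-downgrade described above.
type ReasoningEffort = "minimal" | "low" | "medium" | "high" | "xhigh";

function isCodexMaxModel(model: string): boolean {
  // Covers gpt-5.1-codex-max and variants (e.g. date-suffixed ids).
  return model.includes("codex-max");
}

export function normalizeReasoningEffort(
  model: string,
  effort: ReasoningEffort,
): ReasoningEffort {
  // xhigh is exclusive to Codex Max; downgrade it elsewhere so existing
  // model configurations keep working without edits.
  if (effort === "xhigh" && !isCodexMaxModel(model)) {
    return "high";
  }
  return effort;
}
```

Normalizing at request time, rather than rejecting the config, is what makes a shared config file safe to use across max and non-max models.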
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Address CodeRabbit comments
…ariable' Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
lib/request/input-filters.ts (1)
246-275: Tool‑remap deduplication cleanly addresses the previous duplicate‑injection concern
`addToolRemapMessage` now:

- Precomputes a stable `TOOL_REMAP_MESSAGE_HASH`.
- Scans existing developer messages with `extractTextFromItem` and `generateContentHash` to detect an already-present remap prompt.
- Only prepends the `toolRemapMessage` when no such hash match exists.

This resolves the earlier risk of stacking identical TOOL_REMAP_MESSAGE prompts when the transformer runs multiple times for the same conversation, without changing the function's public surface.
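The dedup pattern the review describes can be sketched as follows; the message shapes and helper signatures here are assumptions reconstructed from the review, not the actual lib/request/input-filters.ts code:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of hash-based remap-message dedup as described above.
interface InputItem {
  role?: string;
  content?: Array<{ type: string; text?: string }>;
}

function generateContentHash(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function extractTextFromItem(item: InputItem): string {
  return (item.content ?? []).map((part) => part.text ?? "").join("");
}

export function addToolRemapMessage(
  input: InputItem[],
  remapText: string,
): InputItem[] {
  const remapHash = generateContentHash(remapText);
  // Skip injection if any developer message already hashes to the remap prompt.
  const alreadyPresent = input.some(
    (item) =>
      item.role === "developer" &&
      generateContentHash(extractTextFromItem(item)) === remapHash,
  );
  if (alreadyPresent) return input;
  return [
    { role: "developer", content: [{ type: "input_text", text: remapText }] },
    ...input,
  ];
}
```

Hashing makes the presence check cheap and content-exact, so re-running the transformer over the same conversation is idempotent.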
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (6)
- `.github/workflows/dev-release-prep.yml` is excluded by none and included by none
- `.github/workflows/review-response.yml` is excluded by none and included by none
- `package-lock.json` is excluded by `!**/package-lock.json` and included by none
- `scripts/review-response-context.mjs` is excluded by none and included by none
- `spec/review-response-token.md` is excluded by none and included by none
- `spec/review-v0.3.5-fixes.md` is excluded by none and included by none
📒 Files selected for processing (4)
- `lib/prompts/codex.ts` (2 hunks)
- `lib/request/compaction-helpers.ts` (1 hunks)
- `lib/request/input-filters.ts` (1 hunks)
- `test/compaction-helpers.test.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
test/compaction-helpers.test.ts (2)
  lib/types.ts (1)
    `RequestBody` (147-169)
  lib/request/compaction-helpers.ts (1)
    `applyCompactionIfNeeded` (82-107)

lib/request/compaction-helpers.ts (5)
  lib/compaction/compaction-executor.ts (1)
    `CompactionDecision` (5-15)
  lib/utils/input-item-utils.ts (1)
    `countConversationTurns` (116-118)
  lib/compaction/codex-compaction.ts (4)
    `approximateTokenCount` (26-35)
    `serializeConversation` (55-85)
    `buildCompactionPromptItems` (87-99)
    `collectSystemMessages` (101-106)
  lib/types.ts (1)
    `RequestBody` (147-169)
  lib/request/input-filters.ts (1)
    `filterInput` (18-48)

lib/prompts/codex.ts (5)
  lib/types.ts (1)
    `CacheMetadata` (223-228)
  lib/utils/file-system-utils.ts (3)
    `safeReadFile` (56-62)
    `safeWriteFile` (42-48)
    `fileExistsAndNotEmpty` (69-76)
  lib/cache/session-cache.ts (2)
    `getCodexCacheKey` (83-85)
    `codexInstructionsCache` (74-74)
  lib/utils/cache-config.ts (1)
    `CACHE_TTL_MS` (11-11)
  lib/logger.ts (2)
    `logWarn` (112-114)
    `logError` (116-118)

lib/request/input-filters.ts (6)
  lib/cache/prompt-fingerprinting.ts (5)
    `generateContentHash` (15-17)
    `generateInputHash` (124-143)
    `hasBridgePromptInConversation` (27-46)
    `cacheBridgeDecision` (109-117)
    `getCachedBridgeDecision` (88-101)
  lib/prompts/codex.ts (1)
    `TOOL_REMAP_MESSAGE` (272-334)
  lib/utils/input-item-utils.ts (1)
    `extractTextFromItem` (16-27)
  lib/prompts/opencode-codex.ts (1)
    `getOpenCodeCodexPrompt` (190-261)
  lib/types.ts (1)
    `SessionContext` (199-205)
  lib/logger.ts (1)
    `logDebug` (104-106)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Mutation Tests
🔇 Additional comments (5)
lib/prompts/codex.ts (1)
66-79: readCachedInstructions now correctly avoids caching missing/empty content

`readCachedInstructions` uses `safeReadFile` directly and treats falsy content as a cache miss, logging a warning and returning `null` instead of caching an empty string. This resolves the earlier risk of silently caching broken/empty cache files and keeps the session cache aligned with real, readable data only. The rest of the call sites (fresh TTL, 304, and error fallbacks) all correctly branch on the `null` return.

lib/request/compaction-helpers.ts (1)
29-80: Compaction flow is logically sound and preserves original input

The compaction helpers look consistent and safe:

- `removeLastUserMessage` correctly drops only the last `role === "user"` item while preserving later assistant/tool items and returns a new array only when truncation occurs.
- `maybeBuildCompactionPrompt` gates on `settings.enabled`, uses explicit `commandText` as a hard trigger, and otherwise falls back to token+turn heuristics (`autoLimitTokens` + `autoMinMessages`), with a clear `CompactionDecision` payload including preserved system messages and serialization.
- `applyCompactionIfNeeded` cleanly no-ops when disabled or when no trigger fires, and when compaction applies it:
  - Replaces `body.input` with the compaction prompt (optionally preserving IDs via `filterInput`).
  - Strips `tools`, `tool_choice`, and `parallel_tool_calls` to avoid inconsistent tool state on compacted requests.
  - Returns the structured decision for downstream use.

No functional or correctness issues stand out here; behavior matches the described design.

Also applies to: 82-107
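The first helper the review describes can be sketched like this; the item shape is an assumption and the real removeLastUserMessage in lib/request/compaction-helpers.ts is not shown here:

```typescript
// Hypothetical sketch of removeLastUserMessage as characterized in the review:
// drop only the LAST role === "user" item, keep later assistant/tool items,
// and return a new array only when something was actually removed.
interface ConversationItem {
  role?: string;
  id?: number;
  [key: string]: unknown;
}

export function removeLastUserMessage(
  items: ConversationItem[],
): ConversationItem[] {
  for (let i = items.length - 1; i >= 0; i--) {
    if (items[i].role === "user") {
      // Copy-on-write: a fresh array only on the truncation path.
      return [...items.slice(0, i), ...items.slice(i + 1)];
    }
  }
  // No user message found: return the original array unchanged.
  return items;
}
```

Returning the same reference on the no-op path lets callers cheaply detect whether truncation happened.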
test/compaction-helpers.test.ts (1)
32-58: Tests exercise the key compaction behaviors and body mutations

The second test usefully validates the "no user message" path: compaction still triggers via `commandText`, `serialization.totalTurns` is 1 as expected from a single assistant turn, and `body.input` is mutated while tool fields are stripped. Combined with the first test, this provides good coverage of the new helpers' primary behaviors.

lib/request/input-filters.ts (2)
18-48: filterInput behavior remains consistent and safe for downstream users

`filterInput` continues to:

- Drop `item_reference` entries.
- Optionally strip `id` and `metadata` while preserving other fields.
- Return the original `input` untouched for non-array values.

This is compatible with its new usages (compaction and bridge/remap flows) and does not introduce new edge-case risks.
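The three behaviors listed above can be sketched as follows; this is a reconstruction from the review's description, with the parameter name `stripIds` assumed, not the actual lib/request/input-filters.ts code:

```typescript
// Hypothetical sketch of filterInput as described in the review.
interface InputItem {
  type?: string;
  id?: string;
  metadata?: unknown;
  [key: string]: unknown;
}

export function filterInput(input: unknown, stripIds = true): unknown {
  // Non-array values pass through untouched.
  if (!Array.isArray(input)) return input;
  return input
    // Drop item_reference entries entirely.
    .filter((item: InputItem) => item?.type !== "item_reference")
    // Optionally strip id/metadata while preserving all other fields.
    .map((item: InputItem) => {
      if (!stripIds) return item;
      const { id, metadata, ...rest } = item;
      return rest;
    });
}
```

Because the function never mutates its input, callers can safely reuse the original array after filtering.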
72-166: OpenCode system/compaction prompt filtering and sanitization look correct

The `filterOpenCodeSystemPrompts` pipeline is coherent:

- It uses `getOpenCodeCodexPrompt` (when available) plus role/content heuristics in `isOpenCodeSystemPrompt` to drop the heavy OpenCode system prompt while leaving user messages intact.
- `isOpenCodeCompactionPrompt` and `sanitizeOpenCodeCompactionPrompt` specifically target OpenCode auto-compaction instructions:
  - Removing lines that mention summary paths/files and `.opencode` locations.
  - Normalizing whitespace and re-adding an "Auto-compaction summary" header only when the original mentioned auto-compaction but the sanitized text no longer does.
- Non-matching system/developer messages flow through unchanged, and user messages always pass through.

Overall this should effectively strip environment-specific summary instructions without losing the higher-level semantics of "there is a summary," and it's idempotent enough that re-filtering sanitized prompts won't keep re-writing them.
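The sanitization steps above can be sketched roughly as one pure function; the function name, the specific regexes, and the header string are assumptions for illustration, not the real sanitizeOpenCodeCompactionPrompt:

```typescript
// Hypothetical sketch of the compaction-prompt sanitization the review describes.
export function sanitizeCompactionText(text: string): string {
  const mentionsAutoCompaction = /auto[- ]?compaction/i.test(text);
  const kept = text
    .split("\n")
    // Drop lines tied to environment-specific summary files or .opencode paths.
    .filter((line) => !/\.opencode|summary (path|file)/i.test(line))
    // Normalize whitespace within each surviving line.
    .map((line) => line.replace(/\s+/g, " ").trim())
    .filter((line) => line.length > 0);
  let result = kept.join("\n");
  // Re-add a generic header only when the original mentioned auto-compaction
  // but the sanitized text no longer does, preserving the "there is a summary"
  // semantics without the environment-specific instructions.
  if (mentionsAutoCompaction && !/auto[- ]?compaction/i.test(result)) {
    result = `Auto-compaction summary\n${result}`;
  }
  return result;
}
```

Structured this way the function is idempotent: running it again over its own output drops nothing further and never re-adds the header, which matches the re-filtering property the review calls out.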
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Align session fork ids with prompt cache hints
Use metadata flag for OpenCode compaction prompts
Handle Codex cache metadata on 304/fallback
Avoid duplicate bridge on session reinject
No description provided.