Release #57 (Merged)

riatzukiza merged 167 commits into main from dev on Nov 21, 2025
Conversation

@riatzukiza (Collaborator)

No description provided.

* docs: reference @openhax/codex in test README

* Delete spec/issue-11-docs-package.md
- Create shared clone utility (lib/utils/clone.ts) to eliminate 3+ duplicate implementations
- Create InputItemUtils (lib/utils/input-item-utils.ts) for centralized text extraction
- Centralize magic numbers in constants with SESSION_CONFIG, CONVERSATION_CONFIG, PERFORMANCE_CONFIG
- Add ESLint cognitive complexity rules (max: 15) to prevent future issues
- Refactor large functions to use shared utilities, reducing complexity
- Update all modules to use centralized utilities and constants
- Remove dead code and unused imports
- All 123 tests pass, no regressions introduced

Code quality improved from B+ to A- with better maintainability.
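The shared clone utility mentioned above could look roughly like the following sketch. This is an assumption about lib/utils/clone.ts, not its actual contents; the idea is a single deep-clone helper replacing scattered JSON.parse(JSON.stringify(...)) copies.

```typescript
// Hypothetical sketch of a shared deep-clone helper along the lines of
// lib/utils/clone.ts; the real implementation may differ.
function deepClone<T>(value: T): T {
  const sc = (globalThis as { structuredClone?: (v: unknown) => unknown })
    .structuredClone;
  // structuredClone (Node 17+) handles Dates, Maps, and cycles;
  // the JSON round-trip is a fallback for plain data.
  if (typeof sc === "function") return sc(value) as T;
  return JSON.parse(JSON.stringify(value)) as T;
}
```

Centralizing this in one module is what lets the refactor delete the three or more duplicate implementations the commit message refers to.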
…paths

- Update stale test counts to reflect actual numbers:
  * auth.test.ts: 16 → 27 tests
  * config.test.ts: 13 → 16 tests
  * request-transformer.test.ts: 30 → 123 tests
  * logger.test.ts: 5 → 7 tests
  * response-handler.test.ts: unchanged at 10 tests

- Fix broken configuration file paths:
  * config/minimal-opencode.json (was config/minimal-opencode.json)
  * config/full-opencode.json (was config/full-opencode.json)

Both configuration files exist in the config/ directory at repository root.
- Update overview to reflect new gpt-5.1-codex-max model as default
- Add note about xhigh reasoning effort exclusivity to gpt-5.1-codex-max
- Document expanded model lineup matching Codex CLI
- Document new Codex Max support with xhigh reasoning
- Note configuration changes and sample updates
- Record automatic reasoning effort downgrade fix for compatibility
- Add gpt-5.1-codex-max configuration with xhigh reasoning support
- Update model count from 20 to 21 variants
- Expand model comparison table with Codex Max as flagship default
- Add note about xhigh reasoning exclusivity and auto-downgrade behavior
- Add flagship Codex Max model with 400k context and 128k output limits
- Configure with medium reasoning effort as default
- Include encrypted_content for stateless operation
- Set store: false for ChatGPT backend compatibility
- Change default model from gpt-5.1-codex to gpt-5.1-codex-max
- Align minimal config with new flagship Codex Max model
- Provide users with best-in-class default experience
- Add gpt-5.1-codex-max example configuration
- Document xhigh reasoning effort exclusivity and auto-clamping
- Remove outdated duplicate key example section
- Clean up reasoning effort notes with new xhigh behavior
- Document new per-request JSON logging and rolling log files
- Note environment variables for enabling live console output
- Help developers debug with comprehensive logging capabilities
- Add rolling log file under ~/.opencode/logs/codex-plugin/
- Write structured JSON entries with timestamps for all log levels
- Maintain per-request stage files for detailed debugging
- Improve error handling and log forwarding to OpenCode app
- Separate console logging controls from file logging
- Add model normalization for all codex-max variants
- Implement xhigh reasoning effort with auto-downgrade for non-max models
- Add Codex Max specific reasoning effort validation and normalization
- Ensure compatibility with existing model configurations
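The xhigh auto-downgrade described in these commits can be sketched as a small normalization step. The model-name check and effort values here are assumptions based on the commit messages, not the plugin's actual code.

```typescript
// Hypothetical sketch of the downgrade rule: only gpt-5.1-codex-max
// variants accept "xhigh"; other models fall back to "high".
type ReasoningEffort = "low" | "medium" | "high" | "xhigh";

function normalizeReasoningEffort(
  model: string,
  effort: ReasoningEffort,
): ReasoningEffort {
  const supportsXhigh = model.includes("codex-max");
  if (effort === "xhigh" && !supportsXhigh) return "high";
  return effort;
}
```

This keeps existing configurations working unchanged: a non-max model configured with xhigh degrades gracefully instead of failing the request.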
opencode-agent bot and others added 13 commits November 20, 2025 22:42
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
…ariable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
…ariable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
@coderabbitai bot left a comment (Contributor)

Actionable comments posted: 3

♻️ Duplicate comments (1)
lib/request/input-filters.ts (1)

246-275: Tool‑remap deduplication cleanly addresses the previous duplicate‑injection concern

addToolRemapMessage now:

  • Precomputes a stable TOOL_REMAP_MESSAGE_HASH.
  • Scans existing developer messages with extractTextFromItem and generateContentHash to detect an already‑present remap prompt.
  • Only prepends the toolRemapMessage when no such hash match exists.

This resolves the earlier risk of stacking identical TOOL_REMAP_MESSAGE prompts when the transformer runs multiple times for the same conversation, without changing the function’s public surface.
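The hash-based dedup described above can be sketched as follows. The item shape, message text, and helper signatures are simplified assumptions; the real extractTextFromItem and generateContentHash live in lib/utils/input-item-utils.ts and lib/cache/prompt-fingerprinting.ts.

```typescript
import { createHash } from "node:crypto";

// Hypothetical simplified item shape; real items are richer.
interface InputItem {
  role: string;
  content: string;
}

const generateContentHash = (text: string): string =>
  createHash("sha256").update(text).digest("hex");

// Placeholder text; the real TOOL_REMAP_MESSAGE is in lib/prompts/codex.ts.
const TOOL_REMAP_MESSAGE = "Tool names were remapped; use the new names.";
const TOOL_REMAP_MESSAGE_HASH = generateContentHash(TOOL_REMAP_MESSAGE);

// Prepend the remap prompt only if no developer message already hashes to it.
function addToolRemapMessage(input: InputItem[]): InputItem[] {
  const alreadyPresent = input.some(
    (item) =>
      item.role === "developer" &&
      generateContentHash(item.content) === TOOL_REMAP_MESSAGE_HASH,
  );
  if (alreadyPresent) return input;
  return [{ role: "developer", content: TOOL_REMAP_MESSAGE }, ...input];
}
```

Because the hash is precomputed once, the per-request cost is one hash per existing developer message rather than a full string comparison against the remap prompt.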

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e16ae8 and 798a8be.

⛔ Files ignored due to path filters (6)
  • .github/workflows/dev-release-prep.yml is excluded by none and included by none
  • .github/workflows/review-response.yml is excluded by none and included by none
  • package-lock.json is excluded by !**/package-lock.json and included by none
  • scripts/review-response-context.mjs is excluded by none and included by none
  • spec/review-response-token.md is excluded by none and included by none
  • spec/review-v0.3.5-fixes.md is excluded by none and included by none
📒 Files selected for processing (4)
  • lib/prompts/codex.ts (2 hunks)
  • lib/request/compaction-helpers.ts (1 hunks)
  • lib/request/input-filters.ts (1 hunks)
  • test/compaction-helpers.test.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
test/compaction-helpers.test.ts (2)
lib/types.ts (1)
  • RequestBody (147-169)
lib/request/compaction-helpers.ts (1)
  • applyCompactionIfNeeded (82-107)
lib/request/compaction-helpers.ts (5)
lib/compaction/compaction-executor.ts (1)
  • CompactionDecision (5-15)
lib/utils/input-item-utils.ts (1)
  • countConversationTurns (116-118)
lib/compaction/codex-compaction.ts (4)
  • approximateTokenCount (26-35)
  • serializeConversation (55-85)
  • buildCompactionPromptItems (87-99)
  • collectSystemMessages (101-106)
lib/types.ts (1)
  • RequestBody (147-169)
lib/request/input-filters.ts (1)
  • filterInput (18-48)
lib/prompts/codex.ts (5)
lib/types.ts (1)
  • CacheMetadata (223-228)
lib/utils/file-system-utils.ts (3)
  • safeReadFile (56-62)
  • safeWriteFile (42-48)
  • fileExistsAndNotEmpty (69-76)
lib/cache/session-cache.ts (2)
  • getCodexCacheKey (83-85)
  • codexInstructionsCache (74-74)
lib/utils/cache-config.ts (1)
  • CACHE_TTL_MS (11-11)
lib/logger.ts (2)
  • logWarn (112-114)
  • logError (116-118)
lib/request/input-filters.ts (6)
lib/cache/prompt-fingerprinting.ts (5)
  • generateContentHash (15-17)
  • generateInputHash (124-143)
  • hasBridgePromptInConversation (27-46)
  • cacheBridgeDecision (109-117)
  • getCachedBridgeDecision (88-101)
lib/prompts/codex.ts (1)
  • TOOL_REMAP_MESSAGE (272-334)
lib/utils/input-item-utils.ts (1)
  • extractTextFromItem (16-27)
lib/prompts/opencode-codex.ts (1)
  • getOpenCodeCodexPrompt (190-261)
lib/types.ts (1)
  • SessionContext (199-205)
lib/logger.ts (1)
  • logDebug (104-106)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Mutation Tests
🔇 Additional comments (5)
lib/prompts/codex.ts (1)

66-79: readCachedInstructions now correctly avoids caching missing/empty content

readCachedInstructions uses safeReadFile directly and treats falsy content as a cache miss, logging a warning and returning null instead of caching an empty string. This resolves the earlier risk of silently caching broken/empty cache files and keeps the session cache aligned with real, readable data only. The rest of the call sites (fresh TTL, 304, and error fallbacks) all correctly branch on the null return.
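The falsy-as-miss behavior can be sketched like this. The safeReadFile and logWarn stand-ins below are assumptions modeled on lib/utils/file-system-utils.ts and lib/logger.ts, not the actual implementations.

```typescript
import { readFileSync } from "node:fs";

// Hypothetical stand-in for safeReadFile in lib/utils/file-system-utils.ts.
function safeReadFile(path: string): string | null {
  try {
    return readFileSync(path, "utf8");
  } catch {
    return null;
  }
}

// Hypothetical stand-in for logWarn in lib/logger.ts.
const logWarn = (msg: string): void => console.warn(msg);

// Treat missing or empty content as a cache miss instead of caching "".
function readCachedInstructions(path: string): string | null {
  const content = safeReadFile(path);
  if (!content) {
    logWarn(`codex instructions cache unreadable or empty: ${path}`);
    return null;
  }
  return content;
}
```

Returning null here is what lets the TTL, 304, and error-fallback call sites branch uniformly on "no usable cache" without ever persisting an empty string.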

lib/request/compaction-helpers.ts (1)

29-80: Compaction flow is logically sound and preserves original input

The compaction helpers look consistent and safe:

  • removeLastUserMessage correctly drops only the last role === "user" item while preserving later assistant/tool items and returns a new array only when truncation occurs.
  • maybeBuildCompactionPrompt gates on settings.enabled, uses explicit commandText as a hard trigger, and otherwise falls back to token+turn heuristics (autoLimitTokens + autoMinMessages), with a clear CompactionDecision payload including preserved system messages and serialization.
  • applyCompactionIfNeeded cleanly no‑ops when disabled or when no trigger fires, and when compaction applies it:
    • Replaces body.input with the compaction prompt (optionally preserving IDs via filterInput).
    • Strips tools, tool_choice, and parallel_tool_calls to avoid inconsistent tool state on compacted requests.
    • Returns the structured decision for downstream use.

No functional or correctness issues stand out here; behavior matches the described design.

Also applies to: 82-107
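The removeLastUserMessage behavior described in the first bullet can be sketched as follows; the item shape is a simplified assumption.

```typescript
// Hypothetical simplified item shape; real input items are richer.
interface InputItem {
  role: string;
  content: string;
}

// Drop only the last role === "user" item, preserving later
// assistant/tool items. Returns a new array only when truncation occurs.
function removeLastUserMessage(input: InputItem[]): InputItem[] {
  for (let i = input.length - 1; i >= 0; i--) {
    if (input[i].role === "user") {
      return [...input.slice(0, i), ...input.slice(i + 1)];
    }
  }
  return input; // no user message: hand back the original array untouched
}
```

Returning the original array on the no-op path gives callers a cheap identity check for "did anything change" without a deep comparison.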

test/compaction-helpers.test.ts (1)

32-58: Tests exercise the key compaction behaviors and body mutations

The second test usefully validates the “no user message” path: compaction still triggers via commandText, serialization.totalTurns is 1 as expected from a single assistant turn, and body.input is mutated while tool fields are stripped. Combined with the first test, this provides good coverage of the new helpers’ primary behaviors.

lib/request/input-filters.ts (2)

18-48: filterInput behavior remains consistent and safe for downstream users

filterInput continues to:

  • Drop item_reference entries.
  • Optionally strip id and metadata while preserving other fields.
  • Return the original input untouched for non-array values.

This is compatible with its new usages (compaction and bridge/remap flows) and does not introduce new edge‑case risks.
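The three behaviors listed above can be sketched as one small function; the item shape and the default for stripping IDs are assumptions, not the plugin's actual signature.

```typescript
// Hypothetical item shape; the real input type is broader.
interface InputItem {
  type?: string;
  id?: string;
  metadata?: Record<string, unknown>;
  [key: string]: unknown;
}

// Drop item_reference entries, optionally strip id/metadata,
// and pass non-array input through untouched.
function filterInput(input: unknown, stripIds = true): unknown {
  if (!Array.isArray(input)) return input;
  return input
    .filter((item: InputItem) => item?.type !== "item_reference")
    .map((item: InputItem) => {
      if (!stripIds) return item;
      const copy: InputItem = { ...item };
      delete copy.id;
      delete copy.metadata;
      return copy;
    });
}
```

Copying before deleting keeps the function non-mutating, which matters now that compaction and the bridge/remap flow both reuse it on the same request body.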


72-166: OpenCode system/compaction prompt filtering and sanitization look correct

The filterOpenCodeSystemPrompts pipeline is coherent:

  • It uses getOpenCodeCodexPrompt (when available) plus role/content heuristics in isOpenCodeSystemPrompt to drop the heavy OpenCode system prompt while leaving user messages intact.
  • isOpenCodeCompactionPrompt and sanitizeOpenCodeCompactionPrompt specifically target OpenCode auto‑compaction instructions:
    • Removing lines that mention summary paths/files and .opencode locations.
    • Normalizing whitespace and re‑adding an “Auto‑compaction summary” header only when the original mentioned auto‑compaction but the sanitized text no longer does.
  • Non‑matching system/developer messages flow through unchanged, and user messages always pass through.

Overall this should effectively strip environment‑specific summary instructions without losing the higher‑level semantics of “there is a summary,” and it’s idempotent enough that re‑filtering sanitized prompts won’t keep re‑writing them.
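The sanitization step described above can be sketched roughly as follows. The line-matching patterns and header text here are illustrative assumptions, not the plugin's actual regexes.

```typescript
// Hypothetical sketch of sanitizeOpenCodeCompactionPrompt: drop lines that
// reference environment-specific summary files, then restore an
// "Auto-compaction summary" header if the mention was lost entirely.
function sanitizeOpenCodeCompactionPrompt(text: string): string {
  const kept = text
    .split("\n")
    .filter((line) => !/\.opencode|summary (path|file)/i.test(line));
  const sanitized = kept.join("\n").replace(/\n{3,}/g, "\n\n").trim();
  if (/auto-?compaction/i.test(text) && !/auto-?compaction/i.test(sanitized)) {
    return `Auto-compaction summary\n\n${sanitized}`;
  }
  return sanitized;
}
```

Note the idempotence property the review calls out: running the function on already-sanitized text matches no removal patterns and the header condition stays false, so the output is stable.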

@riatzukiza riatzukiza enabled auto-merge November 20, 2025 23:43
@riatzukiza riatzukiza disabled auto-merge November 21, 2025 00:04
@riatzukiza riatzukiza merged commit 2304f4b into main Nov 21, 2025
18 of 23 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Dec 13, 2025