Add fork-aware prompt_cache_key derivation #19

Merged

riatzukiza merged 1 commit into device/stealth from issue-4 on Nov 17, 2025

Conversation

@riatzukiza
Collaborator

Summary

Testing

  • pnpm test request-transformer.test.ts
  • pnpm run typecheck
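
For context on the feature named in the title, here is a minimal sketch of what fork-aware prompt_cache_key derivation could look like. Every name below (SessionInfo, derivePromptCacheKey, the parentId field) is an illustrative assumption rather than the plugin's actual code; the tests listed above suggest the real logic lives in the request transformer.

```ts
import { createHash } from "node:crypto";

// Hypothetical shape: a session may record the session it was forked from.
interface SessionInfo {
  id: string;
  parentId?: string; // present when the session is a fork
}

/**
 * Derive a stable prompt_cache_key for a session.
 * Forked sessions reuse their root ancestor's key so the shared
 * conversation prefix can still hit the provider-side prompt cache.
 */
export function derivePromptCacheKey(
  session: SessionInfo,
  lookup: (id: string) => SessionInfo | undefined,
): string {
  // Walk up to the root of the fork chain.
  let current = session;
  while (current.parentId) {
    const parent = lookup(current.parentId);
    if (!parent) break;
    current = parent;
  }
  // Hash the root id so the key stays opaque and fixed-length.
  return createHash("sha256").update(current.id).digest("hex").slice(0, 32);
}
```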

@coderabbitai
Contributor

coderabbitai bot commented Nov 17, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

🗂️ Base branches to auto review (1)
  • main

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@riatzukiza riatzukiza merged commit f56e506 into device/stealth Nov 17, 2025
7 of 12 checks passed
@riatzukiza riatzukiza deleted the issue-4 branch November 17, 2025 05:23
riatzukiza added a commit that referenced this pull request Nov 20, 2025
* device/stealth: commit local changes in submodule

* Docs: fix test README package name (#18)

* docs: reference @openhax/codex in test README

* Delete spec/issue-11-docs-package.md

* Add fork-aware prompt_cache_key derivation (#19)

* Refactor: Eliminate code duplication and improve maintainability

- Create shared clone utility (lib/utils/clone.ts) to eliminate 3+ duplicate implementations
- Create InputItemUtils (lib/utils/input-item-utils.ts) for centralized text extraction
- Centralize magic numbers in constants with SESSION_CONFIG, CONVERSATION_CONFIG, PERFORMANCE_CONFIG
- Add ESLint cognitive complexity rules (max: 15) to prevent future issues
- Refactor large functions to use shared utilities, reducing complexity
- Update all modules to use centralized utilities and constants
- Remove dead code and unused imports
- All 123 tests pass, no regressions introduced

Code quality improved from B+ to A- with better maintainability.
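
As a rough illustration of the shared clone utility mentioned above, a structured-clone-with-fallback helper might look like the sketch below; the exact export in lib/utils/clone.ts may differ.

```ts
// lib/utils/clone.ts (illustrative sketch, not the actual implementation)

/**
 * Deep-clone a plain JSON-serializable value.
 * Prefers the native structuredClone when available and falls back
 * to JSON round-tripping for older runtimes.
 */
export function deepClone<T>(value: T): T {
  if (typeof structuredClone === "function") {
    return structuredClone(value);
  }
  return JSON.parse(JSON.stringify(value)) as T;
}
```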

* testing!

* Address review: plugin config errors, compaction gating, cache key fallback

* Style and metrics cleanup for auth, cache, and sessions

* linting and formatting

* Finalize PR 20 review: shared clone utils and tests

* Fix CI workflow YAML syntax and quoting

* update lint rules

* allow lint warnings without masking errors

* separate linting and formatting CI

* separated formatting workflow from CI YAML and gave it permissions to edit workflow files

* opencode can respond to all pr comments

* Fix test/README.md documentation: update test counts and config file paths

- Update stale test counts to reflect actual numbers:
  * auth.test.ts: 16 → 27 tests
  * config.test.ts: 13 → 16 tests
  * request-transformer.test.ts: 30 → 123 tests
  * logger.test.ts: 5 → 7 tests
  * response-handler.test.ts: unchanged at 10 tests

- Fix broken configuration file paths:
  * config/minimal-opencode.json (was config/minimal-opencode.json)
  * config/full-opencode.json (was config/full-opencode.json)

Both configuration files exist in the config/ directory at repository root.

* 0.1.0

* 0.2.0

* docs: update AGENTS.md for gpt-5.1-codex-max support

- Update overview to reflect new gpt-5.1-codex-max model as default
- Add note about xhigh reasoning effort exclusivity to gpt-5.1-codex-max
- Document expanded model lineup matching Codex CLI

* chore: add v3.3.0 changelog entry for gpt-5.1-codex-max

- Document new Codex Max support with xhigh reasoning
- Note configuration changes and sample updates
- Record automatic reasoning effort downgrade fix for compatibility

* docs: update README for gpt-5.1-codex-max integration

- Add gpt-5.1-codex-max configuration with xhigh reasoning support
- Update model count from 20 to 21 variants
- Expand model comparison table with Codex Max as flagship default
- Add note about xhigh reasoning exclusivity and auto-downgrade behavior

* config: add gpt-5.1-codex-max to full-opencode.json

- Add flagship Codex Max model with 400k context and 128k output limits
- Configure with medium reasoning effort as default
- Include encrypted_content for stateless operation
- Set store: false for ChatGPT backend compatibility

* config: update minimal-opencode.json default to gpt-5.1-codex-max

- Change default model from gpt-5.1-codex to gpt-5.1-codex-max
- Align minimal config with new flagship Codex Max model
- Provide users with best-in-class default experience

* docs: update CONFIG_FIELDS.md for gpt-5.1-codex-max

- Add gpt-5.1-codex-max example configuration
- Document xhigh reasoning effort exclusivity and auto-clamping
- Remove outdated duplicate key example section
- Clean up reasoning effort notes with new xhigh behavior

* docs: add persistent logging note to TESTING.md

- Document new per-request JSON logging and rolling log files
- Note environment variables for enabling live console output
- Help developers debug with comprehensive logging capabilities

* feat: implement persistent rolling logging in logger.ts

- Add rolling log file under ~/.opencode/logs/codex-plugin/
- Write structured JSON entries with timestamps for all log levels
- Maintain per-request stage files for detailed debugging
- Improve error handling and log forwarding to OpenCode app
- Separate console logging controls from file logging
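
A hedged sketch of the rolling JSON log write described above; the directory comes from the commit message, while the file name and field layout are assumptions.

```ts
import { appendFileSync, mkdirSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Assumed log location, per the commit message.
const LOG_DIR = join(homedir(), ".opencode", "logs", "codex-plugin");

/** Append one structured JSON log entry to the rolling log file. */
export function writeLogEntry(level: string, message: string, data?: unknown): void {
  try {
    mkdirSync(LOG_DIR, { recursive: true });
    const entry = JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      message,
      data,
    });
    appendFileSync(join(LOG_DIR, "plugin.log"), entry + "\n");
  } catch {
    // File logging must never break the request path; swallow I/O errors.
  }
}
```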

* feat: add gpt-5.1-codex-max support to request transformer

- Add model normalization for all codex-max variants
- Implement xhigh reasoning effort with auto-downgrade for non-max models
- Add Codex Max specific reasoning effort validation and normalization
- Ensure compatibility with existing model configurations
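
A sketch of how the transformer might apply the effort rules described in this PR (xhigh exclusive to codex-max, with downgrades elsewhere); the function names and the exact clamping table are assumptions based on the commit messages.

```ts
type ReasoningEffort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

/** True for any gpt-5.1-codex-max variant (illustrative check). */
function isCodexMax(model: string): boolean {
  return model.includes("codex-max");
}

/**
 * Clamp reasoning effort to what the target model supports.
 * One possible reading of the rules in this PR: xhigh is exclusive to
 * codex-max (other models downgrade it to high), while codex-max itself
 * bumps minimal/none up to low.
 */
export function normalizeReasoningEffort(
  model: string,
  effort: ReasoningEffort,
): ReasoningEffort {
  if (!isCodexMax(model) && effort === "xhigh") return "high";
  if (isCodexMax(model) && (effort === "minimal" || effort === "none")) return "low";
  return effort;
}
```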

* types: add xhigh reasoning effort to TypeScript interfaces

- Add xhigh to ConfigOptions.reasoningEffort union type
- Add xhigh to ReasoningConfig.effort union type
- Enable type-safe usage of extra high reasoning for gpt-5.1-codex-max

* test: add gpt-5.1-codex-max to test-all-models.sh

- Add test case for new flagship Codex Max model
- Verify medium reasoning effort with auto summary and medium verbosity
- Ensure comprehensive testing coverage for all model variants

* test: fix codex-fetcher test headers mock

- Add default Authorization header to createCodexHeaders mock
- Prevent test failures due to missing required headers
- Ensure consistent test environment across all test runs

* test: update logger tests for persistent rolling logging

- Add tests for rolling log file functionality
- Update test structure to handle module caching properly
- Test console logging behavior with environment variables
- Verify error handling for file write failures
- Ensure appendFileSync is called for all log entries

* test: add appendFileSync mock to plugin-config tests

- Add missing appendFileSync mock to prevent test failures
- Ensure all file system operations are properly mocked
- Maintain test isolation and consistency

* test: add appendFileSync mock to prompts-codex tests

- Add appendFileSync mock to prevent test failures from logger changes
- Clear all mocks properly in beforeEach setup
- Ensure test isolation and consistency across test runs

* test: add comprehensive fs mocks to prompts-opencode-codex tests

- Add existsSync, appendFileSync, writeFileSync, mkdirSync mocks
- Clear all mocks in beforeEach for proper test isolation
- Prevent test failures from logger persistent logging changes
- Ensure consistent test environment across all test files

* test: add comprehensive gpt-5.1-codex-max test coverage

- Add model normalization tests for all codex-max variants
- Test xhigh reasoning effort behavior for codex-max vs other models
- Verify reasoning effort downgrade logic (minimal/none → low, xhigh → high)
- Add integration tests for transformRequestBody with xhigh reasoning
- Ensure complete test coverage for new Codex Max functionality

* docs: add specification files for gpt-5.1-codex-max and persistent logging

- Add comprehensive spec for Codex Max integration with xhigh reasoning
- Document persistent logging requirements and implementation plan
- Track requirements, references, and change logs for both features

* fix failing tests

* fix: implement cache isolation to resolve OAuth plugin conflicts

Resolves Issue #25 - Plugin fails with confusing errors if started with the other oauth plugin's cache files

**Root Cause**: Both opencode-openai-codex-auth and @openhax/codex plugins used identical cache file names in ~/.opencode/cache/, causing conflicts when switching between plugins.

**Solution**:
1. **Cache Isolation** (lib/utils/cache-config.ts):
   - Added PLUGIN_PREFIX = "openhax-codex" for unique cache namespace
   - Updated cache files to use plugin-specific prefixes:
     - openhax-codex-instructions.md (was codex-instructions.md)
     - openhax-codex-opencode-prompt.txt (was opencode-codex.txt)
     - Corresponding metadata files with -meta.json suffix

2. **Migration Logic** (lib/prompts/opencode-codex.ts):
   - migrateLegacyCache(): Automatically detects and migrates old cache files
   - validateCacheFormat(): Detects incompatible cache formats from other plugins
   - Enhanced error messages with actionable guidance for cache conflicts

3. **Test Updates**:
   - Updated all test files to use new cache file names
   - All 123 tests passing ✅

**User Experience**:
- Seamless migration: Users switching plugins get automatic cache migration
- Clear error messages: When cache conflicts occur, users get actionable guidance
- No data loss: Existing cache content is preserved during migration

Files modified:
- lib/utils/cache-config.ts - Cache isolation configuration
- lib/prompts/opencode-codex.ts - Migration and validation logic
- test/prompts-opencode-codex.test.ts - Updated cache file paths
- test/prompts-codex.test.ts - Updated cache file paths
- spec/issue-25-oauth-cache-conflicts.md - Implementation spec
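
A simplified sketch of the cache isolation and migration described above; the prefix and file names come from the commit message, while the directory handling and function shape are assumptions.

```ts
import { existsSync, renameSync } from "node:fs";
import { join } from "node:path";

// Names taken from the commit message; the directory layout is an assumption.
const PLUGIN_PREFIX = "openhax-codex";
const LEGACY_TO_PREFIXED: Record<string, string> = {
  "codex-instructions.md": `${PLUGIN_PREFIX}-instructions.md`,
  "opencode-codex.txt": `${PLUGIN_PREFIX}-opencode-prompt.txt`,
};

/** Rename legacy cache files into the plugin-specific namespace, if present. */
export function migrateLegacyCache(cacheDir: string): void {
  for (const [legacy, prefixed] of Object.entries(LEGACY_TO_PREFIXED)) {
    const from = join(cacheDir, legacy);
    const to = join(cacheDir, prefixed);
    if (existsSync(from) && !existsSync(to)) {
      renameSync(from, to); // preserve existing cache content, no data loss
    }
  }
}
```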

* fixed minor type error

* test: remove redundant env reset and header mock

* Reduce console logging to debug flag

* fix: filter ENOENT errors from cache logging to reduce noise

- Add ENOENT filtering in getOpenCodeCodexPrompt cache read error handling
- Add ENOENT filtering in getCachedPromptPrefix error handling
- Prevents noisy error logs for expected first-run scenarios
- Preserves visibility into genuine I/O/parsing problems
- Addresses CodeRabbit review feedback on PR #28
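
A minimal sketch of the ENOENT filtering pattern described above, with an illustrative helper name:

```ts
import { readFileSync } from "node:fs";

/** Read a cache file, logging only unexpected errors (ENOENT is a normal first run). */
export function readCacheFile(
  path: string,
  warn: (msg: string) => void,
): string | undefined {
  try {
    return readFileSync(path, "utf8");
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code;
    if (code !== "ENOENT") {
      warn(`cache read failed for ${path}: ${String(err)}`);
    }
    return undefined;
  }
}
```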

* Use openhax/codex as plugin identifier

* Add fallback sources for OpenCode codex prompt

* Refresh OpenCode prompt cache metadata

* Refactor metrics response helpers and fix JWT decoding

* Tighten session tail slice to user/assistant only

* Improve compaction resilience and cache metrics safety

* Add release process documentation and open issues triage guide

- Add comprehensive RELEASE_PROCESS.md with step-by-step release workflow
- Add open-issues-triage.md for systematic issue management
- Both documents support better project governance and maintenance

* Strengthen tests for cache keys and gpt-5.1 cases

* Soften first-session cache warnings and sync transformed body

* Preseed session prompt cache keys

* Memoize config loading and keep bridge prompts stable

* Code cleanup: removed redundancy, improved tests

Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>

* Fixed test shallow copy issue with deep copy.

Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>

* chore(codex-max): memoize config loading, stabilize bridge prompts, and prep for Codex Max release; update request transforms and auth/session utilities; add lint-warnings spec

* stabilize oauth server tests by completing mocks

* fix: improve token refresh error handling and add debug logging

- Add debug logging to token refresh process in session-manager
- Improve error handling in codex-fetcher for 401 responses
- Fix fetch helper error handling for failed token refresh
- Add comprehensive test coverage for token refresh scenarios
- Add refresh-access-token spec documentation
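
A hedged sketch of the 401-retry-after-refresh flow this commit describes; fetchWithRefresh and refreshTokens are illustrative stand-ins, not the plugin's actual helpers.

```ts
interface OAuthTokens {
  accessToken: string;
  refreshToken: string;
}

/**
 * Retry a request once after refreshing the access token on a 401.
 * refreshTokens is a stand-in for the plugin's real refresh helper.
 */
export async function fetchWithRefresh(
  url: string,
  tokens: OAuthTokens,
  refreshTokens: (t: OAuthTokens) => Promise<OAuthTokens>,
): Promise<Response> {
  const attempt = (access: string) =>
    fetch(url, { headers: { Authorization: `Bearer ${access}` } });

  let response = await attempt(tokens.accessToken);
  if (response.status === 401) {
    const refreshed = await refreshTokens(tokens); // may throw; caller handles failure
    response = await attempt(refreshed.accessToken);
  }
  return response;
}
```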

* Fix test to assert on returned auth object instead of in-place mutation

- Update test/fetch-helpers.test.ts to properly validate refreshAndUpdateToken return value
- Add type guard for OAuth auth type checking
- Aligns test expectations with function's design of returning updated auth object
- All 396 tests pass with no TypeScript errors

* test: add negative test for host-provided prompt_cache_key; fix: ensure explicit Content-Type headers in OAuth server responses

- Add test to verify host-provided prompt_cache_key is preserved over session cache key
- Update OAuth server send helper to always include default Content-Type: text/plain; charset=utf-8
- Change headers parameter type to http.OutgoingHttpHeaders for stronger typing
- Preserve existing HTML response Content-Type override behavior
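
A minimal sketch of a send helper with the default Content-Type behavior described above; only the header default and the http.OutgoingHttpHeaders typing come from the commit message, the helper's shape is an assumption.

```ts
import type * as http from "node:http";

/**
 * Always set a Content-Type, defaulting to text/plain while letting
 * callers (e.g. HTML pages) override it explicitly.
 */
export function send(
  res: http.ServerResponse,
  status: number,
  body: string,
  headers: http.OutgoingHttpHeaders = {},
): void {
  res.writeHead(status, {
    "Content-Type": "text/plain; charset=utf-8",
    ...headers, // an explicit Content-Type here wins
  });
  res.end(body);
}
```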

* fix: clone auth refresh and tighten console logging

* chore: guard disk logging and strengthen clones/roles

* fix: simplify role validation in formatRole function

- Remove hardcoded role whitelist in formatRole()
- Return normalized role directly without validation
- Add PR review documentation for CodeRabbit feedback

* Fix all CodeRabbit review issues from PR #29

## Critical Bug Fixes
- Fix content-type header bug in fetch-helpers.ts - preserve original content-type for non-JSON responses
- Fix cache fallback bug in codex.ts - wrap getLatestReleaseTag() in try/catch to ensure fallback chain works

## Test Improvements
- Remove unused mocks in cache-warming.test.ts areCachesWarm tests
- Fix mock leakage in index.test.ts by resetting sessionManager instance mocks
- Add missing compactionDecision test case in codex-fetcher.test.ts
- Remove redundant test case in codex-fetcher.test.ts

## Code Quality
- Harden logger against JSON.stringify failures with try/catch fallback
- Remove unused error parameter from logToConsole function
- Update type signatures to match new function signatures

## Documentation
- Add comprehensive PR analysis document in spec/pr-29-review-analysis.md

All tests pass (398 passed, 2 skipped) with 82.73% coverage.
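
As an illustration of the JSON.stringify hardening mentioned under Code Quality, a guarded stringify helper might look like this (the helper name is an assumption):

```ts
/**
 * Stringify a log payload without letting serialization errors
 * (circular references, BigInt, etc.) escape into the caller.
 */
export function safeStringify(value: unknown): string {
  try {
    // JSON.stringify can return undefined at runtime (e.g. for functions); coerce it.
    return JSON.stringify(value) ?? "undefined";
  } catch (err) {
    return `[unserializable: ${err instanceof Error ? err.message : String(err)}]`;
  }
}
```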

* 📝 Add docstrings to `release/review-comments`

Docstrings generation was requested by @riatzukiza.

* #34 (comment)

The following files were modified:

* `lib/auth/server.ts`
* `lib/logger.ts`
* `lib/request/fetch-helpers.ts`

* Enhance compaction test coverage and fix linter warning

## Test Improvements
- Enhance compaction decision test in codex-fetcher.test.ts to validate full flow:
  - Verify recordSessionResponseFromHandledResponse called with compacted response
  - Verify fetcher returns the compacted response with correct status/body
  - Ensure complete end-to-end compaction flow validation

## Code Quality
- Fix linter warning in lib/auth/server.ts by prefixing unused parameter with underscore
- Update corresponding type definition in lib/types.ts to match

All tests continue to pass (398 passed, 2 skipped).

* Replace unsafe any cast with type-safe client access in logger

## Type Safety Improvements
- Add OpencodeApp type with proper notify/toast method signatures
- Add OpencodeClientWithApp intersection type for type-safe app access
- Create isOpencodeClientWithApp type guard function
- Replace (loggerClient as any)?.app with type-safe guarded access
- Update emit function to use type guard for loggerClient.app access

## Benefits
- Eliminates unsafe any type casting
- Provides compile-time type checking for app property access
- Maintains backward compatibility with existing OpencodeClient interface
- Follows TypeScript best practices for type guards

All tests continue to pass (398 passed, 2 skipped).
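
A sketch of the type-guard approach this commit describes; the interface members below are placeholders, since the real OpencodeClient SDK types are not shown here.

```ts
// Illustrative shapes only; the real OpencodeClient/SDK types differ.
interface OpencodeApp {
  notify(message: string): void;
}

interface OpencodeClient {
  baseUrl?: string; // stand-in member for the rest of the SDK surface
}

type OpencodeClientWithApp = OpencodeClient & { app: OpencodeApp };

/** Type guard: does this client expose the optional app surface? */
function isOpencodeClientWithApp(client: OpencodeClient): client is OpencodeClientWithApp {
  const candidate = client as Partial<OpencodeClientWithApp>;
  return typeof candidate.app?.notify === "function";
}

/** Emit a notification only when the app surface is actually present. */
export function emit(client: OpencodeClient, message: string): void {
  if (isOpencodeClientWithApp(client)) {
    client.app.notify(message);
  }
}
```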

* Fix type safety in logger module

- Replace unsafe type casting with proper optional chaining
- Update notifyToast to use correct Opencode SDK API structure
- Use client.tui.showToast with proper body object format
- Remove unnecessary type guard function
- All tests pass and TypeScript compilation succeeds

* Clarify waitForCode state validation docs

---------

Co-authored-by: opencode-agent[bot] <opencode-agent[bot]@users.noreply.github.com>
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>