
Refine release workflow and logging #36

Merged
riatzukiza merged 15 commits into staging from ops/release-workflow
Nov 20, 2025

Conversation

@riatzukiza
Collaborator

Summary

  • switch staging release workflow to generate release branches and PRs (including hotfix PR to main) instead of pushing to protected branches
  • improve logging: warn-to-toast routing with word wrapping and delayed personal-use warning; refactor prompt caching/error handling helpers and disable max-lines lint for refactors
  • update tests and add specs outlining complexity reduction, warning toasts, release PR flow, and toast wrap behavior

Testing

  • not run (not requested)

riatzukiza and others added 4 commits November 20, 2025 00:57
* device/stealth: commit local changes in submodule

* Docs: fix test README package name (#18)

* docs: reference @openhax/codex in test README

* Delete spec/issue-11-docs-package.md

* Add fork-aware prompt_cache_key derivation (#19)

* Refactor: Eliminate code duplication and improve maintainability

- Create shared clone utility (lib/utils/clone.ts) to eliminate 3+ duplicate implementations
- Create InputItemUtils (lib/utils/input-item-utils.ts) for centralized text extraction
- Centralize magic numbers in constants with SESSION_CONFIG, CONVERSATION_CONFIG, PERFORMANCE_CONFIG
- Add ESLint cognitive complexity rules (max: 15) to prevent future issues
- Refactor large functions to use shared utilities, reducing complexity
- Update all modules to use centralized utilities and constants
- Remove dead code and unused imports
- All 123 tests pass, no regressions introduced

Code quality improved from B+ to A- with better maintainability.

* testing!

* Address review: plugin config errors, compaction gating, cache key fallback

* Style and metrics cleanup for auth, cache, and sessions

* linting and formatting

* Finalize PR 20 review: shared clone utils and tests

* Fix CI workflow YAML syntax and quoting

* update lint rules

* allow lint warnings without masking errors

* separate linting and formatting CI

* separated formatting workflow from CI yaml and gave it permissions to edit workflow files

* opencode can respond to all pr comments

* Fix test/README.md documentation: update test counts and config file paths

- Update stale test counts to reflect actual numbers:
  * auth.test.ts: 16 → 27 tests
  * config.test.ts: 13 → 16 tests
  * request-transformer.test.ts: 30 → 123 tests
  * logger.test.ts: 5 → 7 tests
  * response-handler.test.ts: unchanged at 10 tests

- Fix broken configuration file paths:
  * config/minimal-opencode.json (was config/minimal-opencode.json)
  * config/full-opencode.json (was config/full-opencode.json)

Both configuration files exist in the config/ directory at repository root.

* 0.1.0

* 0.2.0

* docs: update AGENTS.md for gpt-5.1-codex-max support

- Update overview to reflect new gpt-5.1-codex-max model as default
- Add note about xhigh reasoning effort exclusivity to gpt-5.1-codex-max
- Document expanded model lineup matching Codex CLI

* chore: add v3.3.0 changelog entry for gpt-5.1-codex-max

- Document new Codex Max support with xhigh reasoning
- Note configuration changes and sample updates
- Record automatic reasoning effort downgrade fix for compatibility

* docs: update README for gpt-5.1-codex-max integration

- Add gpt-5.1-codex-max configuration with xhigh reasoning support
- Update model count from 20 to 21 variants
- Expand model comparison table with Codex Max as flagship default
- Add note about xhigh reasoning exclusivity and auto-downgrade behavior

* config: add gpt-5.1-codex-max to full-opencode.json

- Add flagship Codex Max model with 400k context and 128k output limits
- Configure with medium reasoning effort as default
- Include encrypted_content for stateless operation
- Set store: false for ChatGPT backend compatibility

* config: update minimal-opencode.json default to gpt-5.1-codex-max

- Change default model from gpt-5.1-codex to gpt-5.1-codex-max
- Align minimal config with new flagship Codex Max model
- Provide users with best-in-class default experience

* docs: update CONFIG_FIELDS.md for gpt-5.1-codex-max

- Add gpt-5.1-codex-max example configuration
- Document xhigh reasoning effort exclusivity and auto-clamping
- Remove outdated duplicate key example section
- Clean up reasoning effort notes with new xhigh behavior

* docs: add persistent logging note to TESTING.md

- Document new per-request JSON logging and rolling log files
- Note environment variables for enabling live console output
- Help developers debug with comprehensive logging capabilities

* feat: implement persistent rolling logging in logger.ts

- Add rolling log file under ~/.opencode/logs/codex-plugin/
- Write structured JSON entries with timestamps for all log levels
- Maintain per-request stage files for detailed debugging
- Improve error handling and log forwarding to OpenCode app
- Separate console logging controls from file logging

* feat: add gpt-5.1-codex-max support to request transformer

- Add model normalization for all codex-max variants
- Implement xhigh reasoning effort with auto-downgrade for non-max models
- Add Codex Max specific reasoning effort validation and normalization
- Ensure compatibility with existing model configurations

* types: add xhigh reasoning effort to TypeScript interfaces

- Add xhigh to ConfigOptions.reasoningEffort union type
- Add xhigh to ReasoningConfig.effort union type
- Enable type-safe usage of extra high reasoning for gpt-5.1-codex-max

* test: add gpt-5.1-codex-max to test-all-models.sh

- Add test case for new flagship Codex Max model
- Verify medium reasoning effort with auto summary and medium verbosity
- Ensure comprehensive testing coverage for all model variants

* test: fix codex-fetcher test headers mock

- Add default Authorization header to createCodexHeaders mock
- Prevent test failures due to missing required headers
- Ensure consistent test environment across all test runs

* test: update logger tests for persistent rolling logging

- Add tests for rolling log file functionality
- Update test structure to handle module caching properly
- Test console logging behavior with environment variables
- Verify error handling for file write failures
- Ensure appendFileSync is called for all log entries

* test: add appendFileSync mock to plugin-config tests

- Add missing appendFileSync mock to prevent test failures
- Ensure all file system operations are properly mocked
- Maintain test isolation and consistency

* test: add appendFileSync mock to prompts-codex tests

- Add appendFileSync mock to prevent test failures from logger changes
- Clear all mocks properly in beforeEach setup
- Ensure test isolation and consistency across test runs

* test: add comprehensive fs mocks to prompts-opencode-codex tests

- Add existsSync, appendFileSync, writeFileSync, mkdirSync mocks
- Clear all mocks in beforeEach for proper test isolation
- Prevent test failures from logger persistent logging changes
- Ensure consistent test environment across all test files

* test: add comprehensive gpt-5.1-codex-max test coverage

- Add model normalization tests for all codex-max variants
- Test xhigh reasoning effort behavior for codex-max vs other models
- Verify reasoning effort downgrade logic (minimal/none → low, xhigh → high)
- Add integration tests for transformRequestBody with xhigh reasoning
- Ensure complete test coverage for new Codex Max functionality

* docs: add specification files for gpt-5.1-codex-max and persistent logging

- Add comprehensive spec for Codex Max integration with xhigh reasoning
- Document persistent logging requirements and implementation plan
- Track requirements, references, and change logs for both features

* fix failing tests

* fix: implement cache isolation to resolve OAuth plugin conflicts

Resolves Issue #25 - Plugin fails with confusing errors if started with the other oauth plugin's cache files

**Root Cause**: Both opencode-openai-codex-auth and @openhax/codex plugins used identical cache file names in ~/.opencode/cache/, causing conflicts when switching between plugins.

**Solution**:
1. **Cache Isolation** (lib/utils/cache-config.ts):
   - Added PLUGIN_PREFIX = "openhax-codex" for unique cache namespace
   - Updated cache files to use plugin-specific prefixes:
     - openhax-codex-instructions.md (was codex-instructions.md)
     - openhax-codex-opencode-prompt.txt (was opencode-codex.txt)
     - Corresponding metadata files with -meta.json suffix

2. **Migration Logic** (lib/prompts/opencode-codex.ts):
   - migrateLegacyCache(): Automatically detects and migrates old cache files
   - validateCacheFormat(): Detects incompatible cache formats from other plugins
   - Enhanced error messages with actionable guidance for cache conflicts

3. **Test Updates**:
   - Updated all test files to use new cache file names
   - All 123 tests passing ✅

**User Experience**:
- Seamless migration: Users switching plugins get automatic cache migration
- Clear error messages: When cache conflicts occur, users get actionable guidance
- No data loss: Existing cache content is preserved during migration

Files modified:
- lib/utils/cache-config.ts - Cache isolation configuration
- lib/prompts/opencode-codex.ts - Migration and validation logic
- test/prompts-opencode-codex.test.ts - Updated cache file paths
- test/prompts-codex.test.ts - Updated cache file paths
- spec/issue-25-oauth-cache-conflicts.md - Implementation spec

* fixed minor type error

* test: remove redundant env reset and header mock

* Reduce console logging to debug flag

* fix: filter ENOENT errors from cache logging to reduce noise

- Add ENOENT filtering in getOpenCodeCodexPrompt cache read error handling
- Add ENOENT filtering in getCachedPromptPrefix error handling
- Prevents noisy error logs for expected first-run scenarios
- Preserves visibility into genuine I/O/parsing problems
- Addresses CodeRabbit review feedback on PR #28

* Use openhax/codex as plugin identifier

* Add fallback sources for OpenCode codex prompt

* Refresh OpenCode prompt cache metadata

* Refactor metrics response helpers and fix JWT decoding

* Tighten session tail slice to user/assistant only

* Improve compaction resilience and cache metrics safety

* Add release process documentation and open issues triage guide

- Add comprehensive RELEASE_PROCESS.md with step-by-step release workflow
- Add open-issues-triage.md for systematic issue management
- Both documents support better project governance and maintenance

* Strengthen tests for cache keys and gpt-5.1 cases

* Soften first-session cache warnings and sync transformed body

* Preseed session prompt cache keys

* Memoize config loading and keep bridge prompts stable

* Code cleanup: removed redundancy, improved tests

Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>

* Fixed test shallow copy issue with deep copy.

Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>

* chore(codex-max): memoize config loading, stabilize bridge prompts, and prep for Codex Max release; update request transforms and auth/session utilities; add lint-warnings spec

* stabilize oauth server tests by completing mocks

* fix: improve token refresh error handling and add debug logging

- Add debug logging to token refresh process in session-manager
- Improve error handling in codex-fetcher for 401 responses
- Fix fetch helper error handling for failed token refresh
- Add comprehensive test coverage for token refresh scenarios
- Add refresh-access-token spec documentation

* Fix test to assert on returned auth object instead of in-place mutation

- Update test/fetch-helpers.test.ts to properly validate refreshAndUpdateToken return value
- Add type guard for OAuth auth type checking
- Aligns test expectations with function's design of returning updated auth object
- All 396 tests pass with no TypeScript errors

* test: add negative test for host-provided prompt_cache_key; fix: ensure explicit Content-Type headers in OAuth server responses

- Add test to verify host-provided prompt_cache_key is preserved over session cache key
- Update OAuth server send helper to always include default Content-Type: text/plain; charset=utf-8
- Change headers parameter type to http.OutgoingHttpHeaders for stronger typing
- Preserve existing HTML response Content-Type override behavior

* fix: clone auth refresh and tighten console logging

* chore: guard disk logging and strengthen clones/roles

* fix: simplify role validation in formatRole function

- Remove hardcoded role whitelist in formatRole()
- Return normalized role directly without validation
- Add PR review documentation for CodeRabbit feedback

* Fix all CodeRabbit review issues from PR #29

## Critical Bug Fixes
- Fix content-type header bug in fetch-helpers.ts - preserve original content-type for non-JSON responses
- Fix cache fallback bug in codex.ts - wrap getLatestReleaseTag() in try/catch to ensure fallback chain works

## Test Improvements
- Remove unused mocks in cache-warming.test.ts areCachesWarm tests
- Fix mock leakage in index.test.ts by resetting sessionManager instance mocks
- Add missing compactionDecision test case in codex-fetcher.test.ts
- Remove redundant test case in codex-fetcher.test.ts

## Code Quality
- Harden logger against JSON.stringify failures with try/catch fallback
- Remove unused error parameter from logToConsole function
- Update type signatures to match new function signatures

## Documentation
- Add comprehensive PR analysis document in spec/pr-29-review-analysis.md

All tests pass (398 passed, 2 skipped) with 82.73% coverage.

* 📝 Add docstrings to `release/review-comments`

Docstrings generation was requested by @riatzukiza.

* #34 (comment)

The following files were modified:

* `lib/auth/server.ts`
* `lib/logger.ts`
* `lib/request/fetch-helpers.ts`

* Enhance compaction test coverage and fix linter warning

## Test Improvements
- Enhance compaction decision test in codex-fetcher.test.ts to validate full flow:
  - Verify recordSessionResponseFromHandledResponse called with compacted response
  - Verify fetcher returns the compacted response with correct status/body
  - Ensure complete end-to-end compaction flow validation

## Code Quality
- Fix linter warning in lib/auth/server.ts by prefixing unused parameter with underscore
- Update corresponding type definition in lib/types.ts to match

All tests continue to pass (398 passed, 2 skipped).

* Replace unsafe any cast with type-safe client access in logger

## Type Safety Improvements
- Add OpencodeApp type with proper notify/toast method signatures
- Add OpencodeClientWithApp intersection type for type-safe app access
- Create isOpencodeClientWithApp type guard function
- Replace (loggerClient as any)?.app with type-safe guarded access
- Update emit function to use type guard for loggerClient.app access

## Benefits
- Eliminates unsafe any type casting
- Provides compile-time type checking for app property access
- Maintains backward compatibility with existing OpencodeClient interface
- Follows TypeScript best practices for type guards

All tests continue to pass (398 passed, 2 skipped).

* Fix type safety in logger module

- Replace unsafe type casting with proper optional chaining
- Update notifyToast to use correct Opencode SDK API structure
- Use client.tui.showToast with proper body object format
- Remove unnecessary type guard function
- All tests pass and TypeScript compilation succeeds

* Clarify waitForCode state validation docs

---------

Co-authored-by: opencode-agent[bot] <opencode-agent[bot]@users.noreply.github.com>
Co-authored-by: riatzukiza <riatzukiza@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai bot commented Nov 20, 2025

Summary by CodeRabbit

  • New Features

    • Added intelligent message wrapping for toast notifications to improve readability with long messages.
  • Bug Fixes

    • Improved error messages to include full URLs for better debugging context.
    • Enhanced caching system with fallback mechanisms for more reliable instruction and prompt retrieval.
  • Improvements

    • Refined tool normalization and model classification for better request handling.
    • Updated error handling to provide more structured, user-friendly error responses.

✏️ Tip: You can customize this high-level summary in your review settings.

Walkthrough

Adds toast-aware logging with wrapped messages and tests; refactors request transformation into modular utilities (prompt-cache, compaction, input-filters, tooling, model-config), implements deterministic prompt-cache keys, compaction helpers, tool normalization, and hardens Codex prompt fetching with ETag/304-aware fetch and resilient cache fallbacks.

Changes

  • Logger with Toast Support — lib/logger.ts, test/logger.test.ts: Runtime supportsToast detection; gate forwarding to app.log/console when toasting; notifyToast signature adjusted internally; adds wrapToastMessage to split long messages; tests validate toast usage and wrapping.
  • Codex Instruction Caching — lib/prompts/codex.ts, lib/prompts/opencode-codex.ts: Introduces layered caching helpers (session/file metadata, freshness checks), centralized fetch with If-None-Match/304 handling, and fallbacks to cached or bundled instructions; refactors retrieval flow to prefer session → file → remote → bundled.
  • Compaction Helpers & Fetch Integration — lib/request/compaction-helpers.ts, lib/request/fetch-helpers.ts: New compaction APIs/types (CompactionSettings, CompactionOptions, applyCompactionIfNeeded), builder for compaction settings, prompt-cache-key injection, history compaction application, and centralized error enrichment (rate-limit parsing, friendly messages).
  • Input Filtering & Bridge Logic — lib/request/input-filters.ts: New input-filtering utilities: filterInput, OpenCode system prompt detection/sanitization, bridge requirement analysis, addCodexBridgeMessage, and addToolRemapMessage with session caching and hashing guards.
  • Model Config & Reasoning — lib/request/model-config.ts: Adds model normalization/classification (ModelFlags, normalizeModel), per-model config merging (getModelConfig), and reasoning-effort selection/normalization (getReasoningConfig, etc.).
  • Prompt Cache Keying — lib/request/prompt-cache.ts: Deterministic cache-key derivation and provenance: ensurePromptCacheKey, derivePromptCacheKeyFromBody, fallback hash generation, and logCacheKeyDecision with structured PromptCacheKeyResult.
  • Tool Normalization & Tooling — lib/request/tool-normalizer.ts, lib/request/tooling.ts: normalizeToolsForResponses to unify varied tool representations and normalizeToolsForCodexBody to apply/strip normalized tools, set tool_choice and parallel_tool_calls.
  • Request Orchestration / Exports — lib/request/request-transformer.ts: Moves inline helpers into new modules; adds TransformRequestOptions and TransformResult; re-exports utilities from new modules and wires the modular flow.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Host
    participant RequestTransformer
    participant InputFilters
    participant CompactionHelpers
    participant PromptCache
    participant Tooling
    participant FetchHelpers

    Host->>RequestTransformer: transformRequestBody(body, options)
    RequestTransformer->>InputFilters: filterOpenCodeSystemPrompts(body.input)
    InputFilters-->>RequestTransformer: filtered input
    RequestTransformer->>InputFilters: addCodexBridgeMessage(input, hasTools, session)
    InputFilters-->>RequestTransformer: input with bridge?
    RequestTransformer->>CompactionHelpers: applyCompactionIfNeeded(body, options.compaction)
    alt compaction triggered
        CompactionHelpers-->>RequestTransformer: compactionDecision + new input
    else no compaction
        CompactionHelpers-->>RequestTransformer: undefined
    end
    RequestTransformer->>PromptCache: ensurePromptCacheKey(body)
    PromptCache-->>RequestTransformer: cacheKeyResult
    RequestTransformer->>Tooling: normalizeToolsForCodexBody(body, skipTransforms)
    Tooling-->>RequestTransformer: tools normalized / cleaned
    RequestTransformer->>FetchHelpers: transformRequestForCodex(bodyWithCacheKey)
    FetchHelpers-->>Host: transformed request result
```
```mermaid
sequenceDiagram
    actor App
    participant Logger
    participant Toast as UI.Toast
    participant Console

    App->>Logger: logWarn(message)
    Logger->>Logger: detect supportsToast
    alt supports toast
        Logger->>Logger: wrapToastMessage(message)
        Logger->>Toast: showToast(wrappedMessage)
        Toast-->>Logger: ok
        Note over Console: console.warn & app.log NOT forwarded for warn
    else no toast
        Logger->>Console: console.warn(message)
        Logger->>Logger: forward to app.log
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Key areas to focus on:
    • Prompt cache key derivation and fallback hashing (lib/request/prompt-cache.ts).
    • Compaction trigger heuristics, serialization, and preserved system-message handling (lib/request/compaction-helpers.ts).
    • Input-filtering, bridge injection and caching to avoid duplicate or missing bridge messages (lib/request/input-filters.ts).
    • ETag/304 handling, cache metadata writes, and fallback ordering in Codex prompt fetchers (lib/prompts/codex.ts, lib/prompts/opencode-codex.ts).
    • Logger toast gating semantics and tests asserting suppression of console/app.log for warned toasts (lib/logger.ts, test/logger.test.ts).

Possibly related PRs

Poem

Toasts unwrap and cache keys chime,
Bridges stitch a tidy rhyme.
Prompts compact and tools align,
Warnings wrapped in patient time.
Small changes, smoother climb.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 12.50%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed — The title 'Refine release workflow and logging' accurately reflects the main changes: improvements to the release workflow and logging system with toast notifications.
  • Description check ✅ Passed — The description directly addresses key changes in the PR, including release workflow updates, logging improvements with toast routing and wrapping, refactoring of prompt caching/error handlers, and test updates.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e6f6a1a and 7a1b4b2.

📒 Files selected for processing (4)
  • lib/prompts/codex.ts (2 hunks)
  • lib/request/request-transformer.ts (2 hunks)
  • test/prompts-codex.test.ts (4 hunks)
  • test/request-transformer.test.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
lib/prompts/codex.ts (5)
lib/types.ts (1)
  • CacheMetadata (223-228)
lib/utils/file-system-utils.ts (3)
  • safeReadFile (56-62)
  • safeWriteFile (42-48)
  • fileExistsAndNotEmpty (69-76)
lib/cache/session-cache.ts (2)
  • getCodexCacheKey (83-85)
  • codexInstructionsCache (74-74)
lib/utils/cache-config.ts (1)
  • CACHE_TTL_MS (11-11)
lib/logger.ts (2)
  • logError (116-118)
  • logWarn (112-114)
lib/request/request-transformer.ts (3)
lib/request/compaction-helpers.ts (2)
  • CompactionOptions (20-25)
  • applyCompactionIfNeeded (84-109)
lib/types.ts (1)
  • RequestBody (147-169)
lib/compaction/compaction-executor.ts (1)
  • CompactionDecision (5-15)
🔇 Additional comments (8)
test/prompts-codex.test.ts (1)

115-115: LGTM! Improved test coverage for warning paths.

The addition of console.warn monitoring correctly validates the new logging behavior where fallback scenarios use warnings instead of errors. Error messages now include the specific GitHub URL being fetched, improving debuggability.

Also applies to: 140-140, 142-146, 231-231, 247-247, 249-251

lib/prompts/codex.ts (3)

41-86: LGTM! Well-structured helper functions with proper error handling.

The new helper functions (readCacheMetadata, loadSessionFromMetadata, cacheIsFresh, readCachedInstructions, loadBundledInstructions) cleanly separate concerns and handle errors appropriately. The try-catch in loadBundledInstructions prevents crashes on corrupted installations as flagged in previous reviews.


88-166: LGTM! Robust fetch and fallback orchestration.

The fetchInstructionsFromGithub and fetchInstructionsWithFallback functions implement proper ETag-based conditional requests, handle 304 responses correctly, and maintain a resilient fallback chain (remote → file cache → bundled). The use of logWarn for expected degraded-mode fallbacks (lines 155, 163) appropriately distinguishes them from actual errors.


187-237: LGTM! Refactored cache-driven flow is resilient and modular.

The main getCodexInstructions function now follows a clear, layered approach:

  1. Session cache (in-memory)
  2. Fresh file cache
  3. Latest release tag fetch with graceful fallback to existing cache on API failure (lines 206-212)
  4. Remote fetch with fallback chain

This addresses previous concerns about preferring existing cache over bundled fallback during GitHub API outages, improving offline resilience.
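
A minimal sketch of that layered lookup order, in TypeScript. The helper names (readCacheMetadata, cacheIsFresh, fetchInstructionsWithFallback, etc.) echo the review text, but their signatures and the dependency-injection shape here are assumptions, not the plugin's actual code:

```typescript
type CacheMetadataSketch = { etag?: string; lastChecked: number };

async function getInstructionsSketch(deps: {
  sessionGet: () => string | undefined;
  readMetadata: () => CacheMetadataSketch | null;
  cacheIsFresh: (meta: CacheMetadataSketch, ttlMs: number) => boolean;
  readCachedInstructions: () => string | null;
  getLatestReleaseTag: () => Promise<string>;
  fetchWithFallback: (tag: string, etag?: string) => Promise<string>;
  loadBundled: () => string;
  ttlMs: number;
}): Promise<string> {
  // 1. In-memory session cache wins outright.
  const fromSession = deps.sessionGet();
  if (fromSession) return fromSession;

  // 2. A fresh on-disk cache is served without a network round trip.
  const meta = deps.readMetadata();
  if (meta && deps.cacheIsFresh(meta, deps.ttlMs)) {
    const cached = deps.readCachedInstructions();
    if (cached) return cached;
  }

  // 3. Resolve the latest release tag; on API failure prefer any existing
  //    disk cache over the bundled copy (the offline-resilience point above).
  let tag: string;
  try {
    tag = await deps.getLatestReleaseTag();
  } catch {
    return deps.readCachedInstructions() ?? deps.loadBundled();
  }

  // 4. Remote fetch with its own cache → bundled fallback chain.
  return deps.fetchWithFallback(tag, meta?.etag);
}
```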

test/request-transformer.test.ts (1)

37-40: Verify the behavior change for unknown models.

The test now expects unknown models to be preserved as-is rather than remapped to gpt-5.1. This is a behavioral change in the public API:

  • Old: unknown-model → gpt-5.1 (safe fallback)
  • New: unknown-model → unknown-model (pass-through)

While this increases flexibility (allows using new models without code updates), it means invalid model names now reach the API and may cause runtime errors. Please confirm this is the intended behavior and that downstream error handling is adequate.

lib/request/request-transformer.ts (3)

25-36: LGTM! Well-documented interfaces with clear semantics.

The new TransformRequestOptions and TransformResult interfaces are properly documented:

  • Line 26 clarifies that preserveIds only applies when conversation transforms run (not during compaction bypass scenarios, though compaction now applies filtering separately per commit 53f43e9)
  • Line 33 clarifies that body is mutated in place rather than cloned

This addresses previous review comments about documenting mutation semantics.


89-93: LGTM! Compaction now correctly applies ID filtering.

The applyCompactionIfNeeded call now receives preserveIds via the spread on line 91. According to the relevant code snippet from compaction-helpers.ts (lines 99-100), compaction now calls filterInput before assigning to body.input, which resolves the previous issue where IDs leaked when compaction was active.


3-23: LGTM! Clean modularization improves maintainability.

The refactor successfully delegates specialized concerns to dedicated modules:

  • input-filters.ts: Input filtering, bridge/tool message injection
  • model-config.ts: Model normalization and configuration resolution
  • compaction-helpers.ts: Compaction orchestration
  • prompt-cache.ts: Cache key derivation
  • tooling.ts: Tool normalization

The public API is preserved via re-exports, maintaining backward compatibility while improving code organization.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/logger.ts (1)

26-39: Warn-level toast routing is correct; align showToast typing with usage

Functionally, the new emit flow looks good:

  • All levels still hit the rolling log when logging/debug is enabled.
  • When a TUI client is available, warn entries skip app.log and console, and instead route to notifyToast, matching the new tests.
  • Errors always toast, and notifyToast falls back to logToConsole("warn", ...) on failures, so you still get visibility if the UI path blows up.
  • wrapToastMessage behaves as expected for normal, space-delimited text and keeps lines within the intended width.

A couple of minor refinements to consider:

  1. Type shape for showToast

    OpencodeClientWithTui currently declares:

```typescript
showToast?: (args: { message: string; variant?: "success" | "error" | "warning" | "info" }) => void;
```

    but notifyToast calls:

```typescript
loggerClient.tui.showToast({
  body: { title: "...", message: wrappedMessage, variant },
});
```

    To keep TypeScript honest and in sync with the actual SDK, it would be safer to define the argument type in terms of the real client type, e.g. via Parameters<OpencodeClient["tui"]["showToast"]>[0] (if available) or by updating OpencodeClientWithTui to match the concrete { body: { ... } } shape you’re using.

  2. Edge case: extremely long single tokens

    wrapToastMessage splits on whitespace and wraps by word, which is perfect for normal sentences. For extraordinarily long, unbroken strings (no spaces), a single “word” can still exceed maxWidth. If you expect such payloads (e.g., long IDs, stack traces), you might want to add a fallback that hard-splits words longer than maxWidth into chunks.

Neither point is a blocker, but addressing (1) in particular will prevent subtle type drift between this helper and @opencode-ai/sdk.

Also applies to: 127-165, 179-224
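
For point (2), a possible hard-split fallback looks like the sketch below. The 72-character width matches the tests discussed elsewhere in this review; the function name and signature are assumed rather than taken from lib/logger.ts:

```typescript
function wrapWithHardSplit(message: string, maxWidth = 72): string {
  const words = message.split(/\s+/).flatMap((word) => {
    // Break any single token longer than maxWidth into maxWidth-sized chunks.
    if (word.length <= maxWidth) return [word];
    const chunks: string[] = [];
    for (let i = 0; i < word.length; i += maxWidth) {
      chunks.push(word.slice(i, i + maxWidth));
    }
    return chunks;
  });

  const lines: string[] = [];
  let current = "";
  for (const word of words) {
    const candidate = current ? `${current} ${word}` : word;
    if (candidate.length > maxWidth && current) {
      lines.push(current);
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) lines.push(current);
  return lines.join("\n");
}
```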

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 83734e2 and e2f9d65.

⛔ Files ignored due to path filters (10)
  • .github/workflows/staging-release-prep.yml is excluded by none and included by none
  • eslint.config.mjs is excluded by none and included by none
  • index.ts is excluded by none and included by none
  • package-lock.json is excluded by !**/package-lock.json and included by none
  • package.json is excluded by none and included by none
  • spec/complexity-reduction.md is excluded by none and included by none
  • spec/log-warning-toasts.md is excluded by none and included by none
  • spec/release-pr-flow.md is excluded by none and included by none
  • spec/request-transformer-refactor.md is excluded by none and included by none
  • spec/toast-word-wrap.md is excluded by none and included by none
📒 Files selected for processing (13)
  • lib/logger.ts (5 hunks)
  • lib/prompts/codex.ts (2 hunks)
  • lib/prompts/opencode-codex.ts (2 hunks)
  • lib/request/compaction-helpers.ts (1 hunks)
  • lib/request/fetch-helpers.ts (6 hunks)
  • lib/request/input-filters.ts (1 hunks)
  • lib/request/model-config.ts (1 hunks)
  • lib/request/prompt-cache.ts (1 hunks)
  • lib/request/request-transformer.ts (2 hunks)
  • lib/request/tool-normalizer.ts (1 hunks)
  • lib/request/tooling.ts (1 hunks)
  • test/logger.test.ts (2 hunks)
  • test/session-manager.test.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (12)
lib/request/tooling.ts (2)
lib/types.ts (1)
  • RequestBody (147-169)
lib/request/tool-normalizer.ts (1)
  • normalizeToolsForResponses (139-148)
lib/prompts/opencode-codex.ts (3)
lib/logger.ts (2)
  • logError (116-118)
  • logWarn (112-114)
lib/utils/cache-config.ts (2)
  • CACHE_TTL_MS (11-11)
  • PLUGIN_PREFIX (26-26)
lib/cache/session-cache.ts (1)
  • openCodePromptCache (75-75)
test/logger.test.ts (1)
lib/logger.ts (3)
  • configureLogger (62-83)
  • logWarn (112-114)
  • flushRollingLogsForTest (120-125)
lib/prompts/codex.ts (5)
lib/types.ts (1)
  • CacheMetadata (223-228)
lib/utils/file-system-utils.ts (3)
  • safeReadFile (56-62)
  • safeWriteFile (42-48)
  • fileExistsAndNotEmpty (69-76)
lib/cache/session-cache.ts (2)
  • getCodexCacheKey (83-85)
  • codexInstructionsCache (74-74)
lib/utils/cache-config.ts (1)
  • CACHE_TTL_MS (11-11)
lib/logger.ts (2)
  • logError (116-118)
  • logWarn (112-114)
test/session-manager.test.ts (1)
lib/types.ts (1)
  • SessionContext (199-205)
lib/request/compaction-helpers.ts (5)
lib/utils/clone.ts (1)
  • cloneInputItems (31-42)
lib/compaction/compaction-executor.ts (1)
  • CompactionDecision (5-15)
lib/utils/input-item-utils.ts (1)
  • countConversationTurns (116-118)
lib/compaction/codex-compaction.ts (4)
  • approximateTokenCount (26-35)
  • serializeConversation (55-85)
  • buildCompactionPromptItems (87-99)
  • collectSystemMessages (101-106)
lib/types.ts (1)
  • RequestBody (147-169)
lib/logger.ts (1)
lib/constants.ts (1)
  • PLUGIN_NAME (7-7)
lib/request/input-filters.ts (7)
lib/utils/input-item-utils.ts (1)
  • extractTextFromItem (16-27)
lib/prompts/opencode-codex.ts (1)
  • getOpenCodeCodexPrompt (190-261)
lib/prompts/codex-opencode-bridge.ts (1)
  • CODEX_OPENCODE_BRIDGE (11-125)
lib/types.ts (1)
  • SessionContext (199-205)
lib/cache/prompt-fingerprinting.ts (4)
  • hasBridgePromptInConversation (27-46)
  • cacheBridgeDecision (109-117)
  • getCachedBridgeDecision (88-101)
  • generateContentHash (15-17)
lib/logger.ts (1)
  • logDebug (104-106)
lib/prompts/codex.ts (1)
  • TOOL_REMAP_MESSAGE (232-294)
lib/request/prompt-cache.ts (2)
lib/types.ts (1)
  • RequestBody (147-169)
lib/logger.ts (3)
  • logDebug (104-106)
  • logInfo (108-110)
  • logWarn (112-114)
lib/request/model-config.ts (1)
lib/types.ts (3)
  • UserConfig (40-47)
  • ConfigOptions (52-57)
  • ReasoningConfig (62-65)
lib/request/request-transformer.ts (3)
lib/request/compaction-helpers.ts (2)
  • CompactionOptions (19-23)
  • applyCompactionIfNeeded (82-106)
lib/types.ts (1)
  • RequestBody (147-169)
lib/compaction/compaction-executor.ts (1)
  • CompactionDecision (5-15)
lib/request/fetch-helpers.ts (6)
lib/types.ts (3)
  • PluginConfig (6-35)
  • RequestBody (147-169)
  • SessionContext (199-205)
lib/session/session-manager.ts (1)
  • SessionManager (174-522)
lib/compaction/codex-compaction.ts (1)
  • detectCompactionCommand (37-53)
lib/logger.ts (1)
  • logRequest (85-102)
lib/constants.ts (1)
  • LOG_STAGES (56-61)
lib/request/request-transformer.ts (1)
  • transformRequestBody (74-134)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Mutation Tests
🔇 Additional comments (8)
lib/prompts/codex.ts (1)

41-121: Caching helpers and GitHub fetch flow look consistent

The new metadata/session helpers and fetchInstructionsFromGithub / fetchInstructionsWithFallback wiring are internally consistent: ETag handling, session cache hydration, and disk metadata updates all line up with the CacheMetadata shape and the fileExistsAndNotEmpty checks. I don’t see correctness problems in these helpers.
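
A rough sketch of the ETag/304 handling described here, assuming a plain fetch call; the URL handling and cache shape are illustrative, not the actual fetchInstructionsFromGithub signature:

```typescript
async function fetchInstructionsConditional(
  url: string,
  cached: { content: string; etag?: string } | null,
): Promise<{ content: string; etag?: string; fromCache: boolean }> {
  const headers: Record<string, string> = {};
  if (cached?.etag) headers["If-None-Match"] = cached.etag;

  const res = await fetch(url, { headers });

  // 304 Not Modified: keep the cached content, just refresh freshness metadata.
  if (res.status === 304 && cached) {
    return { content: cached.content, etag: cached.etag, fromCache: true };
  }
  if (!res.ok) {
    throw new Error(`Failed to fetch instructions from ${url}: ${res.status}`);
  }
  return {
    content: await res.text(),
    etag: res.headers.get("etag") ?? undefined,
    fromCache: false,
  };
}
```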

lib/request/compaction-helpers.ts (1)

25-105: Compaction decision and body mutation logic look sound

The compaction flow is coherent:

  • maybeBuildCompactionPrompt cleanly separates command-triggered vs auto-triggered compaction and guards on settings.enabled.
  • Auto mode correctly checks both approximate tokens and a minimum turn count before firing.
  • applyCompactionIfNeeded replaces body.input with the compaction prompt and removes tools / tool_choice / parallel_tool_calls, which is appropriate for a pure-text compaction request.

The extra settings.enabled check inside maybeBuildCompactionPrompt is slightly redundant given the guard in applyCompactionIfNeeded, but it’s harmless and defensive. No blockers here.
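
For reference, the auto-trigger guard described above can be sketched as follows; the settings field names are assumptions based on this review, not the real CompactionSettings interface:

```typescript
interface CompactionSettingsSketch {
  enabled: boolean;
  tokenThreshold: number;
  minTurns: number;
}

// Auto compaction fires only when both the approximate token count and the
// conversation turn count exceed their thresholds, and the feature is enabled.
function shouldAutoCompact(
  settings: CompactionSettingsSketch,
  approximateTokens: number,
  conversationTurns: number,
): boolean {
  if (!settings.enabled) return false;
  return (
    approximateTokens >= settings.tokenThreshold &&
    conversationTurns >= settings.minTurns
  );
}
```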

test/session-manager.test.ts (1)

155-165: Metrics snapshot test rename is safe

Renaming context to _context here is purely cosmetic and still ensures applyRequest is called so the session manager has state before getMetrics(). No behavioral change and no issues.

test/logger.test.ts (1)

132-184: New warn→toast and wrapping tests give good coverage

These two tests nicely exercise the new behavior:

  • With a TUI client, logWarn drives tui.showToast and avoids both app.log and console.warn.
  • Long messages are wrapped into multiple lines with a hard cap at 72 characters per line, with no regressions to app/console logging.

This gives solid regression protection around the new toast routing and formatting.

lib/prompts/opencode-codex.ts (1)

87-181: Refactored OpenCode prompt caching and fetch fallbacks look solid

The new layering around readCachedPrompt, cacheIsFresh, and fetchPromptFromUrl is consistent and resilient:

  • Disk meta/content reads are centralized with clear error handling.
  • TTL checks use lastChecked as intended, and you validate cache shape to avoid mixing foreign plugin caches.
  • The 304 path refreshes lastChecked and the session cache without touching the content, and the multi-URL fetch loop correctly falls back to cached content (updating meta) when both URLs fail.

Overall, this is a clear improvement in structure and robustness over inlined logic; I don’t see correctness issues here.

Also applies to: 209-256
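
A small sketch of the lastChecked-based TTL check described above; the metadata shape is inferred from this review rather than copied from lib/prompts/opencode-codex.ts:

```typescript
interface PromptCacheMetaSketch {
  lastChecked: number; // epoch millis of the last successful check
  etag?: string;
}

function cacheIsFreshSketch(meta: PromptCacheMetaSketch, ttlMs: number, now = Date.now()): boolean {
  return now - meta.lastChecked < ttlMs;
}

// Example: with a 24-hour TTL, a cache checked an hour ago is still fresh.
const fresh = cacheIsFreshSketch({ lastChecked: Date.now() - 60 * 60 * 1000 }, 24 * 60 * 60 * 1000);
```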

lib/request/prompt-cache.ts (1)

6-203: No blocker; prompt cache key derivation correctly separates concerns.

Verification confirms the design is sound. The code intentionally coordinates between ensurePromptCacheKey() and SessionManager:

  • ensurePromptCacheKey() generates or normalizes keys during stateless request transformation (e.g., "cache_meta-conv-123" or "cache_<hash>").
  • SessionManager derives its own keys (e.g., "conv-123" or "conv-prefix-fork::prefix::..."), explicitly marked "Keep in sync" per session-manager.ts:153–154.
  • When active, SessionManager.applyRequest() overwrites body.prompt_cache_key with its own value, ensuring session-managed flows take precedence.
  • Tests confirm both paths work: fallback keys match /^cache_/, and session keys use the expected format.
  • Both systems respect existing host-provided keys and avoid redundant overwrites (fetch-helpers.ts).

The key formats are naturally distinct and the coordination is documented, so there is no hidden coupling.
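
As an illustration of the fallback path, a hashed cache_-prefixed key can be derived roughly as below. The sha256-over-JSON choice and the function name are assumptions; only the cache_ prefix and the "respect existing host-provided keys" behavior come from the review above:

```typescript
import { createHash } from "node:crypto";

function ensurePromptCacheKeySketch(body: { prompt_cache_key?: string; input?: unknown }): string {
  // Respect an existing host-provided key.
  if (body.prompt_cache_key) return body.prompt_cache_key;

  // Otherwise derive a stable fallback key; the /^cache_/ format stays
  // distinct from SessionManager's "conv-*" keys.
  const hash = createHash("sha256")
    .update(JSON.stringify(body.input ?? ""))
    .digest("hex")
    .slice(0, 16);
  body.prompt_cache_key = `cache_${hash}`;
  return body.prompt_cache_key;
}
```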

lib/request/request-transformer.ts (2)

16-23: Re-export surface looks good; keep this as the stable entrypoint

Re-exporting input-filter and model-config helpers here keeps request-transformer as the main integration surface while letting their implementations live in dedicated modules. This is a reasonable way to preserve import paths through the refactor; just ensure any external docs reference this file as the canonical entrypoint rather than the new internal modules.


74-81: Return type integration is complete and correct—no action needed

The API change from Promise<RequestBody> to Promise<TransformResult> has been properly integrated throughout the codebase:

  • Function implementation (lines 74–133) correctly returns { body, compactionDecision }
  • In-place mutation semantics preserved: The body is still mutated during transformation (lines 97–132), then returned within the result object
  • Production code updated: fetch-helpers.ts (lines 189, 207, 210) correctly accesses transformResult.body and its properties
  • Tests already adapted: Both request-transformer.test.ts and request-transformer-tools-normalization.test.ts extract .body from the result, confirming the new contract is in place

All call sites are consistent with the new return shape. The function maintains its core semantics while cleanly separating the transformed body from the compaction decision.

@riatzukiza riatzukiza changed the base branch from main to staging November 20, 2025 17:31
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e2f9d65 and 41ff1e9.

⛔ Files ignored due to path filters (4)
  • .github/workflows/pr-auto-base.yml is excluded by none and included by none
  • .github/workflows/staging-release-prep.yml is excluded by none and included by none
  • spec/merge-conflict-resolution.md is excluded by none and included by none
  • spec/release-automerge-and-tags.md is excluded by none and included by none
📒 Files selected for processing (2)
  • lib/request/fetch-helpers.ts (6 hunks)
  • test/logger.test.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
test/logger.test.ts (1)
lib/logger.ts (3)
  • configureLogger (62-83)
  • logWarn (112-114)
  • flushRollingLogsForTest (120-125)
lib/request/fetch-helpers.ts (3)
lib/types.ts (3)
  • PluginConfig (6-35)
  • RequestBody (147-169)
  • SessionContext (199-205)
lib/session/session-manager.ts (1)
  • SessionManager (174-522)
lib/request/request-transformer.ts (1)
  • transformRequestBody (74-134)
🔇 Additional comments (2)
test/logger.test.ts (2)

1-1: No blockers: type-only OpencodeClient import fits usage

Using a type-only import here is appropriate: it gives you type safety for the mocked client without impacting runtime or test isolation.


132-157: Toast routing behavior is well covered

This test cleanly exercises the “TUI present” path: it verifies that logWarn prefers tui.showToast and suppresses both app.log and console.warn. The setup matches configureLogger’s contract and looks stable.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2130ff and 53f43e9.

📒 Files selected for processing (4)
  • lib/request/compaction-helpers.ts (1 hunks)
  • lib/request/fetch-helpers.ts (6 hunks)
  • lib/request/model-config.ts (1 hunks)
  • lib/request/request-transformer.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
lib/request/model-config.ts (1)
lib/types.ts (3)
  • UserConfig (40-47)
  • ConfigOptions (52-57)
  • ReasoningConfig (62-65)
lib/request/fetch-helpers.ts (5)
lib/types.ts (3)
  • PluginConfig (6-35)
  • RequestBody (147-169)
  • SessionContext (199-205)
lib/compaction/codex-compaction.ts (1)
  • detectCompactionCommand (37-53)
lib/logger.ts (1)
  • logRequest (85-102)
lib/constants.ts (1)
  • LOG_STAGES (56-61)
lib/request/request-transformer.ts (1)
  • transformRequestBody (78-141)
lib/request/compaction-helpers.ts (6)
lib/utils/clone.ts (1)
  • cloneInputItems (31-42)
lib/compaction/compaction-executor.ts (1)
  • CompactionDecision (5-15)
lib/utils/input-item-utils.ts (1)
  • countConversationTurns (116-118)
lib/compaction/codex-compaction.ts (4)
  • approximateTokenCount (26-35)
  • serializeConversation (55-85)
  • buildCompactionPromptItems (87-99)
  • collectSystemMessages (101-106)
lib/types.ts (1)
  • RequestBody (147-169)
lib/request/input-filters.ts (1)
  • filterInput (16-46)
lib/request/request-transformer.ts (3)
lib/request/compaction-helpers.ts (2)
  • CompactionOptions (20-25)
  • applyCompactionIfNeeded (84-109)
lib/types.ts (1)
  • RequestBody (147-169)
lib/compaction/compaction-executor.ts (1)
  • CompactionDecision (5-15)
🔇 Additional comments (26)
lib/request/fetch-helpers.ts (10)

102-108: LGTM: Compaction settings helper is clean.

The centralization of compaction settings derivation is straightforward and provides sensible defaults.


110-120: LGTM: Prompt cache key logic is sound.

The function correctly prioritizes host-provided keys and safely applies session cache keys when appropriate.


122-131: LGTM: Compaction history application is properly guarded.

The early returns for disabled settings and manual commands are appropriate.


165-176: Past review concern addressed: duplicate logic removed.

The refactoring successfully eliminates the redundant prompt cache key checks and compaction history applications. The flow now uses buildCompactionSettings, applyPromptCacheKey, and applyCompactionHistory as single sources of truth.


271-277: LGTM: Safe JSON parser is straightforward.

The try-catch wrapper provides the expected fallback behavior for error parsing.


279-297: LGTM: Rate limit parsing is robust.

The header parsing safely extracts primary and secondary bucket metadata and correctly returns undefined when no rate limit headers are present.
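
A hedged sketch of this kind of header parsing; the x-ratelimit-* header names and bucket shape are assumptions, not the plugin's actual fields:

```typescript
interface RateLimitBucket {
  remaining?: number;
  resetSeconds?: number;
}

function parseRateLimitsSketch(headers: Headers): { primary?: RateLimitBucket } | undefined {
  const remaining = headers.get("x-ratelimit-remaining");
  const reset = headers.get("x-ratelimit-reset");
  // Return undefined when no rate-limit headers are present at all.
  if (remaining === null && reset === null) return undefined;
  return {
    primary: {
      remaining: remaining !== null ? Number(remaining) : undefined,
      resetSeconds: reset !== null ? Number(reset) : undefined,
    },
  };
}
```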


300-302: LGTM: Usage limit detection is appropriate.

The regex pattern correctly identifies the expected usage and rate limit error codes.


304-324: LGTM: Friendly message builder is well-crafted.

The function gracefully handles various reset timestamp formats and produces clear, actionable guidance for users.


326-354: LGTM: Error enrichment is comprehensive.

The function properly parses errors, extracts rate limits, identifies usage limit errors, and builds friendly messages. The check if (!parsed) treats scalar JSON (e.g., 0, false) as non-JSON, which is acceptable for typical Codex error responses that should be objects.
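
The overall enrichment flow can be sketched as below, assuming rate limits have already been parsed upstream; the friendly message text and field names are illustrative, not the plugin's actual output:

```typescript
function enrichErrorBodySketch(
  rawBody: string,
  rateLimits?: Record<string, unknown>,
): { body: string; isJson: boolean } {
  let parsed: Record<string, unknown> | undefined;
  try {
    const value: unknown = JSON.parse(rawBody);
    if (value && typeof value === "object") parsed = value as Record<string, unknown>;
  } catch {
    parsed = undefined;
  }
  // Non-JSON (or scalar JSON) bodies pass through untouched.
  if (!parsed) return { body: rawBody, isJson: false };

  if (rateLimits) parsed.rate_limits = rateLimits;

  const code = String((parsed.error as { code?: string } | undefined)?.code ?? "");
  if (/usage_limit|rate_limit/.test(code)) {
    parsed.friendly_message =
      "Usage or rate limit reached; wait for the reset window before retrying.";
  }
  return { body: JSON.stringify(parsed), isJson: true };
}
```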


362-382: LGTM: Error response handling correctly applies enrichment.

The function appropriately uses enrichErrorBody, logs the enriched payload, and sets the content-type header only when JSON enrichment occurred.

lib/request/compaction-helpers.ts (5)

14-18: LGTM: CompactionSettings interface is clear.

The interface appropriately defines compaction feature toggles and thresholds.


20-25: LGTM: CompactionOptions interface is comprehensive.

The interface correctly bundles compaction settings, command context, original input, and the preserveIds flag.


27-36: LGTM: Last user message removal is correctly implemented.

The function safely clones the input and removes the last user message from the conversation history.


38-82: LGTM: Compaction prompt builder handles both trigger modes correctly.

The function appropriately detects command and auto-triggered compaction, serializes the conversation, and produces the expected prompt items and decision metadata.


84-109: Past review concern addressed: ID filtering now applied to compaction output.

Line 103 correctly applies filterInput(compactionBuild.items, { preserveIds }) to the compaction output before replacing body.input, ensuring that the preserveIds contract is respected in the compaction path. Tool-related fields are appropriately stripped.

lib/request/model-config.ts (7)

3-43: Past review concern addressed: Unknown models are now preserved.

Line 42 correctly returns sanitized instead of forcing all non-matching models to the fallback. This preserves valid caller-selected models (like gpt-4o-mini) that don't match the GPT-5 heuristics.
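
A compact sketch of the preserve-unknown behavior; the matching rules here are illustrative and much simpler than the plugin's real normalization table:

```typescript
function normalizeModelSketch(model: string): string {
  const sanitized = model.trim().toLowerCase();
  if (sanitized.includes("codex-max")) return "gpt-5.1-codex-max";
  if (sanitized.includes("codex")) return "gpt-5.1-codex";
  if (sanitized.startsWith("gpt-5")) return "gpt-5.1";
  // Unknown models (e.g. "gpt-4o-mini") are preserved as-is rather than
  // forced to a fallback.
  return sanitized;
}
```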


45-53: LGTM: Model config merging is straightforward.

The function correctly merges global and model-specific configurations with appropriate precedence.


55-96: LGTM: Model classification logic is thorough.

The classification function comprehensively detects GPT-5.1, Codex variants, and lightweight models through multiple naming patterns.


98-109: LGTM: Default effort assignments are sensible.

The function appropriately assigns reasoning effort defaults based on model characteristics and capabilities.


111-119: LGTM: Effort request validation is appropriate.

The function correctly downgrades "xhigh" to "high" for models that don't support the maximum effort level.
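
The downgrade rule reads roughly like this sketch; the isCodexMax flag is an assumed stand-in for the plugin's model classification:

```typescript
type ReasoningEffort = "minimal" | "low" | "medium" | "high" | "xhigh";

// "xhigh" is only honored for Codex Max; other models are clamped to "high".
function clampRequestedEffort(requested: ReasoningEffort, isCodexMax: boolean): ReasoningEffort {
  if (requested === "xhigh" && !isCodexMax) return "high";
  return requested;
}

// e.g. clampRequestedEffort("xhigh", false) === "high"
```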


121-155: LGTM: Effort normalization handles model-specific constraints.

The function appropriately adjusts reasoning effort to align with each model family's capabilities and supported levels.


157-170: LGTM: Reasoning config derivation pipeline is well-structured.

The function correctly orchestrates model classification, effort defaulting, request validation, and normalization to produce the final reasoning configuration.

lib/request/request-transformer.ts (4)

25-32: Past review concern addressed: TransformRequestOptions is now documented.

The JSDoc on line 27 clarifies that preserveIds is enforced after compaction builds, directly addressing the past review's request to document the semantics.


39-76: LGTM: Input transformation correctly filters IDs and applies mode-specific transforms.

The function appropriately guards against skipped transforms, applies ID filtering with the preserveIds flag, logs defensively, and correctly branches between CODEX mode and tool remap flows.


78-141: Past review concern fully addressed: Compaction now respects preserveIds.

Line 92 correctly spreads preserveIds into the compaction options, ensuring that filterInput is applied to compaction output in applyCompactionIfNeeded (compaction-helpers.ts line 103). Non-compaction flows are filtered in transformInputForCodex (line 56). Both paths now honor the preserveIds contract.


1-23: LGTM: Module imports and re-exports are well-organized.

The import structure correctly brings in modular helpers, and the re-exports appropriately expose public APIs from input-filters and model-config.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
lib/prompts/codex.ts (1)

66-74: Empty string fallback could cache invalid instructions.

If safeReadFile returns null (file doesn't exist or read error), line 71 returns an empty string that gets cached in the session. Callers expect valid instructions, not empty content.

Although this function appears to be called only when cacheFileExists is true, defensive coding suggests either:

  1. Asserting the file exists, or
  2. Throwing an error if content is empty

Consider applying this diff:

```diff
 function readCachedInstructions(
 	cacheFilePath: string,
 	etag?: string | undefined,
 	tag?: string | undefined,
 ): string {
-	const fileContent = safeReadFile(cacheFilePath) || "";
+	const fileContent = safeReadFile(cacheFilePath);
+	if (!fileContent) {
+		throw new Error(`Cache file exists but could not be read: ${cacheFilePath}`);
+	}
 	cacheSessionEntry(fileContent, etag, tag);
 	return fileContent;
 }
```
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53f43e9 and e6f6a1a.

📒 Files selected for processing (1)
  • lib/prompts/codex.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
lib/prompts/codex.ts (5)
lib/types.ts (1)
  • CacheMetadata (223-228)
lib/utils/file-system-utils.ts (3)
  • safeReadFile (56-62)
  • safeWriteFile (42-48)
  • fileExistsAndNotEmpty (69-76)
lib/cache/session-cache.ts (2)
  • getCodexCacheKey (83-85)
  • codexInstructionsCache (74-74)
lib/utils/cache-config.ts (1)
  • CACHE_TTL_MS (11-11)
lib/logger.ts (2)
  • logError (116-118)
  • logWarn (112-114)
🔇 Additional comments (3)
lib/prompts/codex.ts (3)

41-50: LGTM! Defensive metadata parsing.

The error handling is appropriate, returning null on read or parse failures.


52-64: LGTM! Clean helper utilities.

Both loadSessionFromMetadata and cacheIsFresh have clear responsibilities and correct logic.


199-207: Previous review concern properly addressed.

The fallback logic now correctly prefers the existing disk cache over the bundled fallback when GitHub API lookup fails, improving offline resilience.

riatzukiza and others added 2 commits November 20, 2025 12:33
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@riatzukiza riatzukiza merged commit fd1fbb6 into staging Nov 20, 2025
12 checks passed
@riatzukiza riatzukiza deleted the ops/release-workflow branch November 20, 2025 18:50
github-actions bot added a commit that referenced this pull request Nov 20, 2025
github-actions bot added a commit that referenced this pull request Nov 20, 2025
@coderabbitai coderabbitai bot mentioned this pull request Dec 13, 2025