diff --git a/AGENTS.md b/AGENTS.md index d741405..61082d3 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex, ## Overview -This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5-codex`, `gpt-5-codex-mini`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits. +This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It now mirrors the Codex CLI lineup, making `gpt-5.1-codex-max` (with optional `xhigh` reasoning) the default alongside the existing `gpt-5.1-codex`, `gpt-5.1-codex-mini`, and legacy `gpt-5` models—all available through a ChatGPT subscription instead of OpenAI Platform API credits. **Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management. @@ -157,6 +157,8 @@ This plugin **intentionally differs from opencode defaults** because it accesses | `store` | true | false | Required for ChatGPT backend | | `include` | (not set) | `["reasoning.encrypted_content"]` | Required for stateless operation | +> **Extra High reasoning**: `reasoningEffort: "xhigh"` is only honored for `gpt-5.1-codex-max`. Other models automatically downgrade it to `high` so their API calls remain valid. + ## File Paths & Locations - **Plugin config**: `~/.opencode/openhax-codex-config.json` diff --git a/CHANGELOG.md b/CHANGELOG.md index c18d3c1..0fcfa80 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,17 @@ All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD). 
+## [3.3.0] - 2025-11-19 +### Added +- Codex Max support that mirrors the Codex CLI: normalization for every `gpt-5.1-codex-max` alias, `reasoningEffort: "xhigh"`, and unit tests covering both the transformer and request body integration path. +- Documentation and configuration updates calling out Codex Max as the flagship preset, plus refreshed samples showing how to opt into the Extra High reasoning mode. + +### Changed +- Sample configs (`full` + `minimal`), README tables, AGENTS.md, and the diagnostics script now prefer `gpt-5.1-codex-max`, keeping plugin defaults aligned with Codex CLI behavior. + +### Fixed +- Requests that specify `reasoningEffort: "xhigh"` for unsupported models are now automatically downgraded to `high`, preventing API errors when Codex Max isn't selected. + ## [3.2.0] - 2025-11-13 ### Added - GPT-5.1 family integration: normalization for `gpt-5.1`/`gpt-5.1-codex`/`gpt-5.1-codex-mini`, expanded reasoning heuristics (including `reasoningEffort: "none"`), and preservation of the native `shell`/`apply_patch` tools emitted by Codex CLI.
diff --git a/README.md b/README.md index 85307cb..65ccd3c 100644 --- a/README.md +++ b/README.md @@ -93,6 +93,22 @@ For the complete experience with all reasoning variants matching the official Co "store": false }, "models": { + "gpt-5.1-codex-max": { + "name": "GPT 5.1 Codex Max (OAuth)", + "limit": { + "context": 400000, + "output": 128000 + }, + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "auto", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, "gpt-5.1-codex-low": { "name": "GPT 5.1 Codex Low (OAuth)", "limit": { @@ -422,7 +438,7 @@ For the complete experience with all reasoning variants matching the official Co **Global config**: `~/.config/opencode/opencode.json` **Project config**: `/.opencode.json` - This now gives you 20 model variants: the new GPT-5.1 lineup (recommended) plus every legacy gpt-5 preset for backwards compatibility. + This now gives you 21 model variants: the refreshed GPT-5.1 lineup (with Codex Max as the default) plus every legacy gpt-5 preset for backwards compatibility. All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc. 
@@ -434,6 +450,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t | CLI Model ID | TUI Display Name | Reasoning Effort | Best For | |--------------|------------------|-----------------|----------| +| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | Medium (Extra High optional) | Default flagship tier with optional `xhigh` reasoning for long, complex runs | | `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation on the newest Codex tier | | `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code + tooling workflows | | `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Multi-step coding tasks with deep tool use | @@ -444,6 +461,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t | `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work | | `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most | +> **Extra High reasoning:** `reasoningEffort: "xhigh"` is exclusive to `gpt-5.1-codex-max`. Other models automatically map that option to `high` so their API calls remain valid. 
+ #### Legacy GPT-5 lineup (still supported) | CLI Model ID | TUI Display Name | Reasoning Effort | Best For | @@ -505,7 +524,7 @@ These defaults match the official Codex CLI behavior and can be customized (see ### Recommended: Use Pre-Configured File The easiest way to get started is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides: -- 20 pre-configured model variants matching the latest Codex CLI presets (GPT-5.1 + GPT-5) +- 21 pre-configured model variants matching the latest Codex CLI presets (GPT-5.1 Codex Max + GPT-5.1 + GPT-5) - Optimal settings for each reasoning level - All variants visible in the opencode model selector @@ -521,12 +540,12 @@ If you want to customize settings yourself, you can configure options at provide | Setting | GPT-5 / GPT-5.1 Values | GPT-5-Codex / Codex Mini Values | Plugin Default | |---------|-------------|-------------------|----------------| -| `reasoningEffort` | `none`, `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `medium` | +| `reasoningEffort` | `none`, `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh`* | `medium` | | `reasoningSummary` | `auto`, `detailed` | `auto`, `detailed` | `auto` | | `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` | | `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` | -> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`. +> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`. `xhigh` is exclusive to `gpt-5.1-codex-max`—other Codex presets automatically map it to `high`. 
#### Plugin-Level Settings diff --git a/config/full-opencode.json b/config/full-opencode.json index dd4ea69..64022e7 100644 --- a/config/full-opencode.json +++ b/config/full-opencode.json @@ -15,6 +15,22 @@ "store": false }, "models": { + "gpt-5.1-codex-max": { + "name": "GPT 5.1 Codex Max (OAuth)", + "limit": { + "context": 400000, + "output": 128000 + }, + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "auto", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, "gpt-5.1-codex-low": { "name": "GPT 5.1 Codex Low (OAuth)", "limit": { diff --git a/config/minimal-opencode.json b/config/minimal-opencode.json index 6c41e04..0b2d291 100644 --- a/config/minimal-opencode.json +++ b/config/minimal-opencode.json @@ -8,5 +8,5 @@ } } }, - "model": "openai/gpt-5.1-codex" + "model": "openai/gpt-5.1-codex-max" } diff --git a/docs/configuration.md b/docs/configuration.md index 0af730e..e0fa8e2 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -368,6 +368,13 @@ Advanced plugin settings in `~/.opencode/openhax-codex-config.json`: } ``` +### Log file management + +Control local request/rolling log growth: +- `CODEX_LOG_MAX_BYTES` (default: 5_242_880) - rotate when the rolling log exceeds this many bytes. +- `CODEX_LOG_MAX_FILES` (default: 5) - number of rotated log files to retain (plus the active log). +- `CODEX_LOG_QUEUE_MAX` (default: 1000) - maximum buffered log entries before oldest entries are dropped. 
+ ### CODEX_MODE **What it does:** diff --git a/docs/development/CONFIG_FIELDS.md b/docs/development/CONFIG_FIELDS.md index 6f4584e..25f166d 100644 --- a/docs/development/CONFIG_FIELDS.md +++ b/docs/development/CONFIG_FIELDS.md @@ -285,6 +285,11 @@ const parsedModel: ModelsDev.Model = { ```json { + "gpt-5.1-codex-max": { + "id": "gpt-5.1-codex-max", + "name": "GPT 5.1 Codex Max (OAuth)", + "options": { "reasoningEffort": "medium" } + }, "gpt-5.1-codex-low": { "id": "gpt-5.1-codex", "name": "GPT 5.1 Codex Low (OAuth)", @@ -301,36 +306,11 @@ const parsedModel: ModelsDev.Model = { **Why this matters:** - Config keys mirror the Codex CLI's 5.1 presets, making it obvious which tier you're targeting. - `reasoningEffort: "none"` is only valid for GPT-5.1 general models—the plugin automatically downgrades unsupported values for Codex/Codex Mini. -- Legacy GPT-5 entries can stick around for backwards compatibility, but new installs should prefer the 5.1 naming. - ---- - -### Example 4: If We Made Config Key = ID ❌ - -```json -{ - "gpt-5-codex": { - "id": "gpt-5-codex", - "name": "GPT 5 Codex Low (OAuth)", - "options": { "reasoningEffort": "low" } - }, - "gpt-5-codex": { // ❌ DUPLICATE KEY ERROR! - "id": "gpt-5-codex", - "name": "GPT 5 Codex High (OAuth)", - "options": { "reasoningEffort": "high" } - } -} -``` - -**Problem:** JavaScript objects can't have duplicate keys! - -**Result:** ❌ Can't have multiple variants - -### Reasoning Effort quick notes -- `reasoningEffort: "none"` is exclusive to GPT-5.1 general models and maps to the new "no reasoning" mode introduced by OpenAI. +- `reasoningEffort: "xhigh"` is exclusive to `gpt-5.1-codex-max`; other models automatically clamp it to `high`. - Legacy GPT-5, GPT-5-Codex, and Codex Mini presets automatically clamp unsupported values (`none` → `minimal`/`low`, `minimal` → `low` for Codex). - Mixing GPT-5.1 and GPT-5 presets inside the same config is fine—just keep config keys unique and let the plugin normalize them. 
+ --- ## Why We Need Different Config Keys diff --git a/docs/development/TESTING.md b/docs/development/TESTING.md index c94ce43..18b4a5c 100644 --- a/docs/development/TESTING.md +++ b/docs/development/TESTING.md @@ -2,6 +2,8 @@ Comprehensive testing matrix for all config scenarios and backwards compatibility. +> **Logging note:** All test runs and plugin executions now write per-request JSON files plus a rolling `codex-plugin.log` under `~/.opencode/logs/codex-plugin/`. Set `ENABLE_PLUGIN_REQUEST_LOGGING=1` or `DEBUG_CODEX_PLUGIN=1` if you also want live console output in addition to the files. + ## Test Scenarios Matrix ### Scenario 1: Default OpenCode Models (No Custom Config) diff --git a/lib/logger.ts b/lib/logger.ts index 66aa60b..7d24e12 100644 --- a/lib/logger.ts +++ b/lib/logger.ts @@ -1,14 +1,21 @@ -import { writeFileSync } from "node:fs"; -import { join } from "node:path"; import type { OpencodeClient } from "@opencode-ai/sdk"; +import { appendFile, rename, rm, stat, writeFile } from "node:fs/promises"; +import { join } from "node:path"; import { PLUGIN_NAME } from "./constants.js"; import { ensureDirectory, getOpenCodePath } from "./utils/file-system-utils.js"; export const LOGGING_ENABLED = process.env.ENABLE_PLUGIN_REQUEST_LOGGING === "1"; -const DEBUG_ENABLED = process.env.DEBUG_CODEX_PLUGIN === "1" || LOGGING_ENABLED; +const DEBUG_FLAG_ENABLED = process.env.DEBUG_CODEX_PLUGIN === "1"; +const DEBUG_ENABLED = DEBUG_FLAG_ENABLED || LOGGING_ENABLED; +const CONSOLE_LOGGING_ENABLED = DEBUG_FLAG_ENABLED; const LOG_DIR = getOpenCodePath("logs", "codex-plugin"); +const ROLLING_LOG_FILE = join(LOG_DIR, "codex-plugin.log"); const IS_TEST_ENV = process.env.VITEST === "1" || process.env.NODE_ENV === "test"; +const LOG_ROTATION_MAX_BYTES = Math.max(1, getEnvNumber("CODEX_LOG_MAX_BYTES", 5 * 1024 * 1024)); +const LOG_ROTATION_MAX_FILES = Math.max(1, getEnvNumber("CODEX_LOG_MAX_FILES", 5)); +const LOG_QUEUE_MAX_LENGTH = Math.max(1, 
getEnvNumber("CODEX_LOG_QUEUE_MAX", 1000)); + type LogLevel = "debug" | "info" | "warn" | "error"; type LoggerOptions = { @@ -16,11 +23,27 @@ directory?: string; }; +type RollingLogEntry = { + timestamp: string; + service: string; + level: LogLevel; + message: string; + extra?: Record<string, unknown>; +}; + let requestCounter = 0; let loggerClient: OpencodeClient | undefined; let projectDirectory: string | undefined; let announcedState = false; +const writeQueue: string[] = []; +let flushInProgress = false; +let flushScheduled = false; +let overflowNotified = false; +let pendingFlush: Promise<void> | undefined; +let currentLogSize = 0; +let sizeInitialized = false; + export function configureLogger(options: LoggerOptions = {}): void { if (options.client) { loggerClient = options.client; @@ -45,7 +68,6 @@ export function configureLogger(options: LoggerOptions = {}): void { } export function logRequest(stage: string, data: Record<string, unknown>): void { - if (!LOGGING_ENABLED) return; const payload = { timestamp: new Date().toISOString(), requestId: ++requestCounter, @@ -64,7 +86,6 @@ export function logRequest(stage: string, data: Record<string, unknown>): void { } export function logDebug(message: string, data?: unknown): void { - if (!DEBUG_ENABLED) return; emit("debug", message, normalizeExtra(data)); } @@ -80,27 +101,48 @@ export function logError(message: string, data?: unknown): void { emit("error", message, normalizeExtra(data)); } +export async function flushRollingLogsForTest(): Promise<void> { + scheduleFlush(); + if (pendingFlush) { + await pendingFlush; + } +} + function emit(level: LogLevel, message: string, extra?: Record<string, unknown>): void { - const payload = { + const sanitizedExtra = sanitizeExtra(extra); + const entry: RollingLogEntry = { + timestamp: new Date().toISOString(), service: PLUGIN_NAME, level, message, - extra: sanitizeExtra(extra), + extra: sanitizedExtra, }; + appendRollingLog(entry); + if (loggerClient?.app) { void loggerClient.app .log({ - body: payload, + body: entry, query:
projectDirectory ? { directory: projectDirectory } : undefined, + }) - .catch((error) => fallback(level, message, payload.extra, error)); - return; + .catch((error) => + logToConsole("warn", "Failed to forward log entry", { + error: toErrorMessage(error), + }), + ); } - fallback(level, message, payload.extra); + + logToConsole(level, message, sanitizedExtra); } -function fallback(level: LogLevel, message: string, extra?: Record<string, unknown>, error?: unknown): void { - if (IS_TEST_ENV && !LOGGING_ENABLED && !DEBUG_ENABLED && level !== "error") { +function logToConsole( + level: LogLevel, + message: string, + extra?: Record<string, unknown>, + error?: unknown, +): void { + const shouldLog = CONSOLE_LOGGING_ENABLED || level === "warn" || level === "error"; + if (IS_TEST_ENV && !shouldLog) { return; } const prefix = `[${PLUGIN_NAME}] ${message}`; @@ -139,13 +181,148 @@ function persistRequestStage(stage: string, payload: Record<string, unknown>): string | undefined { try { ensureLogDir(); const filename = join(LOG_DIR, `request-${payload.requestId}-${stage}.json`); - writeFileSync(filename, JSON.stringify(payload, null, 2), "utf8"); + void writeFile(filename, JSON.stringify(payload, null, 2), "utf8").catch((error) => { + logToConsole("warn", "Failed to persist request log", { + stage, + error: toErrorMessage(error), + }); + }); return filename; } catch (err) { - emit("warn", "Failed to persist request log", { + logToConsole("warn", "Failed to prepare request log", { stage, - error: err instanceof Error ?
err.message : String(err), + error: toErrorMessage(err), }); return undefined; } } + +function appendRollingLog(entry: RollingLogEntry): void { + const line = `${JSON.stringify(entry)}\n`; + enqueueLogLine(line); +} + +function enqueueLogLine(line: string): void { + if (writeQueue.length >= LOG_QUEUE_MAX_LENGTH) { + writeQueue.shift(); + if (!overflowNotified) { + overflowNotified = true; + logToConsole("warn", "Rolling log queue overflow; dropping oldest entries", { + maxQueueLength: LOG_QUEUE_MAX_LENGTH, + }); + } + } + writeQueue.push(line); + scheduleFlush(); +} + +function scheduleFlush(): void { + if (flushScheduled || flushInProgress) { + return; + } + flushScheduled = true; + pendingFlush = Promise.resolve() + .then(flushQueue) + .catch((error) => + logToConsole("warn", "Failed to flush rolling logs", { + error: toErrorMessage(error), + }), + ); +} + +async function flushQueue(): Promise<void> { + if (flushInProgress) return; + flushInProgress = true; + flushScheduled = false; + + try { + ensureLogDir(); + while (writeQueue.length) { + const chunk = writeQueue.join(""); + writeQueue.length = 0; + const chunkBytes = Buffer.byteLength(chunk, "utf8"); + await maybeRotate(chunkBytes); + await appendFile(ROLLING_LOG_FILE, chunk, "utf8"); + currentLogSize += chunkBytes; + } + } catch (err) { + logToConsole("warn", "Failed to write rolling log", { + error: toErrorMessage(err), + }); + } finally { + flushInProgress = false; + if (writeQueue.length) { + scheduleFlush(); + } else { + overflowNotified = false; + } + } +} + +async function maybeRotate(incomingBytes: number): Promise<void> { + await ensureLogSize(); + if (currentLogSize + incomingBytes <= LOG_ROTATION_MAX_BYTES) { + return; + } + await rotateLogs(); + currentLogSize = 0; +} + +async function ensureLogSize(): Promise<void> { + if (sizeInitialized) return; + try { + const stats = await stat(ROLLING_LOG_FILE); + currentLogSize = stats.size; + } catch (error) { + const code = (error as NodeJS.ErrnoException).code; + if (code
!== "ENOENT") { + logToConsole("warn", "Failed to stat rolling log", { error: toErrorMessage(error) }); + } + currentLogSize = 0; + } finally { + sizeInitialized = true; + } +} + +async function rotateLogs(): Promise<void> { + const oldest = `${ROLLING_LOG_FILE}.${LOG_ROTATION_MAX_FILES}`; + try { + await rm(oldest, { force: true }); + } catch { + /* ignore */ + } + for (let index = LOG_ROTATION_MAX_FILES - 1; index >= 1; index -= 1) { + const source = `${ROLLING_LOG_FILE}.${index}`; + const target = `${ROLLING_LOG_FILE}.${index + 1}`; + try { + await rename(source, target); + } catch (error) { + if ((error as NodeJS.ErrnoException).code !== "ENOENT") { + throw error; + } + } + } + try { + await rename(ROLLING_LOG_FILE, `${ROLLING_LOG_FILE}.1`); + } catch (error) { + if ((error as NodeJS.ErrnoException).code !== "ENOENT") { + throw error; + } + } +} + +function getEnvNumber(name: string, fallback: number): number { + const raw = process.env[name]; + const parsed = raw ? Number(raw) : Number.NaN; + if (Number.isFinite(parsed) && parsed > 0) { + return parsed; + } + return fallback; +} + +function toErrorMessage(error: unknown): string { + if (error instanceof Error && error.message) { + return error.message; + } + return String(error); +} diff --git a/lib/request/request-transformer.ts index 5961a34..763521f 100644 --- a/lib/request/request-transformer.ts +++ b/lib/request/request-transformer.ts @@ -227,6 +227,7 @@ export function normalizeModel(model: string | undefined): string { const contains = (needle: string) => sanitized.includes(needle); const hasGpt51 = contains("gpt-5-1") || sanitized.includes("gpt51"); + const hasCodexMax = contains("codex-max") || contains("codexmax"); if (contains("gpt-5-1-codex-mini") || (hasGpt51 && contains("codex-mini"))) { return "gpt-5.1-codex-mini"; @@ -234,6 +235,9 @@ if (contains("codex-mini")) { return "gpt-5.1-codex-mini"; } +
if (hasCodexMax) { + return "gpt-5.1-codex-max"; + } if (contains("gpt-5-1-codex") || (hasGpt51 && contains("codex"))) { return "gpt-5.1-codex"; } @@ -298,6 +302,7 @@ export function getReasoningConfig( normalizedOriginal.includes("codex-mini") || normalizedOriginal.includes("codex mini") || normalizedOriginal.includes("codex_mini"); + const isCodexMax = normalized === "gpt-5.1-codex-max"; const isCodexFamily = normalized.startsWith("gpt-5-codex") || normalized.startsWith("gpt-5.1-codex") || @@ -319,6 +324,11 @@ export function getReasoningConfig( } let effort = userConfig.reasoningEffort || defaultEffort; + const requestedXHigh = effort === "xhigh"; + + if (requestedXHigh && !isCodexMax) { + effort = "high"; + } if (isCodexMini) { if (effort === "minimal" || effort === "low" || effort === "none") { @@ -327,6 +337,10 @@ export function getReasoningConfig( if (effort !== "high") { effort = "medium"; } + } else if (isCodexMax) { + if (effort === "minimal" || effort === "none") { + effort = "low"; + } } else if (isCodexFamily) { if (effort === "minimal" || effort === "none") { effort = "low"; diff --git a/lib/types.ts b/lib/types.ts index be08a1a..eadc329 100644 --- a/lib/types.ts +++ b/lib/types.ts @@ -50,7 +50,7 @@ export interface UserConfig { * Configuration options for reasoning and text settings */ export interface ConfigOptions { - reasoningEffort?: "none" | "minimal" | "low" | "medium" | "high"; + reasoningEffort?: "none" | "minimal" | "low" | "medium" | "high" | "xhigh"; reasoningSummary?: "auto" | "concise" | "detailed"; textVerbosity?: "low" | "medium" | "high"; include?: string[]; @@ -60,7 +60,7 @@ export interface ConfigOptions { * Reasoning configuration for requests */ export interface ReasoningConfig { - effort: "none" | "minimal" | "low" | "medium" | "high"; + effort: "none" | "minimal" | "low" | "medium" | "high" | "xhigh"; summary: "auto" | "concise" | "detailed"; } diff --git a/scripts/test-all-models.sh b/scripts/test-all-models.sh index 
3cc3c52..79e19f2 100755 --- a/scripts/test-all-models.sh +++ b/scripts/test-all-models.sh @@ -164,6 +164,7 @@ EOCONFIG # ============================================================================ update_config "full" + test_model "gpt-5.1-codex-max" "gpt-5.1-codex-max" "medium" "auto" "medium" test_model "gpt-5.1-codex-low" "gpt-5.1-codex" "low" "auto" "medium" test_model "gpt-5.1-codex-medium" "gpt-5.1-codex" "medium" "auto" "medium" test_model "gpt-5.1-codex-high" "gpt-5.1-codex" "high" "detailed" "medium" diff --git a/spec/gpt-51-codex-max.md b/spec/gpt-51-codex-max.md new file mode 100644 index 0000000..51d46f6 --- /dev/null +++ b/spec/gpt-51-codex-max.md @@ -0,0 +1,37 @@ +# Spec: GPT-5.1-Codex-Max integration + +## Context +Issue [open-hax/codex#26](https://github.com/open-hax/codex/issues/26) introduces the new `gpt-5.1-codex-max` model, which replaces `gpt-5.1-codex` as the default Codex surface and adds the "Extra High" (`xhigh`) reasoning effort tier. The current `codex-auth` plugin only normalizes `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini` variants (`lib/request/request-transformer.ts:303-426`) and exposes reasoning tiers up to `high` (`lib/types.ts:36-50`, `test/request-transformer.test.ts:15-125`). Documentation (`AGENTS.md:6-111`, `README.md:93-442`, `docs/development/CONFIG_FIELDS.md:288-310`) and bundled configs (`config/full-opencode.json:18-150`, `config/minimal-opencode.json:1-32`) still describe `gpt-5.1-codex` as the flagship choice. We must align with the Codex CLI reference implementation (`codex-cli/codex-rs/common/src/model_presets.rs:53-107`) which already treats `gpt-5.1-codex-max` as the default preset and only exposes the `xhigh` reasoning option for this model. 
+ +## References +- Issue: [open-hax/codex#26](https://github.com/open-hax/codex/issues/26) +- Request transformer logic: `lib/request/request-transformer.ts:303-426`, `lib/request/request-transformer.ts:825-955` +- Type definitions: `lib/types.ts:36-50` +- Tests: `test/request-transformer.test.ts:15-1450` +- Docs & config samples: `AGENTS.md:6-111`, `README.md:93-442`, `docs/development/CONFIG_FIELDS.md:288-310`, `config/full-opencode.json:18-150`, `config/minimal-opencode.json:1-32` +- Reference behavior: `codex-cli/codex-rs/common/src/model_presets.rs:53-131` (default reasoning options for Codex Max) + +## Requirements / Definition of Done +1. `normalizeModel()` must map `gpt-5.1-codex-max` and all aliases (`gpt51-codex-max`, `codex-max`, `gpt-5-codex-max`, etc.) to the canonical `gpt-5.1-codex-max` slug, prioritizing this match above the existing `gpt-5.1-codex` checks. +2. `ConfigOptions` and `ReasoningConfig` types must allow the new `"xhigh"` reasoning effort, and `getReasoningConfig()` must: + - Default `gpt-5.1-codex-max` to `medium` effort, mirroring Codex CLI presets. + - Accept `xhigh` only when the original model maps to `gpt-5.1-codex-max`; other models requesting `xhigh` should gracefully downgrade (e.g., to `high`). + - Preserve existing clamps for Codex Mini, legacy Codex, and lightweight GPT-5 variants. +3. `transformRequestBody()` must preserve Codex CLI defaults for GPT-5.1-Codex-Max requests (text verbosity `medium`, no parallel tool calls) and continue merging per-model overrides from user config. +4. Automated tests must cover: + - Normalization of new slug variants. + - Reasoning clamps/defaults for Codex Max, including `xhigh` acceptance and rejection for other families. + - `transformRequestBody()` behavior when `reasoningEffort: "xhigh"` is set for Codex Max vs. unsupported models. +5.
Documentation and sample configs must describe `gpt-5.1-codex-max` as the new default and explain the `xhigh` reasoning tier where reasoning levels are enumerated. +6. Update change tracking (this spec + final summary) and ensure all tests (`npm test`) pass. + +## Plan +1. Update `lib/types.ts` to extend the reasoning effort union with `"xhigh"`, then adjust `normalizeModel()`/`getReasoningConfig()` in `lib/request/request-transformer.ts` for the new slug ordering, default effort, and `xhigh` gate. +2. Enhance `transformRequestBody()` logic/tests to verify reasoning selections involving `gpt-5.1-codex-max`, ensuring Codex models still disable parallel tool calls. +3. Add regression tests in `test/request-transformer.test.ts` (normalization, reasoning, integration) to cover Codex Max inputs and `xhigh` handling. +4. Refresh docs/config samples (`AGENTS.md`, `README.md`, `docs/development/CONFIG_FIELDS.md`, `config/*.json`) to mention Codex Max as the default Codex tier and introduce the `xhigh` effort level. +5. Run the full test suite (`npm test`) and capture results; document completion in this spec's change log and final response. + +## Change Log +- 2025-11-19: Initial spec drafted for GPT-5.1-Codex-Max normalization, reasoning, tests, and docs. +- 2025-11-19: Added Codex Max normalization, `xhigh` gating, tests, and documentation/config updates mirroring the Codex CLI rollout. diff --git a/spec/logging-rotation-async-io.md b/spec/logging-rotation-async-io.md new file mode 100644 index 0000000..bac2b88 --- /dev/null +++ b/spec/logging-rotation-async-io.md @@ -0,0 +1,30 @@ +# Logging rotation & async I/O spec + +## Context +- Rolling log currently uses `appendFileSync` and never rotates, so `codex-plugin.log` can grow without bound in long-running processes. +- Request stage files are persisted synchronously via `writeFileSync`, and rolling log writes occur on every emit, blocking the event loop. 
+ +## Relevant files +- `lib/logger.ts`: append path setup and sync writes (`appendFileSync` in `appendRollingLog`, `writeFileSync` in `persistRequestStage`) — lines ~1-185. +- `lib/utils/file-system-utils.ts`: directory helpers (`ensureDirectory`, `safeWriteFile`) — lines ~1-77. +- `test/logger.test.ts`: expectations around sync writes/console behavior — lines ~1-113. +- `test/prompts-codex.test.ts`, `test/prompts-opencode-codex.test.ts`, `test/plugin-config.test.ts`: mock `appendFileSync` hooks that may need updates — see rg results. + +## Existing issues / PRs +- No open issues specifically about logging/rotation (checked `gh issue list`). +- Open PR #27 `feat/gpt-5.1-codex-max support with xhigh reasoning and persistent logging` on this branch; ensure changes stay compatible. + +## Definition of done +- Rolling log writes are asynchronous and buffered; synchronous hot-path blocking is removed. +- Log rotation enforced with configurable max size and retention of N files; old logs cleaned when limits hit. +- Write queue handles overflow gracefully (drops oldest or rate-limits) without crashing the process and surfaces a warning. +- Tests updated/added for new behavior; existing suites pass. +- Documentation/config defaults captured if new env/config options are introduced. + +## Requirements & approach sketch +- Introduce rotation settings (e.g., max bytes, max files) with reasonable defaults and env overrides. +- Implement a buffered async writer for the rolling log with sequential flushing to avoid contention and ensure ordering. +- On rotation trigger, rename current log with sequential suffix and prune files beyond retention. +- Define queue max length; on overflow, drop oldest buffered entries and emit a warning once per overflow window to avoid log storms. +- Keep request-stage JSON persistence working; consider leaving synchronous writes since they are occasional, but ensure they respect new directory management. 
+- Update tests/mocks to reflect async writer and rotation behavior. diff --git a/spec/persistent-logging.md new file mode 100644 index 0000000..f51ebbf --- /dev/null +++ b/spec/persistent-logging.md @@ -0,0 +1,26 @@ +# Spec: Persistent Logger Defaults + +## Context +Tests emit many console lines because `logRequest`, `logWarn`, and other helpers write directly to stdout/stderr whenever `ENABLE_PLUGIN_REQUEST_LOGGING` is enabled. The harness request is to keep test output quiet while still retaining full request telemetry: "Let's just always log to a file both in tests, and in production." Currently `lib/logger.ts` only writes JSON request stages when `ENABLE_PLUGIN_REQUEST_LOGGING=1` (see `logRequest` around lines 47-65). Debug logs are also suppressed unless `DEBUG_CODEX_PLUGIN` is set, which means the only persistent record is console spam. We need a file-first logger that always captures request/response metadata without cluttering unit tests or production stdout. + +## References +- Logger implementation: `lib/logger.ts:1-149` +- Logger tests: `test/logger.test.ts:1-132` +- Testing guide (mentions logging expectations): `docs/development/TESTING.md:1-200` + +## Requirements / Definition of Done +1. `logRequest` must always persist per-request JSON files under `~/.opencode/logs/codex-plugin/` regardless of env vars, while console output remains opt-in (`ENABLE_PLUGIN_REQUEST_LOGGING` or `DEBUG_CODEX_PLUGIN` to mirror current behavior for stdout). +2. `logDebug`, `logInfo`, `logWarn`, and `logError` should write to a rolling log file (one per session/date is acceptable) *and* continue to emit to stdout/stderr only when the corresponding env var enables it. The file logs should capture level, timestamp, and context to simplify search. +3. Logger tests must cover the new default behavior (file writes happen without env vars, console output stays silent).
Add regression coverage for both request-stage JSONs and the new aggregate log file. +4. Documentation (`docs/development/TESTING.md` or README logging section if present) must mention that logs are always written to `~/.opencode/logs/codex-plugin/` and how to enable console mirroring via env vars. +5. Ensure file logging uses ASCII/JSON content and is resilient when directories are missing (auto-create). Console noise in `npm test` should drop as a result. + +## Plan +1. Update `lib/logger.ts`: remove `LOGGING_ENABLED` gating for persistence, introduce helper(s) for writing request JSON + append-only log file; gate console emission using env flags. Reuse existing `ensureLogDir()` logic. +2. Extend logger tests to cover default persistence, console gating, and append log behavior. Mock fs to inspect file writes without touching disk. +3. Refresh docs to describe the new always-on file logging and optional console mirrors. Mention location + env toggles for developer reference. +4. Run `npm test` to ensure the quieter logging still passes and the new tests cover the behavior. + +## Change Log +- 2025-11-19: Drafted spec for persistent logger defaults per user request. +- 2025-11-19: Implemented always-on file logging, rolling log file, console gating, updated tests, and documentation. 
diff --git a/test/logger.test.ts b/test/logger.test.ts index 0e5329e..1121bd0 100644 --- a/test/logger.test.ts +++ b/test/logger.test.ts @@ -1,134 +1,181 @@ -import { afterEach, beforeEach, describe, expect, it, vi } from "vitest"; +import { describe, it, expect, vi, beforeEach } from "vitest"; const fsMocks = { - writeFileSync: vi.fn(), + writeFile: vi.fn(), + appendFile: vi.fn(), mkdirSync: vi.fn(), existsSync: vi.fn(), + stat: vi.fn(), + rename: vi.fn(), + rm: vi.fn(), }; -const homedirMock = vi.fn(() => "/mock-home"); - vi.mock("node:fs", () => ({ - writeFileSync: fsMocks.writeFileSync, - mkdirSync: fsMocks.mkdirSync, existsSync: fsMocks.existsSync, + mkdirSync: fsMocks.mkdirSync, })); -vi.mock("node:os", () => ({ +vi.mock("node:fs/promises", () => ({ __esModule: true, - homedir: homedirMock, + writeFile: fsMocks.writeFile, + appendFile: fsMocks.appendFile, + stat: fsMocks.stat, + rename: fsMocks.rename, + rm: fsMocks.rm, })); -describe("Logger Module", () => { - const originalEnv = { ...process.env }; - const logSpy = vi.spyOn(console, "log").mockImplementation(() => {}); - const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {}); - const errorSpy = vi.spyOn(console, "error").mockImplementation(() => {}); - - beforeEach(() => { - vi.clearAllMocks(); - Object.assign(process.env, originalEnv); - delete process.env.ENABLE_PLUGIN_REQUEST_LOGGING; - delete process.env.DEBUG_CODEX_PLUGIN; - fsMocks.writeFileSync.mockReset(); - fsMocks.mkdirSync.mockReset(); - fsMocks.existsSync.mockReset(); - homedirMock.mockReturnValue("/mock-home"); - logSpy.mockClear(); - warnSpy.mockClear(); - errorSpy.mockClear(); - }); +vi.mock("node:os", () => ({ + __esModule: true, + homedir: () => "/mock-home", +})); - afterEach(() => { - Object.assign(process.env, originalEnv); - }); +const originalEnv = { ...process.env }; +const logSpy = vi.spyOn(console, "log").mockImplementation(() => {}); +const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {}); +const 
errorSpy = vi.spyOn(console, "error").mockImplementation(() => {}); + +beforeEach(() => { + vi.resetModules(); + Object.assign(process.env, originalEnv); + delete process.env.ENABLE_PLUGIN_REQUEST_LOGGING; + delete process.env.DEBUG_CODEX_PLUGIN; + delete process.env.CODEX_LOG_MAX_BYTES; + delete process.env.CODEX_LOG_MAX_FILES; + delete process.env.CODEX_LOG_QUEUE_MAX; + fsMocks.writeFile.mockReset(); + fsMocks.appendFile.mockReset(); + fsMocks.mkdirSync.mockReset(); + fsMocks.existsSync.mockReset(); + fsMocks.stat.mockReset(); + fsMocks.rename.mockReset(); + fsMocks.rm.mockReset(); + fsMocks.appendFile.mockResolvedValue(undefined); + fsMocks.writeFile.mockResolvedValue(undefined); + fsMocks.stat.mockRejectedValue(Object.assign(new Error("no file"), { code: "ENOENT" })); + logSpy.mockClear(); + warnSpy.mockClear(); + errorSpy.mockClear(); +}); +describe("logger", () => { it("LOGGING_ENABLED reflects env state", async () => { process.env.ENABLE_PLUGIN_REQUEST_LOGGING = "1"; const { LOGGING_ENABLED } = await import("../lib/logger.js"); expect(LOGGING_ENABLED).toBe(true); }); - it("logRequest skips writing when logging disabled", async () => { - // Since LOGGING_ENABLED is evaluated at module load time, - // and ES modules are cached, we need to test the behavior - // based on the current environment state - delete process.env.ENABLE_PLUGIN_REQUEST_LOGGING; - - // Clear module cache to get fresh evaluation - vi.unmock("../lib/logger.js"); - const { logRequest } = await import("../lib/logger.js"); + it("logRequest writes stage file and rolling log by default", async () => { + fsMocks.existsSync.mockReturnValue(false); + const { logRequest, flushRollingLogsForTest } = await import("../lib/logger.js"); - fsMocks.existsSync.mockReturnValue(true); logRequest("stage-one", { foo: "bar" }); + await flushRollingLogsForTest(); - // If LOGGING_ENABLED was false, no writes should occur - // Note: Due to module caching in vitest, this test assumes - // the environment was clean 
when the module was first loaded + expect(fsMocks.mkdirSync).toHaveBeenCalledWith("/mock-home/.opencode/logs/codex-plugin", { + recursive: true, + }); + const [requestPath, payload, encoding] = fsMocks.writeFile.mock.calls[0]; + expect(requestPath).toBe("/mock-home/.opencode/logs/codex-plugin/request-1-stage-one.json"); + expect(encoding).toBe("utf8"); + const parsedPayload = JSON.parse(payload as string); + expect(parsedPayload.stage).toBe("stage-one"); + expect(parsedPayload.foo).toBe("bar"); + + const [logPath, logLine, logEncoding] = fsMocks.appendFile.mock.calls[0]; + expect(logPath).toBe("/mock-home/.opencode/logs/codex-plugin/codex-plugin.log"); + expect(logEncoding).toBe("utf8"); + expect(logLine as string).toContain('"stage":"stage-one"'); + expect(logSpy).not.toHaveBeenCalled(); }); - it("logRequest creates directory and writes when enabled", async () => { - process.env.ENABLE_PLUGIN_REQUEST_LOGGING = "1"; - let existsCall = 0; - fsMocks.existsSync.mockImplementation(() => existsCall++ > 0); - const { logRequest } = await import("../lib/logger.js"); + it("logDebug appends to rolling log without printing to console by default", async () => { + fsMocks.existsSync.mockReturnValue(true); + const { logDebug, flushRollingLogsForTest } = await import("../lib/logger.js"); - logRequest("before", { some: "data" }); + logDebug("debug-message", { detail: "info" }); + await flushRollingLogsForTest(); - expect(fsMocks.mkdirSync).toHaveBeenCalledWith("/mock-home/.opencode/logs/codex-plugin", { - recursive: true, - }); - expect(fsMocks.writeFileSync).toHaveBeenCalledOnce(); + expect(fsMocks.appendFile).toHaveBeenCalledTimes(1); + expect(logSpy).not.toHaveBeenCalled(); + }); + + it("logWarn emits to console even without env overrides", async () => { + fsMocks.existsSync.mockReturnValue(true); + const { logWarn, flushRollingLogsForTest } = await import("../lib/logger.js"); + + logWarn("warning"); + await flushRollingLogsForTest(); - const [, jsonString] = 
fsMocks.writeFileSync.mock.calls[0]; - const parsed = JSON.parse(jsonString as string); - expect(parsed.stage).toBe("before"); - expect(parsed.some).toBe("data"); - expect(typeof parsed.requestId).toBe("number"); + expect(warnSpy).toHaveBeenCalledWith("[openai-codex-plugin] warning"); }); - it("logRequest records errors from writeFileSync", async () => { + it("logInfo does not mirror to console unless debug flag is set", async () => { + fsMocks.existsSync.mockReturnValue(true); + const { logInfo, flushRollingLogsForTest } = await import("../lib/logger.js"); + logInfo("info-message"); + await flushRollingLogsForTest(); + expect(logSpy).not.toHaveBeenCalled(); + process.env.ENABLE_PLUGIN_REQUEST_LOGGING = "1"; + vi.resetModules(); fsMocks.existsSync.mockReturnValue(true); - fsMocks.writeFileSync.mockImplementation(() => { - throw new Error("boom"); - }); - const { logRequest } = await import("../lib/logger.js"); + const { logInfo: envLogInfo, flushRollingLogsForTest: flushEnabled } = await import("../lib/logger.js"); + envLogInfo("info-message"); + await flushEnabled(); + expect(logSpy).not.toHaveBeenCalled(); + }); - logRequest("error-stage", { boom: true }); + it("persist failures log warnings and still append entries", async () => { + fsMocks.existsSync.mockReturnValue(true); + fsMocks.writeFile.mockRejectedValue(new Error("boom")); + const { logRequest, flushRollingLogsForTest } = await import("../lib/logger.js"); + + logRequest("stage-two", { foo: "bar" }); + await flushRollingLogsForTest(); expect(warnSpy).toHaveBeenCalledWith( - '[openai-codex-plugin] Failed to persist request log {"stage":"error-stage","error":"boom"}', + '[openai-codex-plugin] Failed to persist request log {"stage":"stage-two","error":"boom"}', ); + expect(fsMocks.appendFile).toHaveBeenCalled(); }); - it("logDebug logs only when enabled", async () => { - // Ensure a clean import without debug/logging enabled - delete process.env.DEBUG_CODEX_PLUGIN; - delete 
process.env.ENABLE_PLUGIN_REQUEST_LOGGING; - await vi.resetModules(); - let mod = await import("../lib/logger.js"); - mod.logDebug("should not log"); - expect(logSpy).not.toHaveBeenCalled(); + it("rotates logs when size exceeds limit", async () => { + process.env.CODEX_LOG_MAX_BYTES = "10"; + process.env.CODEX_LOG_MAX_FILES = "2"; + fsMocks.existsSync.mockReturnValue(true); + fsMocks.stat.mockResolvedValue({ size: 9 }); + const { logDebug, flushRollingLogsForTest } = await import("../lib/logger.js"); - // Enable debug and reload module to re-evaluate DEBUG_ENABLED - process.env.DEBUG_CODEX_PLUGIN = "1"; - await vi.resetModules(); - mod = await import("../lib/logger.js"); - mod.logDebug("hello", { a: 1 }); - expect(logSpy).toHaveBeenCalledWith('[openai-codex-plugin] hello {"a":1}'); - }); + logDebug("trigger-rotation"); + await flushRollingLogsForTest(); - it("logWarn always logs", async () => { - const { logWarn } = await import("../lib/logger.js"); - logWarn("warning", { detail: "info" }); - expect(warnSpy).toHaveBeenCalledWith('[openai-codex-plugin] warning {"detail":"info"}'); + expect(fsMocks.rm).toHaveBeenCalledWith("/mock-home/.opencode/logs/codex-plugin/codex-plugin.log.2", { + force: true, + }); + expect(fsMocks.rename).toHaveBeenCalledWith( + "/mock-home/.opencode/logs/codex-plugin/codex-plugin.log", + "/mock-home/.opencode/logs/codex-plugin/codex-plugin.log.1", + ); + expect(fsMocks.appendFile).toHaveBeenCalled(); }); - it("logWarn logs message without data", async () => { - const { logWarn } = await import("../lib/logger.js"); - warnSpy.mockClear(); - logWarn("just-message"); - expect(warnSpy).toHaveBeenCalledWith("[openai-codex-plugin] just-message"); + it("drops oldest buffered logs when queue overflows", async () => { + process.env.CODEX_LOG_QUEUE_MAX = "2"; + fsMocks.existsSync.mockReturnValue(true); + const { logDebug, flushRollingLogsForTest } = await import("../lib/logger.js"); + + logDebug("first"); + logDebug("second"); + logDebug("third"); + 
await flushRollingLogsForTest(); + + expect(fsMocks.appendFile).toHaveBeenCalledTimes(1); + const appended = fsMocks.appendFile.mock.calls[0][1] as string; + expect(appended).toContain('"message":"second"'); + expect(appended).toContain('"message":"third"'); + expect(appended).not.toContain('"message":"first"'); + expect(warnSpy).toHaveBeenCalledWith( + '[openai-codex-plugin] Rolling log queue overflow; dropping oldest entries {"maxQueueLength":2}', + ); }); }); diff --git a/test/plugin-config.test.ts b/test/plugin-config.test.ts index 8cf92ac..abdc0ec 100644 --- a/test/plugin-config.test.ts +++ b/test/plugin-config.test.ts @@ -10,6 +10,7 @@ vi.mock("node:fs", () => ({ readFileSync: vi.fn(), writeFileSync: vi.fn(), mkdirSync: vi.fn(), + appendFileSync: vi.fn(), })); // Get mocked functions diff --git a/test/prompts-codex.test.ts b/test/prompts-codex.test.ts index 2cca9c3..9496d55 100644 --- a/test/prompts-codex.test.ts +++ b/test/prompts-codex.test.ts @@ -6,6 +6,7 @@ const files = new Map(); const existsSync = vi.fn((file: string) => files.has(file)); const readFileSync = vi.fn((file: string) => files.get(file) ?? 
""); const writeFileSync = vi.fn((file: string, content: string) => files.set(file, content)); +const appendFileSync = vi.fn((file: string, content: string) => files.set(`${file}-rolling`, content)); const mkdirSync = vi.fn(); const homedirMock = vi.fn(() => "/mock-home"); const fetchMock = vi.fn(); @@ -15,11 +16,13 @@ vi.mock("node:fs", () => ({ existsSync, readFileSync, writeFileSync, + appendFileSync, mkdirSync, }, existsSync, readFileSync, writeFileSync, + appendFileSync, mkdirSync, })); @@ -38,13 +41,15 @@ describe("Codex Instructions Fetcher", () => { existsSync.mockClear(); readFileSync.mockClear(); writeFileSync.mockClear(); + appendFileSync.mockClear(); mkdirSync.mockClear(); homedirMock.mockReturnValue("/mock-home"); fetchMock.mockClear(); - global.fetch = fetchMock; + (global as any).fetch = fetchMock; codexInstructionsCache.clear(); }); + afterEach(() => { // Cleanup global fetch if needed delete (global as any).fetch; diff --git a/test/prompts-opencode-codex.test.ts b/test/prompts-opencode-codex.test.ts index 22ec7e5..e407ee4 100644 --- a/test/prompts-opencode-codex.test.ts +++ b/test/prompts-opencode-codex.test.ts @@ -1,28 +1,45 @@ -import { join } from "node:path"; -import { afterEach, beforeEach, describe, expect, it, vi } from "vitest"; -import { openCodePromptCache } from "../lib/cache/session-cache.js"; +import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'; +import { join } from 'node:path'; +import { openCodePromptCache } from '../lib/cache/session-cache.js'; const files = new Map(); const readFileMock = vi.fn(); const writeFileMock = vi.fn(); const mkdirMock = vi.fn(); -const homedirMock = vi.fn(() => "/mock-home"); +const homedirMock = vi.fn(() => '/mock-home'); const fetchMock = vi.fn(); const recordCacheHitMock = vi.fn(); const recordCacheMissMock = vi.fn(); +const existsSync = vi.fn(() => false); +const appendFileSync = vi.fn(); +const writeFileSync = vi.fn(); +const mkdirSync = vi.fn(); -vi.mock("node:fs/promises", () 
=> ({ +vi.mock('node:fs/promises', () => ({ mkdir: mkdirMock, readFile: readFileMock, writeFile: writeFileMock, })); -vi.mock("node:os", () => ({ +vi.mock('node:fs', () => ({ + default: { + existsSync, + appendFileSync, + writeFileSync, + mkdirSync, + }, + existsSync, + appendFileSync, + writeFileSync, + mkdirSync, +})); + +vi.mock('node:os', () => ({ __esModule: true, homedir: homedirMock, })); -vi.mock("../lib/cache/session-cache.js", () => ({ +vi.mock('../lib/cache/session-cache.js', () => ({ openCodePromptCache: { get: vi.fn(), set: vi.fn(), @@ -31,299 +48,293 @@ vi.mock("../lib/cache/session-cache.js", () => ({ getOpenCodeCacheKey: vi.fn(), })); -vi.mock("../lib/cache/cache-metrics.js", () => ({ +vi.mock('../lib/cache/cache-metrics.js', () => ({ recordCacheHit: recordCacheHitMock, recordCacheMiss: recordCacheMissMock, })); -describe("OpenCode Codex Prompt Fetcher", () => { - const cacheDir = join("/mock-home", ".opencode", "cache"); - const cacheFile = join(cacheDir, "opencode-codex.txt"); - const cacheMetaFile = join(cacheDir, "opencode-codex-meta.json"); +describe('OpenCode Codex Prompt Fetcher', () => { + const cacheDir = join('/mock-home', '.opencode', 'cache'); + const cacheFile = join(cacheDir, 'opencode-codex.txt'); + const cacheMetaFile = join(cacheDir, 'opencode-codex-meta.json'); beforeEach(() => { files.clear(); readFileMock.mockClear(); writeFileMock.mockClear(); mkdirMock.mockClear(); - homedirMock.mockReturnValue("/mock-home"); + homedirMock.mockReturnValue('/mock-home'); fetchMock.mockClear(); recordCacheHitMock.mockClear(); recordCacheMissMock.mockClear(); + existsSync.mockReset(); + appendFileSync.mockReset(); + writeFileSync.mockReset(); + mkdirSync.mockReset(); openCodePromptCache.clear(); - vi.stubGlobal("fetch", fetchMock); + vi.stubGlobal('fetch', fetchMock); }); afterEach(() => { vi.unstubAllGlobals(); }); - describe("getOpenCodeCodexPrompt", () => { - it("returns cached content from session cache when available", async () => { - const 
cachedData = "cached-prompt-content"; - openCodePromptCache.get = vi.fn().mockReturnValue({ data: cachedData, etag: "etag-123" }); + describe('getOpenCodeCodexPrompt', () => { + it('returns cached content from session cache when available', async () => { + const cachedData = 'cached-prompt-content'; + openCodePromptCache.get = vi.fn().mockReturnValue({ data: cachedData, etag: 'etag-123' }); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); expect(result).toBe(cachedData); - expect(recordCacheHitMock).toHaveBeenCalledWith("opencodePrompt"); + expect(recordCacheHitMock).toHaveBeenCalledWith('opencodePrompt'); expect(recordCacheMissMock).not.toHaveBeenCalled(); expect(readFileMock).not.toHaveBeenCalled(); }); - it("falls back to file cache when session cache misses", async () => { + it('falls back to file cache when session cache misses', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - const cachedContent = "file-cached-content"; + const cachedContent = 'file-cached-content'; const cachedMeta = { etag: '"file-etag"', lastChecked: Date.now() - 20 * 60 * 1000 }; // 20 minutes ago (outside TTL) readFileMock.mockImplementation((path) => { if (path === cacheFile) return Promise.resolve(cachedContent); if (path === cacheMetaFile) return Promise.resolve(JSON.stringify(cachedMeta)); - return Promise.reject(new Error("File not found")); + return Promise.reject(new Error('File not found')); }); - fetchMock.mockResolvedValue( - new Response("fresh-content", { - status: 200, - headers: { etag: '"new-etag"' }, - }), - ); + fetchMock.mockResolvedValue(new Response('fresh-content', { + status: 200, + headers: { etag: '"new-etag"' } + })); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await 
import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); - expect(result).toBe("fresh-content"); - expect(recordCacheMissMock).toHaveBeenCalledWith("opencodePrompt"); + expect(result).toBe('fresh-content'); + expect(recordCacheMissMock).toHaveBeenCalledWith('opencodePrompt'); expect(writeFileMock).toHaveBeenCalledTimes(2); // Check that both files were written (order doesn't matter) const writeCalls = writeFileMock.mock.calls; expect(writeCalls).toHaveLength(2); - + // Find calls by file path - const contentFileCall = writeCalls.find((call) => call[0] === cacheFile); - const metaFileCall = writeCalls.find((call) => call[0] === cacheMetaFile); - + const contentFileCall = writeCalls.find(call => call[0] === cacheFile); + const metaFileCall = writeCalls.find(call => call[0] === cacheMetaFile); + expect(contentFileCall).toBeTruthy(); expect(metaFileCall).toBeTruthy(); - expect(contentFileCall?.[1]).toBe("fresh-content"); - expect(contentFileCall?.[2]).toBe("utf-8"); - expect(metaFileCall?.[2]).toBe("utf-8"); - expect(metaFileCall?.[1]).toContain("new-etag"); + expect(contentFileCall![1]).toBe('fresh-content'); + expect(contentFileCall![2]).toBe('utf-8'); + expect(metaFileCall![2]).toBe('utf-8'); + expect(metaFileCall![1]).toContain('new-etag'); }); - it("uses file cache when within TTL period", async () => { + it('uses file cache when within TTL period', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - const cachedContent = "recent-cache-content"; + const cachedContent = 'recent-cache-content'; const recentTime = Date.now() - 5 * 60 * 1000; // 5 minutes ago const cachedMeta = { etag: '"recent-etag"', lastChecked: recentTime }; readFileMock.mockImplementation((path) => { if (path === cacheFile) return Promise.resolve(cachedContent); if (path === cacheMetaFile) return Promise.resolve(JSON.stringify(cachedMeta)); - return Promise.reject(new Error("File not found")); + return Promise.reject(new Error('File not 
found')); }); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); expect(result).toBe(cachedContent); expect(fetchMock).not.toHaveBeenCalled(); - expect(openCodePromptCache.set).toHaveBeenCalledWith("main", { + expect(openCodePromptCache.set).toHaveBeenCalledWith('main', { data: cachedContent, - etag: '"recent-etag"', + etag: '"recent-etag"' }); }); - it("handles 304 Not Modified response", async () => { + it('handles 304 Not Modified response', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - const cachedContent = "not-modified-content"; + const cachedContent = 'not-modified-content'; const oldTime = Date.now() - 20 * 60 * 1000; // 20 minutes ago const cachedMeta = { etag: '"old-etag"', lastChecked: oldTime }; readFileMock.mockImplementation((path) => { if (path === cacheFile) return Promise.resolve(cachedContent); if (path === cacheMetaFile) return Promise.resolve(JSON.stringify(cachedMeta)); - return Promise.reject(new Error("File not found")); + return Promise.reject(new Error('File not found')); }); - fetchMock.mockResolvedValue( - new Response(null, { - status: 304, - headers: {}, - }), - ); + fetchMock.mockResolvedValue(new Response(null, { + status: 304, + headers: {} + })); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); expect(result).toBe(cachedContent); expect(fetchMock).toHaveBeenCalledTimes(1); const fetchCall = fetchMock.mock.calls[0]; - expect(fetchCall[0]).toContain("github"); - expect(typeof fetchCall[1]).toBe("object"); - expect(fetchCall[1]).toHaveProperty("headers"); - expect((fetchCall[1] as any).headers).toEqual({ "If-None-Match": '"old-etag"' }); + 
expect(fetchCall[0]).toContain('github'); + expect(typeof fetchCall[1]).toBe('object'); + expect(fetchCall[1]).toHaveProperty('headers'); + expect((fetchCall[1] as any).headers).toEqual({ 'If-None-Match': '"old-etag"' }); }); - it("handles fetch failure with fallback to cache", async () => { + it('handles fetch failure with fallback to cache', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - const cachedContent = "fallback-content"; + const cachedContent = 'fallback-content'; const oldTime = Date.now() - 20 * 60 * 1000; const cachedMeta = { etag: '"fallback-etag"', lastChecked: oldTime }; readFileMock.mockImplementation((path) => { if (path === cacheFile) return Promise.resolve(cachedContent); if (path === cacheMetaFile) return Promise.resolve(JSON.stringify(cachedMeta)); - return Promise.reject(new Error("File not found")); + return Promise.reject(new Error('File not found')); }); - fetchMock.mockRejectedValue(new Error("Network error")); + fetchMock.mockRejectedValue(new Error('Network error')); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); expect(result).toBe(cachedContent); - expect(openCodePromptCache.set).toHaveBeenCalledWith("main", { + expect(openCodePromptCache.set).toHaveBeenCalledWith('main', { data: cachedContent, - etag: '"fallback-etag"', + etag: '"fallback-etag"' }); }); - it("throws error when no cache available and fetch fails", async () => { + it('throws error when no cache available and fetch fails', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - readFileMock.mockRejectedValue(new Error("No cache file")); + readFileMock.mockRejectedValue(new Error('No cache file')); - fetchMock.mockRejectedValue(new Error("Network error")); + fetchMock.mockRejectedValue(new Error('Network error')); - const { getOpenCodeCodexPrompt } 
= await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); await expect(getOpenCodeCodexPrompt()).rejects.toThrow( - "Failed to fetch OpenCode codex.txt and no cache available", + 'Failed to fetch OpenCode codex.txt and no cache available' ); }); - it("handles non-200 response status with fallback to cache", async () => { + it('handles non-200 response status with fallback to cache', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - const cachedContent = "error-fallback-content"; + const cachedContent = 'error-fallback-content'; const oldTime = Date.now() - 20 * 60 * 1000; const cachedMeta = { etag: '"error-etag"', lastChecked: oldTime }; readFileMock.mockImplementation((path) => { if (path === cacheFile) return Promise.resolve(cachedContent); if (path === cacheMetaFile) return Promise.resolve(JSON.stringify(cachedMeta)); - return Promise.reject(new Error("File not found")); + return Promise.reject(new Error('File not found')); }); - fetchMock.mockResolvedValue(new Response("Error", { status: 500 })); + fetchMock.mockResolvedValue(new Response('Error', { status: 500 })); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); expect(result).toBe(cachedContent); }); - it("creates cache directory when it does not exist", async () => { + it('creates cache directory when it does not exist', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - readFileMock.mockRejectedValue(new Error("No cache files")); - fetchMock.mockResolvedValue( - new Response("new-content", { - status: 200, - headers: { etag: '"new-etag"' }, - }), - ); + readFileMock.mockRejectedValue(new Error('No cache files')); + fetchMock.mockResolvedValue(new Response('new-content', { + status: 200, + headers: { 
etag: '"new-etag"' } + })); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); await getOpenCodeCodexPrompt(); expect(mkdirMock).toHaveBeenCalledWith(cacheDir, { recursive: true }); }); - it("handles missing etag in response", async () => { + it('handles missing etag in response', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - readFileMock.mockRejectedValue(new Error("No cache files")); - fetchMock.mockResolvedValue( - new Response("no-etag-content", { - status: 200, - headers: {}, // No etag header - }), - ); + readFileMock.mockRejectedValue(new Error('No cache files')); + fetchMock.mockResolvedValue(new Response('no-etag-content', { + status: 200, + headers: {} // No etag header + })); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); - expect(result).toBe("no-etag-content"); + expect(result).toBe('no-etag-content'); expect(writeFileMock).toHaveBeenCalledWith( cacheMetaFile, expect.stringContaining('"etag": ""'), - "utf-8", + 'utf-8' ); }); - it("handles malformed cache metadata", async () => { + it('handles malformed cache metadata', async () => { openCodePromptCache.get = vi.fn().mockReturnValue(undefined); - const cachedContent = "good-content"; + const cachedContent = 'good-content'; readFileMock.mockImplementation((path) => { if (path === cacheFile) return Promise.resolve(cachedContent); - if (path === cacheMetaFile) return Promise.resolve("invalid json"); - return Promise.reject(new Error("File not found")); + if (path === cacheMetaFile) return Promise.resolve('invalid json'); + return Promise.reject(new Error('File not found')); }); - fetchMock.mockResolvedValue( - new Response("fresh-content", { - status: 200, - headers: { etag: 
'"fresh-etag"' }, - }), - ); + fetchMock.mockResolvedValue(new Response('fresh-content', { + status: 200, + headers: { etag: '"fresh-etag"' } + })); - const { getOpenCodeCodexPrompt } = await import("../lib/prompts/opencode-codex.js"); + const { getOpenCodeCodexPrompt } = await import('../lib/prompts/opencode-codex.js'); const result = await getOpenCodeCodexPrompt(); - expect(result).toBe("fresh-content"); + expect(result).toBe('fresh-content'); }); }); - describe("getCachedPromptPrefix", () => { - it("returns first N characters of cached content", async () => { - const fullContent = "This is the full cached prompt content for testing"; + describe('getCachedPromptPrefix', () => { + it('returns first N characters of cached content', async () => { + const fullContent = 'This is the full cached prompt content for testing'; readFileMock.mockResolvedValue(fullContent); - const { getCachedPromptPrefix } = await import("../lib/prompts/opencode-codex.js"); + const { getCachedPromptPrefix } = await import('../lib/prompts/opencode-codex.js'); const result = await getCachedPromptPrefix(10); - expect(result).toBe("This is th"); - expect(readFileMock).toHaveBeenCalledWith(cacheFile, "utf-8"); + expect(result).toBe('This is th'); + expect(readFileMock).toHaveBeenCalledWith(cacheFile, 'utf-8'); }); - it("returns null when cache file does not exist", async () => { - readFileMock.mockRejectedValue(new Error("File not found")); + it('returns null when cache file does not exist', async () => { + readFileMock.mockRejectedValue(new Error('File not found')); - const { getCachedPromptPrefix } = await import("../lib/prompts/opencode-codex.js"); + const { getCachedPromptPrefix } = await import('../lib/prompts/opencode-codex.js'); const result = await getCachedPromptPrefix(); expect(result).toBeNull(); }); - it("uses default character count when not specified", async () => { - const fullContent = "A".repeat(100); + it('uses default character count when not specified', async () => { + const 
fullContent = 'A'.repeat(100); readFileMock.mockResolvedValue(fullContent); - const { getCachedPromptPrefix } = await import("../lib/prompts/opencode-codex.js"); + const { getCachedPromptPrefix } = await import('../lib/prompts/opencode-codex.js'); const result = await getCachedPromptPrefix(); - expect(result).toBe("A".repeat(50)); + expect(result).toBe('A'.repeat(50)); }); - it("handles content shorter than requested characters", async () => { - const shortContent = "Short"; + it('handles content shorter than requested characters', async () => { + const shortContent = 'Short'; readFileMock.mockResolvedValue(shortContent); - const { getCachedPromptPrefix } = await import("../lib/prompts/opencode-codex.js"); + const { getCachedPromptPrefix } = await import('../lib/prompts/opencode-codex.js'); const result = await getCachedPromptPrefix(20); - expect(result).toBe("Short"); + expect(result).toBe('Short'); }); }); -}); +}); \ No newline at end of file diff --git a/test/request-transformer.test.ts b/test/request-transformer.test.ts index eb29bbd..9b591ee 100644 --- a/test/request-transformer.test.ts +++ b/test/request-transformer.test.ts @@ -1,36 +1,21 @@ -import { describe, expect, it } from "vitest"; +import { describe, it, expect } from "vitest"; import { - addCodexBridgeMessage, - addToolRemapMessage, - filterInput, - filterOpenCodeSystemPrompts, + normalizeModel, getModelConfig, getReasoningConfig, + filterInput, + addToolRemapMessage, isOpenCodeSystemPrompt, - normalizeModel, - transformRequestBody, + filterOpenCodeSystemPrompts, + addCodexBridgeMessage, + transformRequestBody as transformRequestBodyInternal, } from "../lib/request/request-transformer.js"; -import type { TransformRequestOptions } from "../lib/request/request-transformer.js"; -import type { InputItem, RequestBody, SessionContext, UserConfig } from "../lib/types.js"; - -async function runTransform( - body: RequestBody, - codexInstructions: string, - userConfig?: UserConfig, - codexMode = true, - 
   options?: TransformRequestOptions,
-  sessionContext?: SessionContext,
-) {
-  const result = await transformRequestBody(
-    body,
-    codexInstructions,
-    userConfig,
-    codexMode,
-    options,
-    sessionContext,
-  );
+import type { RequestBody, UserConfig, InputItem } from "../lib/types.js";
+
+const transformRequestBody = async (...args: Parameters<typeof transformRequestBodyInternal>) => {
+  const result = await transformRequestBodyInternal(...args);
   return result.body;
-}
+};
 
 describe("normalizeModel", () => {
   it("should normalize gpt-5", async () => {
@@ -82,6 +67,13 @@ describe("normalizeModel", () => {
     expect(normalizeModel("openai/codex-mini-latest")).toBe("gpt-5.1-codex-mini");
   });
 
+  it("should normalize codex max variants to gpt-5.1-codex-max", async () => {
+    expect(normalizeModel("gpt-5.1-codex-max")).toBe("gpt-5.1-codex-max");
+    expect(normalizeModel("gpt51-codex-max")).toBe("gpt-5.1-codex-max");
+    expect(normalizeModel("gpt-5-codex-max")).toBe("gpt-5.1-codex-max");
+    expect(normalizeModel("codex-max")).toBe("gpt-5.1-codex-max");
+  });
+
   it("should normalize gpt-5.1 general presets to gpt-5.1", async () => {
     expect(normalizeModel("gpt-5.1")).toBe("gpt-5.1");
     expect(normalizeModel("gpt-5.1-medium")).toBe("gpt-5.1");
@@ -144,6 +136,32 @@ describe("getReasoningConfig (gpt-5.1)", () => {
   });
 });
 
+describe("getReasoningConfig (gpt-5.1-codex-max)", () => {
+  it("defaults to medium and allows xhigh effort", async () => {
+    const defaults = getReasoningConfig("gpt-5.1-codex-max", {});
+    expect(defaults.effort).toBe("medium");
+
+    const xhigh = getReasoningConfig("gpt-5.1-codex-max", { reasoningEffort: "xhigh" });
+    expect(xhigh.effort).toBe("xhigh");
+  });
+
+  it("downgrades minimal or none to low for codex max", async () => {
+    const minimal = getReasoningConfig("gpt-5.1-codex-max", { reasoningEffort: "minimal" });
+    expect(minimal.effort).toBe("low");
+
+    const none = getReasoningConfig("gpt-5.1-codex-max", { reasoningEffort: "none" });
+    expect(none.effort).toBe("low");
+  });
+
+  it("downgrades xhigh to high on other models", async () => {
+    const codex = getReasoningConfig("gpt-5.1-codex", { reasoningEffort: "xhigh" });
+    expect(codex.effort).toBe("high");
+
+    const general = getReasoningConfig("gpt-5", { reasoningEffort: "xhigh" });
+    expect(general.effort).toBe("high");
+  });
+});
+
 describe("filterInput", () => {
   it("should handle null/undefined in filterInput", async () => {
     expect(filterInput(null as any)).toBeNull();
@@ -160,7 +178,7 @@ describe("filterInput", () => {
     const input: InputItem[] = [{ type: "message", role: "user", content: "hello" }];
     const result = filterInput(input);
     expect(result).toEqual(input);
-    expect(result?.[0]).not.toHaveProperty("id");
+    expect(result![0]).not.toHaveProperty("id");
   });
 
   it("should remove ALL message IDs (rs_, msg_, etc.) for store:false compatibility", async () => {
@@ -173,12 +191,12 @@ describe("filterInput", () => {
 
     // All items should remain (no filtering), but ALL IDs removed
     expect(result).toHaveLength(3);
-    expect(result?.[0]).not.toHaveProperty("id");
-    expect(result?.[1]).not.toHaveProperty("id");
-    expect(result?.[2]).not.toHaveProperty("id");
-    expect(result?.[0].content).toBe("hello");
-    expect(result?.[1].content).toBe("world");
-    expect(result?.[2].content).toBe("test");
+    expect(result![0]).not.toHaveProperty("id");
+    expect(result![1]).not.toHaveProperty("id");
+    expect(result![2]).not.toHaveProperty("id");
+    expect(result![0].content).toBe("hello");
+    expect(result![1].content).toBe("world");
+    expect(result![2].content).toBe("test");
   });
 
   it("removes metadata when normalizing stateless input", async () => {
@@ -194,11 +212,11 @@ describe("filterInput", () => {
 
     const result = filterInput(input);
     expect(result).toHaveLength(1);
-    expect(result?.[0]).not.toHaveProperty("id");
-    expect(result?.[0].type).toBe("message");
-    expect(result?.[0].role).toBe("user");
-    expect(result?.[0].content).toBe("test");
-    expect(result?.[0]).not.toHaveProperty("metadata");
+    expect(result![0]).not.toHaveProperty("id");
+    expect(result![0].type).toBe("message");
+    expect(result![0].role).toBe("user");
+    expect(result![0].content).toBe("test");
+    expect(result![0]).not.toHaveProperty("metadata");
   });
 
   it("preserves metadata when IDs are preserved for host caching", async () => {
@@ -214,8 +232,8 @@ describe("filterInput", () => {
 
     const result = filterInput(input, { preserveIds: true });
     expect(result).toHaveLength(1);
-    expect(result?.[0]).toHaveProperty("id", "msg_123");
-    expect(result?.[0]).toHaveProperty("metadata");
+    expect(result![0]).toHaveProperty("id", "msg_123");
+    expect(result![0]).toHaveProperty("metadata");
   });
 
   it("should handle mixed items with and without IDs", async () => {
@@ -228,12 +246,12 @@ describe("filterInput", () => {
 
     // All items kept, IDs removed from items that had them
     expect(result).toHaveLength(3);
-    expect(result?.[0]).not.toHaveProperty("id");
-    expect(result?.[1]).not.toHaveProperty("id");
-    expect(result?.[2]).not.toHaveProperty("id");
-    expect(result?.[0].content).toBe("1");
-    expect(result?.[1].content).toBe("2");
-    expect(result?.[2].content).toBe("3");
+    expect(result![0]).not.toHaveProperty("id");
+    expect(result![1]).not.toHaveProperty("id");
+    expect(result![2]).not.toHaveProperty("id");
+    expect(result![0].content).toBe("1");
+    expect(result![1].content).toBe("2");
+    expect(result![2].content).toBe("3");
   });
 
   it("should handle custom ID formats (future-proof)", async () => {
@@ -244,8 +262,8 @@ describe("filterInput", () => {
 
     const result = filterInput(input);
     expect(result).toHaveLength(2);
-    expect(result?.[0]).not.toHaveProperty("id");
-    expect(result?.[1]).not.toHaveProperty("id");
+    expect(result![0]).not.toHaveProperty("id");
+    expect(result![1]).not.toHaveProperty("id");
   });
 
   it("should return undefined for undefined input", async () => {
@@ -403,9 +421,9 @@ describe("addToolRemapMessage", () => {
 
     const result = addToolRemapMessage(input, true);
     expect(result).toHaveLength(2);
-    expect(result?.[0].role).toBe("developer");
-    expect(result?.[0].type).toBe("message");
-    expect((result?.[0].content as any)[0].text).toContain("apply_patch");
+    expect(result![0].role).toBe("developer");
+    expect(result![0].type).toBe("message");
+    expect((result![0].content as any)[0].text).toContain("apply_patch");
   });
 
   it("should not modify input when tools not present", async () => {
@@ -528,7 +546,7 @@ describe("filterOpenCodeSystemPrompts", () => {
     ];
     const result = await filterOpenCodeSystemPrompts(input);
     expect(result).toHaveLength(1);
-    expect(result?.[0].role).toBe("user");
+    expect(result![0].role).toBe("user");
   });
 
   it("should keep user messages", async () => {
@@ -566,8 +584,8 @@ describe("filterOpenCodeSystemPrompts", () => {
     const result = await filterOpenCodeSystemPrompts(input);
     // Should filter codex.txt but keep AGENTS.md
     expect(result).toHaveLength(2);
-    expect(result?.[0].content).toContain("AGENTS.md");
-    expect(result?.[1].role).toBe("user");
+    expect(result![0].content).toContain("AGENTS.md");
+    expect(result![1].role).toBe("user");
   });
 
   it("should keep environment+AGENTS.md concatenated message", async () => {
@@ -589,50 +607,8 @@ describe("filterOpenCodeSystemPrompts", () => {
     const result = await filterOpenCodeSystemPrompts(input);
     // Should filter first message (codex.txt) but keep second (env+AGENTS.md)
     expect(result).toHaveLength(2);
-    expect(result?.[0].content).toContain("AGENTS.md");
-    expect(result?.[1].role).toBe("user");
-  });
-
-  it("should preserve auto-compaction summaries but drop file instructions", async () => {
-    const input: InputItem[] = [
-      {
-        type: "message",
-        role: "developer",
-        content: [
-          {
-            type: "input_text",
-            text: "Auto-compaction summary saved to ~/.opencode/summaries/session.md",
-          },
-          { type: "input_text", text: "- Built caching layer and refreshed metrics." },
-          { type: "input_text", text: "Open the summary file for the full log." },
-        ],
-      },
-      { type: "message", role: "user", content: "hello" },
-    ];
-    const result = await filterOpenCodeSystemPrompts(input);
-    expect(result).toHaveLength(2);
-    const summary = result?.[0];
-    expect(summary.role).toBe("developer");
-    expect(typeof summary.content).toBe("string");
-    const summaryText = summary.content as string;
-    expect(summaryText).toContain("Auto-compaction summary");
-    expect(summaryText).toContain("Built caching layer");
-    expect(summaryText).not.toContain("~/.opencode");
-    expect(summaryText).not.toContain("summary file");
-  });
-
-  it("should drop compaction prompts that only reference summary files", async () => {
-    const input: InputItem[] = [
-      {
-        type: "message",
-        role: "developer",
-        content: "Auto-compaction triggered. Write the summary to summary_file.",
-      },
-      { type: "message", role: "user", content: "hello" },
-    ];
-    const result = await filterOpenCodeSystemPrompts(input);
-    expect(result).toHaveLength(1);
-    expect(result?.[0].role).toBe("user");
+    expect(result![0].content).toContain("AGENTS.md");
+    expect(result![1].role).toBe("user");
   });
 
   it("should return undefined for undefined input", async () => {
@@ -640,64 +616,15 @@ describe("filterOpenCodeSystemPrompts", () => {
   });
 });
 
-describe("compaction integration", () => {
-  const instructions = "Codex instructions";
-
-  it("rewrites input when manual codex-compact command is present", async () => {
-    const body: RequestBody = {
-      model: "gpt-5",
-      input: [
-        { type: "message", role: "developer", content: "AGENTS" },
-        { type: "message", role: "user", content: "Do work" },
-        { type: "message", role: "user", content: "/codex-compact" },
-      ],
-    };
-    const original = body.input?.map((item) => JSON.parse(JSON.stringify(item)));
-    const result = await transformRequestBody(body, instructions, undefined, true, {
-      compaction: {
-        settings: { enabled: true, autoLimitTokens: undefined, autoMinMessages: 8 },
-        commandText: "codex-compact",
-        originalInput: original,
-      },
-    });
-
-    expect(result.compactionDecision?.mode).toBe("command");
-    expect(result.body.input).toHaveLength(2);
-    expect(result.body.tools).toBeUndefined();
-  });
-
-  it("auto-compacts when token limit exceeded", async () => {
-    const longUser = "lorem ipsum ".repeat(200);
-    const body: RequestBody = {
-      model: "gpt-5",
-      input: [
-        { type: "message", role: "user", content: longUser },
-        { type: "message", role: "assistant", content: "ack" },
-      ],
-    };
-    const original = body.input?.map((item) => JSON.parse(JSON.stringify(item)));
-    const result = await transformRequestBody(body, instructions, undefined, true, {
-      compaction: {
-        settings: { enabled: true, autoLimitTokens: 10, autoMinMessages: 1 },
-        commandText: null,
-        originalInput: original,
-      },
-    });
-
-    expect(result.compactionDecision?.mode).toBe("auto");
-    expect(result.body.input).toHaveLength(2);
-  });
-});
-
 describe("addCodexBridgeMessage", () => {
   it("should prepend bridge message when tools present", async () => {
     const input = [{ type: "message", role: "user", content: [{ type: "input_text", text: "test" }] }];
 
     const result = addCodexBridgeMessage(input, true);
     expect(result).toHaveLength(2);
-    expect(result?.[0].role).toBe("developer");
-    expect(result?.[0].type).toBe("message");
-    expect((result?.[0].content as any)[0].text).toContain("Codex in OpenCode");
+    expect(result![0].role).toBe("developer");
+    expect(result![0].type).toBe("message");
+    expect((result![0].content as any)[0].text).toContain("Codex in OpenCode");
   });
 
   it("should not modify input when tools not present", async () => {
@@ -711,7 +638,7 @@ describe("addCodexBridgeMessage", () => {
   });
 });
 
-describe("runTransform", () => {
+describe("transformRequestBody", () => {
   const codexInstructions = "Test Codex Instructions";
 
   it("preserves existing prompt_cache_key passed by host (OpenCode)", async () => {
@@ -722,7 +649,7 @@ describe("runTransform", () => {
       // host-provided field is allowed by plugin
       prompt_cache_key: "ses_host_key_123",
     };
-    const result: any = await runTransform(body, codexInstructions);
+    const result: any = await transformRequestBody(body, codexInstructions);
 
     expect(result.prompt_cache_key).toBe("ses_host_key_123");
   });
@@ -732,7 +659,7 @@ describe("runTransform", () => {
       input: [],
       promptCacheKey: "ses_camel_key_456",
     };
-    const result: any = await runTransform(body, codexInstructions);
+    const result: any = await transformRequestBody(body, codexInstructions);
 
     expect(result.prompt_cache_key).toBe("ses_camel_key_456");
   });
@@ -742,17 +669,60 @@ describe("runTransform", () => {
      input: [],
      metadata: { conversation_id: "meta-conv-123" },
    };
-    const result: any = await runTransform(body, codexInstructions);
+    const result: any = await transformRequestBody(body, codexInstructions);
 
     expect(result.prompt_cache_key).toBe("cache_meta-conv-123");
   });
 
+  it("derives fork-aware prompt_cache_key when fork id is present in metadata", async () => {
+    const body: RequestBody = {
+      model: "gpt-5",
+      metadata: {
+        conversation_id: "meta-conv-123",
+        forkId: "branch-1",
+      },
+      input: [],
+    } as any;
+    const result: any = await transformRequestBody(body, codexInstructions);
+    expect(result.prompt_cache_key).toBe("cache_meta-conv-123-fork-branch-1");
+  });
+
+  it("derives fork-aware prompt_cache_key when fork id is present in root", async () => {
+    const body: RequestBody = {
+      model: "gpt-5",
+      conversation_id: "meta-conv-123",
+      fork_id: "branch-2",
+      input: [],
+    } as any;
+    const result: any = await transformRequestBody(body, codexInstructions);
+    expect(result.prompt_cache_key).toBe("cache_meta-conv-123-fork-branch-2");
+  });
+
+  it("reuses the same prompt_cache_key across non-structural overrides", async () => {
+    const baseBody: RequestBody = {
+      model: "gpt-5",
+      metadata: {
+        conversation_id: "meta-conv-789",
+        forkId: "fork-x",
+      },
+      input: [],
+    } as any;
+    const body1: RequestBody = { ...baseBody } as RequestBody;
+    const body2: RequestBody = { ...baseBody, text: { verbosity: "low" as const } } as RequestBody;
+
+    const result1: any = await transformRequestBody(body1, codexInstructions);
+    const result2: any = await transformRequestBody(body2, codexInstructions);
+
+    expect(result1.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
+    expect(result2.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
+  });
+
   it("derives fork-aware prompt_cache_key when fork id is present in metadata", async () => {
     const body: RequestBody = {
       model: "gpt-5",
       input: [],
       metadata: { conversation_id: "meta-conv-123", forkId: "branch-1" },
     };
-    const result: any = await runTransform(body, codexInstructions);
+    const result: any = await transformRequestBody(body, codexInstructions);
 
     expect(result.prompt_cache_key).toBe("cache_meta-conv-123-fork-branch-1");
   });
@@ -763,7 +733,42 @@ describe("runTransform", () => {
       metadata: { conversation_id: "meta-conv-123" },
       forkId: "branch-2" as any,
     } as any;
-    const result: any = await runTransform(body, codexInstructions);
+    const result: any = await transformRequestBody(body, codexInstructions);
+    expect(result.prompt_cache_key).toBe("cache_meta-conv-123-fork-branch-2");
+  });
+
+  it("reuses the same prompt_cache_key across non-structural overrides", async () => {
+    const baseMetadata = { conversation_id: "meta-conv-789", forkId: "fork-x" };
+    const body1: RequestBody = {
+      model: "gpt-5",
+      input: [],
+      metadata: { ...baseMetadata },
+    };
+    const body2: RequestBody = {
+      model: "gpt-5",
+      input: [],
+      metadata: { ...baseMetadata },
+      // Soft overrides that should not change the cache key
+      max_output_tokens: 1024,
+      reasoning: { effort: "high" } as any,
+      text: { verbosity: "high" } as any,
+    };
+
+    const result1: any = await transformRequestBody(body1, codexInstructions);
+    const result2: any = await transformRequestBody(body2, codexInstructions);
+
+    expect(result1.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
+    expect(result2.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
+  });
+
+  it("derives fork-aware prompt_cache_key when fork id is present in root", async () => {
+    const body: RequestBody = {
+      model: "gpt-5",
+      input: [],
+      metadata: { conversation_id: "meta-conv-123" },
+      forkId: "branch-2" as any,
+    } as any;
+    const result: any = await transformRequestBody(body, codexInstructions);
 
     expect(result.prompt_cache_key).toBe("cache_meta-conv-123-fork-branch-2");
   });
@@ -784,23 +789,45 @@ describe("runTransform", () => {
       text: { verbosity: "high" } as any,
     };
 
-    const result1: any = await runTransform(body1, codexInstructions);
-    const result2: any = await runTransform(body2, codexInstructions);
+    const result1: any = await transformRequestBody(body1, codexInstructions);
+    const result2: any = await transformRequestBody(body2, codexInstructions);
+
+    expect(result1.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
+    expect(result2.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
+  });
+
+  it("reuses the same prompt_cache_key across non-structural overrides", async () => {
+    const baseMetadata = { conversation_id: "meta-conv-789", forkId: "fork-x" };
+    const body1: RequestBody = {
+      model: "gpt-5",
+      input: [],
+      metadata: { ...baseMetadata },
+    };
+    const body2: RequestBody = {
+      model: "gpt-5",
+      input: [],
+      metadata: { ...baseMetadata },
+      // Soft overrides that should not change the cache key
+      max_output_tokens: 1024,
+      reasoning: { effort: "high" } as any,
+      text: { verbosity: "high" } as any,
+    };
+
+    const result1: any = await transformRequestBody(body1, codexInstructions);
+    const result2: any = await transformRequestBody(body2, codexInstructions);
 
     expect(result1.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
     expect(result2.prompt_cache_key).toBe("cache_meta-conv-789-fork-fork-x");
   });
 
-  it("generates deterministic fallback prompt_cache_key when no identifiers exist", async () => {
+  it("generates fallback prompt_cache_key when no identifiers exist", async () => {
     const body: RequestBody = {
       model: "gpt-5",
       input: [],
     };
-    const result1: any = await runTransform(body, codexInstructions);
-    const result2: any = await runTransform(body, codexInstructions);
-    expect(typeof result1.prompt_cache_key).toBe("string");
-    expect(result1.prompt_cache_key).toMatch(/^cache_[a-f0-9]{12}$/);
-    expect(result2.prompt_cache_key).toBe(result1.prompt_cache_key);
+    const result: any = await transformRequestBody(body, codexInstructions);
+    expect(typeof result.prompt_cache_key).toBe("string");
+    expect(result.prompt_cache_key).toMatch(/^cache_/);
   });
 
   it("should set required Codex fields", async () => {
@@ -808,7 +835,7 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     expect(result.store).toBe(false);
     expect(result.stream).toBe(true);
@@ -820,7 +847,7 @@ describe("runTransform", () => {
       model: "gpt-5-mini",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     expect(result.model).toBe("gpt-5");
   });
@@ -829,7 +856,7 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     expect(result.reasoning?.effort).toBe("medium");
     expect(result.reasoning?.summary).toBe("auto");
@@ -847,7 +874,7 @@ describe("runTransform", () => {
       },
       models: {},
     };
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -855,12 +882,46 @@ describe("runTransform", () => {
     expect(result.reasoning?.summary).toBe("detailed");
   });
 
+  it("should keep xhigh reasoning effort for gpt-5.1-codex-max", async () => {
+    const body: RequestBody = {
+      model: "gpt-5.1-codex-max",
+      input: [],
+    };
+    const userConfig: UserConfig = {
+      global: {
+        reasoningEffort: "xhigh",
+      },
+      models: {},
+    };
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
+      preserveIds: false,
+    });
+
+    expect(result.reasoning?.effort).toBe("xhigh");
+  });
+
+  it("should downgrade xhigh reasoning for non-codex-max models", async () => {
+    const body: RequestBody = {
+      model: "gpt-5.1-codex",
+      input: [],
+    };
+    const userConfig: UserConfig = {
+      global: {
+        reasoningEffort: "xhigh",
+      },
+      models: {},
+    };
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
+      preserveIds: false,
+    });
+
+    expect(result.reasoning?.effort).toBe("high");
+  });
+
   it("should apply default text verbosity", async () => {
     const body: RequestBody = {
       model: "gpt-5",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     expect(result.text?.verbosity).toBe("medium");
   });
@@ -873,7 +934,7 @@ describe("runTransform", () => {
       global: { textVerbosity: "low" },
       models: {},
     };
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
     expect(result.text?.verbosity).toBe("low");
@@ -884,7 +945,7 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     expect(result.include).toEqual(["reasoning.encrypted_content"]);
   });
@@ -897,7 +958,7 @@ describe("runTransform", () => {
       global: { include: ["custom_field", "reasoning.encrypted_content"] },
       models: {},
     };
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
      preserveIds: false,
    });
     expect(result.include).toEqual(["custom_field", "reasoning.encrypted_content"]);
@@ -911,14 +972,14 @@ describe("runTransform", () => {
         { type: "message", role: "user", content: "new" },
       ],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     // All items kept, IDs removed
     expect(result.input).toHaveLength(2);
-    expect(result.input?.[0]).not.toHaveProperty("id");
-    expect(result.input?.[1]).not.toHaveProperty("id");
-    expect(result.input?.[0].content).toBe("old");
-    expect(result.input?.[1].content).toBe("new");
+    expect(result.input![0]).not.toHaveProperty("id");
+    expect(result.input![1]).not.toHaveProperty("id");
+    expect(result.input![0].content).toBe("old");
+    expect(result.input![1].content).toBe("new");
   });
 
   it("should preserve IDs when preserveIds option is set", async () => {
@@ -929,7 +990,7 @@ describe("runTransform", () => {
         { id: "call_1", type: "function_call", role: "assistant" },
       ],
     };
-    const result = await runTransform(body, codexInstructions, undefined, true, {
+    const result = await transformRequestBody(body, codexInstructions, undefined, true, {
       preserveIds: true,
     });
@@ -945,7 +1006,7 @@ describe("runTransform", () => {
       promptCacheKey: "camelcase-key",
       prompt_cache_key: "snakecase-key",
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     // Should prioritize snake_case over camelCase
     expect(result.prompt_cache_key).toBe("snakecase-key");
@@ -957,8 +1018,8 @@ describe("runTransform", () => {
       input: [{ type: "message", role: "user", content: "hello" }],
       tools: [{ name: "test_tool" }],
     };
-    const result = await runTransform(body, codexInstructions);
-    expect(result.input?.[0].role).toBe("developer");
+    const result = await transformRequestBody(body, codexInstructions);
+    expect(result.input![0].role).toBe("developer");
   });
 
   it("should not add tool remap message when tools absent", async () => {
@@ -966,8 +1027,8 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: [{ type: "message", role: "user", content: "hello" }],
     };
-    const result = await runTransform(body, codexInstructions);
-    expect(result.input?.[0].role).toBe("user");
+    const result = await transformRequestBody(body, codexInstructions);
+    expect(result.input![0].role).toBe("user");
   });
 
   it("should remove unsupported parameters", async () => {
@@ -977,7 +1038,7 @@ describe("runTransform", () => {
       max_output_tokens: 1000,
       max_completion_tokens: 2000,
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.max_output_tokens).toBeUndefined();
     expect(result.max_completion_tokens).toBeUndefined();
   });
@@ -991,7 +1052,7 @@ describe("runTransform", () => {
       global: { reasoningEffort: "minimal" },
       models: {},
     };
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
     expect(result.reasoning?.effort).toBe("low");
@@ -1006,7 +1067,7 @@ describe("runTransform", () => {
       global: { reasoningEffort: "minimal" },
       models: {},
     };
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
     expect(result.reasoning?.effort).toBe("minimal");
@@ -1017,7 +1078,7 @@ describe("runTransform", () => {
       model: "gpt-5-nano",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.reasoning?.effort).toBe("minimal");
   });
@@ -1028,11 +1089,11 @@ describe("runTransform", () => {
       input: [{ type: "message", role: "user", content: "hello" }],
       tools: [{ name: "test_tool" }],
     };
-    const result = await runTransform(body, codexInstructions, undefined, true);
+    const result = await transformRequestBody(body, codexInstructions, undefined, true);
 
     expect(result.input).toHaveLength(2);
-    expect(result.input?.[0].role).toBe("developer");
-    expect((result.input?.[0].content as any)[0].text).toContain("Codex in OpenCode");
+    expect(result.input![0].role).toBe("developer");
+    expect((result.input![0].content as any)[0].text).toContain("Codex in OpenCode");
   });
 
   it("should filter OpenCode prompts when codexMode=true", async () => {
@@ -1048,13 +1109,13 @@ describe("runTransform", () => {
       ],
       tools: [{ name: "test_tool" }],
     };
-    const result = await runTransform(body, codexInstructions, undefined, true);
+    const result = await transformRequestBody(body, codexInstructions, undefined, true);
 
     // Should have bridge message + user message (OpenCode prompt filtered out)
     expect(result.input).toHaveLength(2);
-    expect(result.input?.[0].role).toBe("developer");
-    expect((result.input?.[0].content as any)[0].text).toContain("Codex in OpenCode");
-    expect(result.input?.[1].role).toBe("user");
+    expect(result.input![0].role).toBe("developer");
+    expect((result.input![0].content as any)[0].text).toContain("Codex in OpenCode");
+    expect(result.input![1].role).toBe("user");
   });
 
   it("should not add bridge message when codexMode=true but no tools", async () => {
@@ -1062,10 +1123,10 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: [{ type: "message", role: "user", content: "hello" }],
     };
-    const result = await runTransform(body, codexInstructions, undefined, true);
+    const result = await transformRequestBody(body, codexInstructions, undefined, true);
 
     expect(result.input).toHaveLength(1);
-    expect(result.input?.[0].role).toBe("user");
+    expect(result.input![0].role).toBe("user");
   });
 
   it("should use tool remap message when codexMode=false", async () => {
@@ -1074,11 +1135,11 @@ describe("runTransform", () => {
       input: [{ type: "message", role: "user", content: "hello" }],
       tools: [{ name: "test_tool" }],
     };
-    const result = await runTransform(body, codexInstructions, undefined, false);
+    const result = await transformRequestBody(body, codexInstructions, undefined, false);
 
     expect(result.input).toHaveLength(2);
-    expect(result.input?.[0].role).toBe("developer");
-    expect((result.input?.[0].content as any)[0].text).toContain("apply_patch");
+    expect(result.input![0].role).toBe("developer");
+    expect((result.input![0].content as any)[0].text).toContain("apply_patch");
   });
 
   it("should not filter OpenCode prompts when codexMode=false", async () => {
@@ -1094,14 +1155,14 @@ describe("runTransform", () => {
       ],
       tools: [{ name: "test_tool" }],
     };
-    const result = await runTransform(body, codexInstructions, undefined, false);
+    const result = await transformRequestBody(body, codexInstructions, undefined, false);
 
     // Should have tool remap + opencode prompt + user message
     expect(result.input).toHaveLength(3);
-    expect(result.input?.[0].role).toBe("developer");
-    expect((result.input?.[0].content as any)[0].text).toContain("apply_patch");
-    expect(result.input?.[1].role).toBe("developer");
-    expect(result.input?.[2].role).toBe("user");
+    expect(result.input![0].role).toBe("developer");
+    expect((result.input![0].content as any)[0].text).toContain("apply_patch");
+    expect(result.input![1].role).toBe("developer");
+    expect(result.input![2].role).toBe("user");
   });
 
   it("should default to codexMode=true when parameter not provided", async () => {
@@ -1111,11 +1172,11 @@ describe("runTransform", () => {
       tools: [{ name: "test_tool" }],
     };
     // Not passing codexMode parameter - should default to true
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     // Should use bridge message (codexMode=true by default)
-    expect(result.input?.[0].role).toBe("developer");
-    expect((result.input?.[0].content as any)[0].text).toContain("Codex in OpenCode");
+    expect(result.input![0].role).toBe("developer");
+    expect((result.input![0].content as any)[0].text).toContain("Codex in OpenCode");
   });
 });
@@ -1132,7 +1193,7 @@ describe("runTransform", () => {
       models: {},
     };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1147,7 +1208,7 @@ describe("runTransform", () => {
       input: [],
     };
 
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     expect(result.model).toBe("gpt-5"); // Normalized
     expect(result.reasoning?.effort).toBe("minimal"); // Lightweight default
@@ -1173,7 +1234,7 @@ describe("runTransform", () => {
       input: [],
     };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1188,7 +1249,7 @@ describe("runTransform", () => {
       input: [],
     };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1203,7 +1264,7 @@ describe("runTransform", () => {
       input: [],
     };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1228,7 +1289,7 @@ describe("runTransform", () => {
      input: [],
    };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1254,7 +1315,7 @@ describe("runTransform", () => {
      input: [],
    };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1267,7 +1328,7 @@ describe("runTransform", () => {
      input: [],
    };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1287,11 +1348,11 @@ describe("runTransform", () => {
       ],
     };
 
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
 
     // All items kept, ALL IDs removed
     expect(result.input).toHaveLength(4);
-    expect(result.input?.every((item) => !item.id)).toBe(true);
+    expect(result.input!.every((item) => !item.id)).toBe(true);
     expect(result.store).toBe(false); // Stateless mode
     expect(result.include).toEqual(["reasoning.encrypted_content"]);
   });
@@ -1321,7 +1382,7 @@ describe("runTransform", () => {
       tools: [{ name: "edit" }],
     };
 
-    const result = await runTransform(body, codexInstructions, userConfig, true, {
+    const result = await transformRequestBody(body, codexInstructions, userConfig, true, {
       preserveIds: false,
     });
@@ -1329,7 +1390,7 @@ describe("runTransform", () => {
     expect(result.model).toBe("gpt-5-codex");
 
     // IDs removed
-    expect(result.input?.every((item) => !item.id)).toBe(true);
+    expect(result.input!.every((item) => !item.id)).toBe(true);
 
     // Per-model options applied
     expect(result.reasoning?.effort).toBe("low");
@@ -1351,7 +1412,7 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.input).toEqual([]);
   });
@@ -1360,7 +1421,7 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: null as any,
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.input).toBeNull();
   });
@@ -1369,7 +1430,7 @@ describe("runTransform", () => {
       model: "gpt-5",
       input: undefined as any,
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.input).toBeUndefined();
   });
@@ -1383,7 +1444,7 @@ describe("runTransform", () => {
         { not: "a valid item" } as any,
       ],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.input).toHaveLength(4);
   });
@@ -1404,9 +1465,9 @@ describe("runTransform", () => {
         },
       ],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.input).toHaveLength(1);
-    expect(Array.isArray(result.input?.[0].content)).toBe(true);
+    expect(Array.isArray(result.input![0].content)).toBe(true);
   });
 
   it("should handle very long model names", async () => {
@@ -1414,7 +1475,7 @@ describe("runTransform", () => {
       model: "very-long-model-name-with-gpt-5-codex-and-extra-stuff",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.model).toBe("gpt-5-codex");
   });
@@ -1423,7 +1484,7 @@ describe("runTransform", () => {
       model: "gpt-5-codex@v1.0#beta",
       input: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.model).toBe("gpt-5-codex");
   });
@@ -1432,7 +1493,7 @@ describe("runTransform", () => {
      model: "",
      input: [],
    };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     expect(result.model).toBe("gpt-5.1");
   });
@@ -1445,7 +1506,7 @@ describe("runTransform", () => {
         summary: null as any,
       } as any,
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     // Should override with defaults
     expect(result.reasoning?.effort).toBe("medium");
     expect(result.reasoning?.summary).toBe("auto");
@@ -1459,7 +1520,7 @@ describe("runTransform", () => {
         verbosity: "invalid" as any,
       } as any,
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     // Should override with defaults
     expect(result.text?.verbosity).toBe("medium");
   });
@@ -1470,7 +1531,7 @@ describe("runTransform", () => {
       input: [],
       include: ["invalid", "field", null as any, undefined as any],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     // Should override with defaults
     expect(result.include).toEqual(["reasoning.encrypted_content"]);
   });
@@ -1486,7 +1547,7 @@ describe("runTransform", () => {
       applyRequest: () => null,
     } as any;
 
-    const result = await runTransform(
+    const result = await transformRequestBody(
       body,
       codexInstructions,
       undefined,
@@ -1505,10 +1566,10 @@ describe("runTransform", () => {
       input: [{ type: "message", role: "user", content: "test" }],
       tools: [null, undefined, { name: "valid_tool" }, "not an object" as any],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     // Should still add bridge message since tools array exists
     expect(result.input).toHaveLength(2);
-    expect(result.input?.[0].role).toBe("developer");
+    expect(result.input![0].role).toBe("developer");
   });
 
   it("should handle empty tools array", async () => {
@@ -1517,10 +1578,10 @@ describe("runTransform", () => {
       input: [{ type: "message", role: "user", content: "test" }],
       tools: [],
     };
-    const result = await runTransform(body, codexInstructions);
+    const result = await transformRequestBody(body, codexInstructions);
     // Should not add bridge message for empty tools array
     expect(result.input).toHaveLength(1);
-    expect(result.input?.[0].role).toBe("user");
+    expect(result.input![0].role).toBe("user");
   });
 
   it("should handle metadata
edge cases", async () => { @@ -1533,14 +1594,14 @@ describe("runTransform", () => { nested: { id: "value" }, }, }; - const result1 = await runTransform(body, codexInstructions); + const result1 = await transformRequestBody(body, codexInstructions); const firstKey = result1.prompt_cache_key; // Should generate fallback cache key expect(typeof firstKey).toBe("string"); expect(firstKey).toMatch(/^cache_/); // Second transform of the same body should reuse the existing key - const result2 = await runTransform(body, codexInstructions); + const result2 = await transformRequestBody(body, codexInstructions); expect(result2.prompt_cache_key).toBe(firstKey); }); @@ -1550,8 +1611,8 @@ describe("runTransform", () => { model: "gpt-5", input: [{ type: "message", role: "user", content: longContent }], }; - const result = await runTransform(body, codexInstructions); - expect(result.input?.[0].content).toBe(longContent); + const result = await transformRequestBody(body, codexInstructions); + expect(result.input![0].content).toBe(longContent); }); it("should handle unicode content", async () => { @@ -1560,8 +1621,8 @@ describe("runTransform", () => { model: "gpt-5", input: [{ type: "message", role: "user", content: unicodeContent }], }; - const result = await runTransform(body, codexInstructions); - expect(result.input?.[0].content).toBe(unicodeContent); + const result = await transformRequestBody(body, codexInstructions); + expect(result.input![0].content).toBe(unicodeContent); }); }); });