Merged
23 commits
5717706
docs: update AGENTS.md for gpt-5.1-codex-max support
riatzukiza Nov 19, 2025
41755ac
chore: add v3.3.0 changelog entry for gpt-5.1-codex-max
riatzukiza Nov 19, 2025
9c7fb71
docs: update README for gpt-5.1-codex-max integration
riatzukiza Nov 19, 2025
8536bf1
config: add gpt-5.1-codex-max to full-opencode.json
riatzukiza Nov 19, 2025
564bac7
config: update minimal-opencode.json default to gpt-5.1-codex-max
riatzukiza Nov 19, 2025
2f2d238
docs: update CONFIG_FIELDS.md for gpt-5.1-codex-max
riatzukiza Nov 19, 2025
266606b
docs: add persistent logging note to TESTING.md
riatzukiza Nov 19, 2025
9175628
feat: implement persistent rolling logging in logger.ts
riatzukiza Nov 19, 2025
9cac2c2
feat: add gpt-5.1-codex-max support to request transformer
riatzukiza Nov 19, 2025
88008a9
types: add xhigh reasoning effort to TypeScript interfaces
riatzukiza Nov 19, 2025
b309387
test: add gpt-5.1-codex-max to test-all-models.sh
riatzukiza Nov 19, 2025
c452368
test: fix codex-fetcher test headers mock
riatzukiza Nov 19, 2025
3ee37ef
test: update logger tests for persistent rolling logging
riatzukiza Nov 19, 2025
3976d2e
test: add appendFileSync mock to plugin-config tests
riatzukiza Nov 19, 2025
b5a2683
test: add appendFileSync mock to prompts-codex tests
riatzukiza Nov 19, 2025
6f8cf66
test: add comprehensive fs mocks to prompts-opencode-codex tests
riatzukiza Nov 19, 2025
bd06f6e
test: add comprehensive gpt-5.1-codex-max test coverage
riatzukiza Nov 19, 2025
d451b3d
docs: add specification files for gpt-5.1-codex-max and persistent lo…
riatzukiza Nov 19, 2025
4406707
merge: resolve conflicts with staging branch
riatzukiza Nov 19, 2025
e3144f8
fix failing tests
riatzukiza Nov 19, 2025
774bcfa
fixed minor type error
riatzukiza Nov 19, 2025
4b57476
test: remove redundant env reset and header mock
riatzukiza Nov 19, 2025
c9b80f8
Reduce console logging to debug flag
riatzukiza Nov 19, 2025
4 changes: 3 additions & 1 deletion AGENTS.md
@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,

## Overview

This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5-codex`, `gpt-5-codex-mini`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits.
This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It now mirrors the Codex CLI lineup, making `gpt-5.1-codex-max` (with optional `xhigh` reasoning) the default alongside the existing `gpt-5.1-codex`, `gpt-5.1-codex-mini`, and legacy `gpt-5` models—all available through a ChatGPT subscription instead of OpenAI Platform API credits.

**Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.
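
The flow can be pictured as a single custom `fetch` wrapper. The sketch below is illustrative only and collapses the seven steps into three; `getAccessToken` and `transformForChatGPT` are hypothetical helper names, not the plugin's actual exports.

```typescript
// Illustrative sketch only; the real plugin splits this into a 7-step flow.
// getAccessToken and transformForChatGPT are assumed names, not real exports.
declare function getAccessToken(): Promise<string>; // OAuth lookup + refresh (assumed)
declare function transformForChatGPT(
  input: RequestInfo | URL,
  init?: RequestInit,
): { url: string; body: BodyInit; headers: Record<string, string> }; // request rewrite (assumed)

export const codexFetch: typeof fetch = async (input, init) => {
  const token = await getAccessToken(); // keep the ChatGPT OAuth token fresh
  const { url, body, headers } = transformForChatGPT(input, init); // rewrite model, body, headers
  return fetch(url, {
    ...init,
    method: "POST",
    body,
    headers: { ...headers, Authorization: `Bearer ${token}` },
  });
};
```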

@@ -157,6 +157,8 @@ This plugin **intentionally differs from opencode defaults** because it accesses
| `store` | true | false | Required for ChatGPT backend |
| `include` | (not set) | `["reasoning.encrypted_content"]` | Required for stateless operation |

> **Extra High reasoning**: `reasoningEffort: "xhigh"` is only honored for `gpt-5.1-codex-max`. Other models automatically downgrade it to `high` so their API calls remain valid.
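
For illustration, that clamping rule can be written as a tiny normalizer. This is a hedged sketch; `normalizeReasoningEffort` is a hypothetical name, and the real transformer may match model aliases differently.

```typescript
// Hedged sketch of the rule above; function name and model matching are assumptions.
type ReasoningEffort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function normalizeReasoningEffort(model: string, effort: ReasoningEffort): ReasoningEffort {
  if (effort === "xhigh" && model !== "gpt-5.1-codex-max") {
    return "high"; // only Codex Max accepts xhigh; everything else downgrades
  }
  return effort;
}

// normalizeReasoningEffort("gpt-5.1-codex", "xhigh")     => "high"
// normalizeReasoningEffort("gpt-5.1-codex-max", "xhigh") => "xhigh"
```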

## File Paths & Locations

- **Plugin config**: `~/.opencode/openhax-codex-config.json`
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,17 @@

All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

## [3.3.0] - 2025-11-19
### Added
- Codex Max support that mirrors the Codex CLI: normalization for every `gpt-5.1-codex-max` alias, `reasoningEffort: "xhigh"` support, and unit tests covering both the transformer and the request-body integration path.
- Documentation and configuration updates calling out Codex Max as the flagship preset, plus refreshed samples showing how to opt into the Extra High reasoning mode.

### Changed
- Sample configs (`full` + `minimal`), README tables, AGENTS.md, and the diagnostics script now prefer `gpt-5.1-codex-max`, keeping plugin defaults aligned with Codex CLI behaviour.

### Fixed
- Requests that specify `reasoningEffort: "xhigh"` for models that don't support it are now automatically downgraded to `high`, preventing API errors when Codex Max isn't selected.

## [3.2.0] - 2025-11-13
### Added
- GPT-5.1 family integration: normalization for `gpt-5.1`/`gpt-5.1-codex`/`gpt-5.1-codex-mini`, expanded reasoning heuristics (including `reasoningEffort: "none"`), and preservation of the native `shell`/`apply_patch` tools emitted by Codex CLI.
27 changes: 23 additions & 4 deletions README.md
@@ -93,6 +93,22 @@ For the complete experience with all reasoning variants matching the official Co
"store": false
},
"models": {
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-low": {
"name": "GPT 5.1 Codex Low (OAuth)",
"limit": {
@@ -422,7 +438,7 @@ For the complete experience with all reasoning variants matching the official Co
**Global config**: `~/.config/opencode/opencode.json`
**Project config**: `<project>/.opencode.json`

This now gives you 20 model variants: the new GPT-5.1 lineup (recommended) plus every legacy gpt-5 preset for backwards compatibility.
This now gives you 21 model variants: the refreshed GPT-5.1 lineup (with Codex Max as the default) plus every legacy gpt-5 preset for backwards compatibility.

All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc.

Expand All @@ -434,6 +450,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t

| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|--------------|------------------|-----------------|----------|
| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | Medium (Extra High optional) | Default flagship tier with optional `xhigh` reasoning for long, complex runs |
| `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation on the newest Codex tier |
| `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code + tooling workflows |
| `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Multi-step coding tasks with deep tool use |
@@ -444,6 +461,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
| `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work |
| `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most |

> **Extra High reasoning:** `reasoningEffort: "xhigh"` is exclusive to `gpt-5.1-codex-max`. Other models automatically map that option to `high` so their API calls remain valid.

#### Legacy GPT-5 lineup (still supported)

| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
@@ -505,7 +524,7 @@ These defaults match the official Codex CLI behavior and can be customized (see
### Recommended: Use Pre-Configured File

The easiest way to get started is to use [`config/full-opencode.json`](./config/full-opencode.json), which provides:
- 20 pre-configured model variants matching the latest Codex CLI presets (GPT-5.1 + GPT-5)
- 21 pre-configured model variants matching the latest Codex CLI presets (GPT-5.1 Codex Max + GPT-5.1 + GPT-5)
- Optimal settings for each reasoning level
- All variants visible in the opencode model selector

@@ -521,12 +540,12 @@ If you want to customize settings yourself, you can configure options at provide

| Setting | GPT-5 / GPT-5.1 Values | GPT-5-Codex / Codex Mini Values | Plugin Default |
|---------|-------------|-------------------|----------------|
| `reasoningEffort` | `none`, `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `medium` |
| `reasoningEffort` | `none`, `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh`* | `medium` |
| `reasoningSummary` | `auto`, `detailed` | `auto`, `detailed` | `auto` |
| `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` |
| `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |

> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`.
> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`. `xhigh` is exclusive to `gpt-5.1-codex-max`—other Codex presets automatically map it to `high`.

#### Plugin-Level Settings

16 changes: 16 additions & 0 deletions config/full-opencode.json
@@ -15,6 +15,22 @@
"store": false
},
"models": {
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-low": {
"name": "GPT 5.1 Codex Low (OAuth)",
"limit": {
2 changes: 1 addition & 1 deletion config/minimal-opencode.json
@@ -8,5 +8,5 @@
}
}
},
"model": "openai/gpt-5.1-codex"
"model": "openai/gpt-5.1-codex-max"
}
7 changes: 7 additions & 0 deletions docs/configuration.md
@@ -368,6 +368,13 @@ Advanced plugin settings in `~/.opencode/openhax-codex-config.json`:
}
```

### Log file management

Control how much disk the per-request and rolling logs consume; a sketch of the rotation logic follows the list:
- `CODEX_LOG_MAX_BYTES` (default: 5_242_880) - rotate when the rolling log exceeds this many bytes.
- `CODEX_LOG_MAX_FILES` (default: 5) - number of rotated log files to retain (plus the active log).
- `CODEX_LOG_QUEUE_MAX` (default: 1000) - maximum buffered log entries before oldest entries are dropped.
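
A simplified sketch of the size-based roll-over these variables drive. File naming and structure are assumptions rather than the exact `logger.ts` implementation, and `CODEX_LOG_QUEUE_MAX` (the in-memory queue cap) is not modelled here.

```typescript
// Simplified sketch of size-based rotation; names and layout are assumptions.
import { existsSync, renameSync, statSync, unlinkSync } from "node:fs";

const MAX_BYTES = Number(process.env.CODEX_LOG_MAX_BYTES ?? 5_242_880);
const MAX_FILES = Number(process.env.CODEX_LOG_MAX_FILES ?? 5);

function rotateIfNeeded(logPath: string): void {
  if (!existsSync(logPath) || statSync(logPath).size < MAX_BYTES) return;
  const oldest = `${logPath}.${MAX_FILES}`;
  if (existsSync(oldest)) unlinkSync(oldest); // drop the oldest rotated file
  for (let i = MAX_FILES - 1; i >= 1; i--) { // shift .1 -> .2, .2 -> .3, ...
    const from = `${logPath}.${i}`;
    if (existsSync(from)) renameSync(from, `${logPath}.${i + 1}`);
  }
  renameSync(logPath, `${logPath}.1`); // start a fresh active log
}
```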

### CODEX_MODE

**What it does:**
34 changes: 7 additions & 27 deletions docs/development/CONFIG_FIELDS.md
@@ -285,6 +285,11 @@ const parsedModel: ModelsDev.Model = {

```json
{
"gpt-5.1-codex-max": {
"id": "gpt-5.1-codex-max",
"name": "GPT 5.1 Codex Max (OAuth)",
"options": { "reasoningEffort": "medium" }
},
"gpt-5.1-codex-low": {
"id": "gpt-5.1-codex",
"name": "GPT 5.1 Codex Low (OAuth)",
@@ -301,36 +306,11 @@ const parsedModel: ModelsDev.Model = {
**Why this matters:**
- Config keys mirror the Codex CLI's 5.1 presets, making it obvious which tier you're targeting.
- `reasoningEffort: "none"` is only valid for GPT-5.1 general models—the plugin automatically downgrades unsupported values for Codex/Codex Mini.
- Legacy GPT-5 entries can stick around for backwards compatibility, but new installs should prefer the 5.1 naming.

---

### Example 4: If We Made Config Key = ID ❌

```json
{
"gpt-5-codex": {
"id": "gpt-5-codex",
"name": "GPT 5 Codex Low (OAuth)",
"options": { "reasoningEffort": "low" }
},
"gpt-5-codex": { // ❌ DUPLICATE KEY ERROR!
"id": "gpt-5-codex",
"name": "GPT 5 Codex High (OAuth)",
"options": { "reasoningEffort": "high" }
}
}
```

**Problem:** JavaScript objects can't have duplicate keys!

**Result:** ❌ Can't have multiple variants

### Reasoning Effort quick notes
- `reasoningEffort: "none"` is exclusive to GPT-5.1 general models and maps to the new "no reasoning" mode introduced by OpenAI.
- `reasoningEffort: "xhigh"` is exclusive to `gpt-5.1-codex-max`; other models automatically clamp it to `high`.
- Legacy GPT-5, GPT-5-Codex, and Codex Mini presets automatically clamp unsupported values (`none` → `minimal`/`low`, `minimal` → `low` for Codex).
- Mixing GPT-5.1 and GPT-5 presets inside the same config is fine—just keep config keys unique and let the plugin normalize them.


---

## Why We Need Different Config Keys
2 changes: 2 additions & 0 deletions docs/development/TESTING.md
@@ -2,6 +2,8 @@

Comprehensive testing matrix for all config scenarios and backwards compatibility.

> **Logging note:** All test runs and plugin executions now write per-request JSON files plus a rolling `codex-plugin.log` under `~/.opencode/logs/codex-plugin/`. Set `ENABLE_PLUGIN_REQUEST_LOGGING=1` or `DEBUG_CODEX_PLUGIN=1` if you also want live console output in addition to the files.
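
The console flags act as a simple gate on top of the always-on file logging. A hedged sketch, assuming the flags are read as plain `"1"` strings (the real logger may differ):

```typescript
// Assumed gating for the console flags above; file logging always runs regardless.
const consoleEnabled =
  process.env.ENABLE_PLUGIN_REQUEST_LOGGING === "1" ||
  process.env.DEBUG_CODEX_PLUGIN === "1";

function debugLog(message: string): void {
  // Per-request JSON files and the rolling codex-plugin.log are written elsewhere;
  // console output is opt-in via the flags above.
  if (consoleEnabled) console.log(`[codex-plugin] ${message}`);
}
```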

## Test Scenarios Matrix

### Scenario 1: Default OpenCode Models (No Custom Config)