Merged
2 changes: 1 addition & 1 deletion codex-rs/core/src/models_manager/manager.rs
@@ -29,7 +29,7 @@ use crate::models_manager::model_presets::builtin_model_presets;
 const MODEL_CACHE_FILE: &str = "models_cache.json";
 const DEFAULT_MODEL_CACHE_TTL: Duration = Duration::from_secs(300);
 const MODELS_REFRESH_TIMEOUT: Duration = Duration::from_secs(5);
-const OPENAI_DEFAULT_API_MODEL: &str = "gpt-5.1-codex-max";
+const OPENAI_DEFAULT_API_MODEL: &str = "gpt-5.2-codex";
 const OPENAI_DEFAULT_CHATGPT_MODEL: &str = "gpt-5.2-codex";
 const CODEX_AUTO_BALANCED_MODEL: &str = "codex-auto-balanced";
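The constant swap above can be sketched as follows. This is a minimal illustration, not code from the PR: the `AuthMode` enum and `default_model_for` helper are hypothetical, showing how per-auth-mode default constants like the two in manager.rs are typically consulted.

```rust
// Constants match the diff; the enum and helper below are illustrative only.
const OPENAI_DEFAULT_API_MODEL: &str = "gpt-5.2-codex";
const OPENAI_DEFAULT_CHATGPT_MODEL: &str = "gpt-5.2-codex";

/// Hypothetical auth mode distinguishing API-key users from ChatGPT login.
enum AuthMode {
    ApiKey,
    ChatGpt,
}

/// Pick the default model for the active auth mode. After this PR both
/// modes resolve to the same model, gpt-5.2-codex.
fn default_model_for(mode: AuthMode) -> &'static str {
    match mode {
        AuthMode::ApiKey => OPENAI_DEFAULT_API_MODEL,
        AuthMode::ChatGpt => OPENAI_DEFAULT_CHATGPT_MODEL,
    }
}
```

Before this change the two constants diverged, so API-key users defaulted to gpt-5.1-codex-max while ChatGPT users already got gpt-5.2-codex.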
2 changes: 1 addition & 1 deletion codex-rs/core/src/models_manager/model_presets.rs
@@ -39,7 +39,7 @@ static PRESETS: Lazy<Vec<ModelPreset>> = Lazy::new(|| {
         is_default: true,
         upgrade: None,
         show_in_picker: true,
-        supported_in_api: false,
+        supported_in_api: true,
     },
     ModelPreset {
         id: "gpt-5.1-codex-max".to_string(),
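Flipping `supported_in_api` to `true` is what makes the preset appear for API-key users, as the updated `expected_models_for_api_key` test below reflects. A minimal sketch of that gating, using a simplified hypothetical `ModelPreset` struct (field names mirror the diff; everything else is illustrative):

```rust
/// Simplified stand-in for the real preset type; only the fields
/// relevant to API-key filtering are kept.
#[derive(Clone)]
struct ModelPreset {
    id: String,
    is_default: bool,
    supported_in_api: bool,
}

/// Return only the presets usable with an API key, default model first.
fn presets_for_api_key(all: &[ModelPreset]) -> Vec<ModelPreset> {
    let mut out: Vec<ModelPreset> = all
        .iter()
        .filter(|p| p.supported_in_api)
        .cloned()
        .collect();
    // Stable sort: the default preset moves to the front, the rest
    // keep their declaration order.
    out.sort_by_key(|p| !p.is_default);
    out
}
```

With `supported_in_api: false`, gpt-5.2-codex would be filtered out of this list even though it is marked `is_default: true`.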
5 changes: 3 additions & 2 deletions codex-rs/core/tests/suite/list_models.rs
@@ -49,6 +49,7 @@ async fn list_models_returns_chatgpt_models() -> Result<()> {

 fn expected_models_for_api_key() -> Vec<ModelPreset> {
     vec![
+        gpt_52_codex(),
         gpt_5_1_codex_max(),
         gpt_5_1_codex_mini(),
         gpt_5_2(),
@@ -108,7 +109,7 @@ fn gpt_52_codex() -> ModelPreset {
         is_default: true,
         upgrade: None,
         show_in_picker: true,
-        supported_in_api: false,
+        supported_in_api: true,
     }
 }

@@ -137,7 +138,7 @@ fn gpt_5_1_codex_max() -> ModelPreset {
             "Extra high reasoning depth for complex problems",
         ),
     ],
-    is_default: true,
+    is_default: false,
     upgrade: Some(gpt52_codex_upgrade()),
     show_in_picker: true,
     supported_in_api: true,
@@ -5,11 +5,12 @@ expression: popup
 Select Model and Effort
 Access legacy models by running codex -m <model_name> or in your config.toml

-› 1. gpt-5.1-codex-max (default)  Codex-optimized flagship for deep and fast
-     reasoning.
-  2. gpt-5.1-codex-mini  Optimized for codex. Cheaper, faster, but
-     less capable.
-  3. gpt-5.2  Latest frontier model with improvements
-     across knowledge, reasoning and coding
+› 1. gpt-5.2-codex (default)  Latest frontier agentic coding model.
+  2. gpt-5.1-codex-max  Codex-optimized flagship for deep and fast
+     reasoning.
+  3. gpt-5.1-codex-mini  Optimized for codex. Cheaper, faster, but less
+     capable.
+  4. gpt-5.2  Latest frontier model with improvements across
+     knowledge, reasoning and coding

 Press enter to select reasoning effort, or esc to dismiss.
@@ -5,11 +5,12 @@ expression: popup
 Select Model and Effort
 Access legacy models by running codex -m <model_name> or in your config.toml

-› 1. gpt-5.1-codex-max (default)  Codex-optimized flagship for deep and fast
-     reasoning.
-  2. gpt-5.1-codex-mini  Optimized for codex. Cheaper, faster, but
-     less capable.
-  3. gpt-5.2  Latest frontier model with improvements
-     across knowledge, reasoning and coding
+› 1. gpt-5.2-codex (default)  Latest frontier agentic coding model.
+  2. gpt-5.1-codex-max  Codex-optimized flagship for deep and fast
+     reasoning.
+  3. gpt-5.1-codex-mini  Optimized for codex. Cheaper, faster, but less
+     capable.
+  4. gpt-5.2  Latest frontier model with improvements across
+     knowledge, reasoning and coding

 Press enter to select reasoning effort, or esc to dismiss.