OpenAI Codex CLI Settings and Custom Prompts

A curated collection of configurations and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.

For Claude Code settings, agents, and custom commands, please refer to feiskyer/claude-code-settings.

Overview

This repository provides:

  • Flexible Configuration: Support for multiple model providers (LiteLLM/Copilot proxy, ChatGPT subscription, Azure OpenAI, OpenRouter, ModelScope, Kimi)
  • Custom Prompts: Reusable prompt templates for common development tasks
  • Best Practices: Pre-configured settings optimized for development workflows
  • Easy Setup: Simple installation and configuration process

Quick Start

Installation

# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak

# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex

# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex

Basic Configuration

The default config.toml uses LiteLLM as a gateway. To use it:

  1. Install LiteLLM and Codex CLI:

    pip install -U 'litellm[proxy]'
    npm install -g @openai/codex
  2. Create a LiteLLM config file (see litellm_config.yaml for a full example):

    general_settings:
      master_key: sk-dummy
    litellm_settings:
      drop_params: true
    model_list:
    - model_name: gpt-5
      litellm_params:
        model: github_copilot/gpt-5
        extra_headers:
          editor-version: "vscode/1.104.3"
          editor-plugin-version: "copilot-chat/0.26.7"
          Copilot-Integration-Id: "vscode-chat"
          user-agent: "GitHubCopilotChat/0.26.7"
          x-github-api-version: "2025-04-01"
  3. Start LiteLLM proxy:

    litellm --config ~/.codex/litellm_config.yaml
    # Runs on http://localhost:4000 by default
  4. Run Codex:

    codex

Configuration Files

Main Configuration

  • config.toml: Default configuration using LiteLLM gateway
    • Model: gpt-5 via model_provider = "github" (Copilot proxy on http://localhost:4000)
    • Approval policy: on-request; reasoning summary: detailed; reasoning effort: high; raw agent reasoning visible
    • MCP servers: claude (local), exa (hosted), chrome (DevTools over npx)
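
For reference, the provider wiring looks roughly like the sketch below (key names follow the Codex CLI config.toml schema; the exact values in this repository's config.toml may differ):

model = "gpt-5"
model_provider = "github"

[model_providers.github]
name = "LiteLLM (Copilot proxy)"
base_url = "http://localhost:4000/v1"  # LiteLLM proxy's default listen address
env_key = "LITELLM_API_KEY"            # env var holding the key (master_key above)
wire_api = "chat"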

Alternative Configurations

Alternative provider configurations live in the configs/ directory.

To use one, copy it over the main config.toml:

# Example: switch to the ChatGPT subscription configuration
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex

Custom Prompts

Custom prompts are stored in the prompts/ directory. Access them via the /prompts: slash menu in Codex.

  • /prompts:deep-reflector - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
  • /prompts:insight-documenter [breakthrough] - Capture and document significant technical breakthroughs into reusable knowledge assets.
  • /prompts:instruction-reflector - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
  • /prompts:github-issue-fixer [issue-number] - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
  • /prompts:github-pr-reviewer [pr-number] - Perform thorough GitHub pull request code analysis and review.
  • /prompts:ui-engineer [requirements] - Create production-ready frontend solutions with modern UI/UX standards.
  • /prompts:prompt-creator [requirements] - Create Codex custom prompts with proper structure and best practices.

Creating Custom Prompts

  1. Create a new .md file in ~/.codex/prompts/
  2. Use argument placeholders:
    • $1 to $9: Positional arguments
    • $ARGUMENTS: All arguments joined by spaces
    • $$: Literal dollar sign
  3. Restart Codex to load new prompts
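
A minimal example (hypothetical file ~/.codex/prompts/fix-tests.md):

Run the test suite and fix the failing test named $1.
Additional instructions from the user: $ARGUMENTS

After restarting Codex, this would be invoked as /prompts:fix-tests <test-name>, followed by any extra instructions.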

Configuration Options

Approval Policies

  • untrusted: Prompt for untrusted commands (recommended)
  • on-failure: Only prompt when sandbox commands fail
  • on-request: Model decides when to ask
  • never: Auto-approve all commands (use with caution)
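
The policy is set at the top level of config.toml, for example:

approval_policy = "untrusted"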

Sandbox Modes

  • read-only: Can read files, no writes or network
  • workspace-write: Can write to workspace, network configurable
  • danger-full-access: Full system access (use in containers only)
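
The sandbox is configured the same way; the nested table for the network toggle is a sketch based on the Codex CLI config schema (verify against your version):

sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = false  # set to true to allow network access inside the sandbox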

Reasoning Settings

For reasoning-capable models (o3, gpt-5):

  • Effort: minimal, low, medium, high
  • Summary: auto, concise, detailed, none
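
These correspond to the model_reasoning_* keys in config.toml, matching the defaults described above:

model_reasoning_effort = "high"
model_reasoning_summary = "detailed"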

Shell Environment

Control which environment variables are passed to subprocesses:

[shell_environment_policy]
inherit = "all"  # all, core, none
exclude = ["AWS_*", "AZURE_*"]  # Exclude patterns
set = { CI = "1" }  # Force-set values

Advanced Features

Profiles

Define multiple configuration profiles:

[profiles.fast]
model = "gpt-4o-mini"
approval_policy = "never"
model_reasoning_effort = "low"

[profiles.reasoning]
model = "o3"
approval_policy = "on-failure"
model_reasoning_effort = "high"

Use with: codex --profile reasoning

MCP Servers

Extend Codex with Model Context Protocol servers:

[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp@latest"]

[mcp_servers.claude]
command = "claude"
args = ["mcp", "serve"]

Project Documentation

Codex automatically reads AGENTS.md files in your project to understand its context. Create one in your project root by running the /init command during your first Codex session.
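
What goes in the file is up to you; an entirely hypothetical example:

# Project guidance for Codex
- Build with `make build`; run tests with `make test`.
- Follow the existing code style and run the linter before finishing a task.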

Contributing

Contributions are welcome! Feel free to:

  • Add new custom prompts
  • Share alternative configurations
  • Improve documentation
  • Report issues and suggest features

License

This project is released under the MIT License; see LICENSE for details.
