A curated collection of configurations and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.
For Claude Code settings, agents, and custom commands, please refer to feiskyer/claude-code-settings.
This repository provides:
- Flexible Configuration: Support for multiple model providers (LiteLLM/Copilot proxy, ChatGPT subscription, Azure OpenAI, OpenRouter, ModelScope, Kimi)
- Custom Prompts: Reusable prompt templates for common development tasks
- Best Practices: Pre-configured settings optimized for development workflows
- Easy Setup: Simple installation and configuration process
```sh
# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak

# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex

# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex
```

The default `config.toml` uses LiteLLM as a gateway. To use it:
1. Install LiteLLM and Codex CLI:

   ```sh
   pip install -U 'litellm[proxy]'
   npm install -g @openai/codex
   ```
2. Create a LiteLLM config file (full example: `litellm_config.yaml`):

   ```yaml
   general_settings:
     master_key: sk-dummy

   litellm_settings:
     drop_params: true

   model_list:
     - model_name: gpt-5
       litellm_params:
         model: github_copilot/gpt-5
         extra_headers:
           editor-version: "vscode/1.104.3"
           editor-plugin-version: "copilot-chat/0.26.7"
           Copilot-Integration-Id: "vscode-chat"
           user-agent: "GitHubCopilotChat/0.26.7"
           x-github-api-version: "2025-04-01"
   ```
3. Start the LiteLLM proxy:

   ```sh
   litellm --config ~/.codex/litellm_config.yaml  # Runs on http://localhost:4000 by default
   ```
4. Run Codex:

   ```sh
   codex
   ```
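Before launching Codex, you can sanity-check that the proxy is up by querying its OpenAI-compatible models endpoint (this assumes the `sk-dummy` master key from the config above):

```sh
curl http://localhost:4000/v1/models -H "Authorization: Bearer sk-dummy"
```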
- `config.toml`: Default configuration using the LiteLLM gateway
  - Model: `gpt-5` via `model_provider = "github"` (Copilot proxy on http://localhost:4000)
  - Approval policy: `on-request`; reasoning summary: `detailed`; reasoning effort: `high`; raw agent reasoning visible
  - MCP servers: `claude` (local), `exa` (hosted), `chrome` (DevTools over `npx`)
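In `config.toml` form, those defaults correspond roughly to the following sketch (key names follow the Codex CLI config schema; the provider block is illustrative and may differ in detail from the shipped file):

```toml
model = "gpt-5"
model_provider = "github"
approval_policy = "on-request"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
show_raw_agent_reasoning = true

[model_providers.github]
name = "GitHub Copilot (LiteLLM proxy)"
base_url = "http://localhost:4000"
env_key = "LITELLM_API_KEY"  # assumption: the actual env var name may differ
```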
Alternative configurations are located in the `configs/` directory:

- OpenAI ChatGPT: Use the ChatGPT subscription provider
- Azure OpenAI: Use the Azure OpenAI service provider
- GitHub Copilot: Use GitHub Copilot via the LiteLLM proxy
- OpenRouter: Use the OpenRouter provider
- ModelScope: Use the ModelScope provider
- Kimi: Use the Moonshot Kimi provider
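Each of these swaps in a different `[model_providers.*]` block. For instance, the Azure variant would define a provider along these lines (a sketch based on the standard Codex config schema; the resource name and API version are placeholders):

```toml
[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR_RESOURCE_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```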
To use an alternative config:

```sh
# Take ChatGPT for example
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex
```

Custom prompts are stored in the `prompts/` directory. Access them via the `/prompts:` slash menu in Codex.
- `/prompts:deep-reflector` - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
- `/prompts:insight-documenter [breakthrough]` - Capture and document significant technical breakthroughs into reusable knowledge assets.
- `/prompts:instruction-reflector` - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
- `/prompts:github-issue-fixer [issue-number]` - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
- `/prompts:github-pr-reviewer [pr-number]` - Perform thorough GitHub pull request code analysis and review.
- `/prompts:ui-engineer [requirements]` - Create production-ready frontend solutions with modern UI/UX standards.
- `/prompts:prompt-creator [requirements]` - Create Codex custom prompts with proper structure and best practices.
To create your own prompts:

- Create a new `.md` file in `~/.codex/prompts/`
- Use argument placeholders (see the example after this list):
  - `$1` to `$9`: Positional arguments
  - `$ARGUMENTS`: All arguments joined by spaces
  - `$$`: Literal dollar sign
- Restart Codex to load new prompts
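For illustration, a hypothetical `~/.codex/prompts/issue-triager.md` (the file name and wording are made up; only the placeholder syntax comes from the rules above) could look like:

```markdown
Triage GitHub issue $1 in this repository.

Steps:
1. Read the issue and summarize the problem.
2. Label severity and suggest a fix plan.

Additional context from the user: $ARGUMENTS
(Write $$ wherever a literal dollar sign is needed.)
```

It would then be invoked as `/prompts:issue-triager <issue-number> [notes...]`.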
Approval policies control when Codex asks for confirmation before running commands (see the config sketch after the sandbox list):

- `untrusted`: Prompt for untrusted commands (recommended)
- `on-failure`: Only prompt when sandboxed commands fail
- `on-request`: Model decides when to ask
- `never`: Auto-approve all commands (use with caution)
Sandbox modes control what those commands can touch:

- `read-only`: Can read files, no writes or network
- `workspace-write`: Can write to the workspace; network access is configurable
- `danger-full-access`: Full system access (use in containers only)
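Both are plain keys in `config.toml`. A minimal sketch (the `[sandbox_workspace_write]` sub-table is an assumption based on the standard Codex config schema):

```toml
approval_policy = "untrusted"     # untrusted | on-failure | on-request | never
sandbox_mode = "workspace-write"  # read-only | workspace-write | danger-full-access

# Network access under workspace-write is off unless explicitly enabled
[sandbox_workspace_write]
network_access = true
```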
For reasoning-capable models (o3, gpt-5):

- Effort: `minimal`, `low`, `medium`, `high`
- Summary: `auto`, `concise`, `detailed`, `none`
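These map to the reasoning keys in `config.toml`, for example:

```toml
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
```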
Control which environment variables are passed to subprocesses:

```toml
[shell_environment_policy]
inherit = "all"                 # all, core, none
exclude = ["AWS_*", "AZURE_*"]  # Exclude patterns
set = { CI = "1" }              # Force-set values
```

Define multiple configuration profiles:
```toml
[profiles.fast]
model = "gpt-4o-mini"
approval_policy = "never"
model_reasoning_effort = "low"

[profiles.reasoning]
model = "o3"
approval_policy = "on-failure"
model_reasoning_effort = "high"
```

Use with: `codex --profile reasoning`
Extend Codex with Model Context Protocol servers:

```toml
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp@latest"]

[mcp_servers.claude]
command = "claude"
args = ["mcp", "serve"]
```

Codex automatically reads AGENTS.md files in your project to understand its context. Always create one in your project root with the `/init` command on your first Codex run.
Contributions welcome! Feel free to:
- Add new custom prompts
- Share alternative configurations
- Improve documentation
- Report issues and suggest features
This project is released under the MIT License; see LICENSE for details.