What it is
Codex CLI is OpenAI's official command-line agent for coding, released in 2025 and maintained on the openai/codex GitHub repo under Apache-2.0. It runs locally in your terminal, hooks into ChatGPT Pro/Plus or API auth, and uses OpenAI's reasoning models (GPT-5 family + the o-series) with full file-edit, shell, and patch tooling.
Codex CLI is also the tool that popularized the AGENTS.md convention — a vendor-neutral counterpart to Claude's CLAUDE.md. AGENTS.md is now adopted by multiple agents (Aider, Continue.dev, soon Gemini CLI), making your project-level instructions portable.
It's the tool to reach for when:
- You already pay for ChatGPT Pro/Plus and want to use that quota for terminal coding.
- You want a vendor-neutral instruction file that other agents will respect.
- You like OpenAI's reasoning model strengths on bug-finding and refactor-planning passes.
Why it stands out
| Feature | What it gives you |
|---|---|
| ChatGPT auth | Sign in with your ChatGPT Pro/Plus account — no separate API key needed |
| AGENTS.md | Vendor-neutral instruction file, supported across agents |
| Apache-2.0 | Same OSI license as Gemini CLI; you can fork freely |
| MCP support | Added late 2025; Codex CLI now reads ~/.codex/mcp.json |
| Approval modes | --auto-edit, --full-auto, --read-only for graduated trust |
| Reasoning models | GPT-5 / o-series strengths in deep planning passes |
| Sandboxed shell | Apple Seatbelt on macOS, Landlock on Linux for safer command exec |
How to install
```bash
# macOS / Linux / WSL
npm install -g @openai/codex

# Or via Homebrew
brew install codex
```
After install, run `codex` in any project directory. The first run prompts for ChatGPT login (browser OAuth) or `OPENAI_API_KEY`. Drop an `AGENTS.md` in your repo root with project conventions and Codex CLI will read it on every session start.
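To make the last step concrete, here is one way to bootstrap a starter AGENTS.md from the shell. The section names and the conventions inside are illustrative only — AGENTS.md is free-form markdown, not a fixed schema — so adapt them to your project:

```shell
# Write a minimal AGENTS.md at the repo root (contents are example
# conventions, not a prescribed format).
cat > AGENTS.md <<'EOF'
# Project conventions

## Build & test
- Install deps: npm ci
- Run tests: npm test

## Style
- TypeScript strict mode; avoid `any`.
- Conventional Commits for all commit messages.

## Avoid
- Never edit files under vendor/ or dist/.
EOF
```

Codex CLI (and any other agent that honors the convention) will pick this file up automatically at session start.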
What to install in Codex CLI
Codex CLI has two main configuration surfaces: `~/.codex/config.toml` for global settings and a project-level `AGENTS.md` for instructions. Three TokRepo packs to start:
- AGENTS.md Templates — battle-tested instruction files for Python / Node / Go / Rust monorepos
- MCP Server Stack — Postgres, GitHub, Filesystem MCP servers shared with Claude Code
- OpenAI Reasoning Workflow Pack — codex commands tuned for o-series strengths
Run `tokrepo install` against any of these and your `AGENTS.md` plus `~/.codex/` directory pick up the same baseline that OpenAI's research previews demonstrate.
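For orientation, a global config might look like the sketch below. The key names are illustrative examples of the kinds of settings this file holds — check the CLI's own documentation for the verified schema before copying:

```toml
# ~/.codex/config.toml -- a sketch; key names here are assumptions,
# not a verified schema.
model = "gpt-5"               # default reasoning model for sessions
approval_mode = "auto-edit"   # mirrors the --auto-edit CLI flag
```

Project-level behavior still comes from `AGENTS.md`; this file only sets cross-project defaults.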
Common pitfalls
- Mixing API key and ChatGPT auth — pick one; mixing causes the CLI to ping both endpoints with conflicting quotas.
- Forgetting `--full-auto` cost — full auto runs all proposed shell commands; pair it with sandboxing or you'll re-run `npm install` 30 times in a refactor loop.
- AGENTS.md size creep — Codex re-reads it every turn. Keep it under 200 lines or move chunks into per-directory `AGENTS.md` files.
- MCP path resolution — Codex looks at `~/.codex/mcp.json`, not `.claude/`. Don't symlink them; convert with the included `tokrepo migrate` flag.
- macOS Seatbelt blocks GUI tools — if your tasks need browser automation, run with `--sandbox=none` only inside a disposable VM, never on your dev box.
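One way to act on the size-creep advice above is to split a long root AGENTS.md into per-directory files that sit next to the code they govern. The directory names and conventions below are illustrative:

```shell
# Keep the root AGENTS.md short; move area-specific rules into
# per-directory files (paths here are examples).
mkdir -p services/api web
printf '# API conventions\n- All handlers return typed errors.\n' > services/api/AGENTS.md
printf '# Web conventions\n- Components live in web/src/components.\n' > web/AGENTS.md

# Sanity check: each file stays small and scoped.
wc -l services/api/AGENTS.md web/AGENTS.md
```

Each directory-level file only needs the rules that differ from the root, which keeps every file well under the 200-line budget.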
Relationship to other tools
Codex CLI's biggest win is the AGENTS.md convention — many agents now respect that file, so writing it well pays off across your toolchain. Compared to Claude Code, Codex CLI has a less mature subagent ecosystem but tighter integration with ChatGPT Pro accounts. Compared to Gemini CLI, it lacks the multimodal smoothness but offers the o-series reasoning models for hard refactors. Most teams in 2026 use Codex CLI as their AGENTS.md authoring environment and rotate other agents in for specific phases.
Frequently asked questions
Is Codex CLI free?
The CLI itself is Apache-2.0 open source and free. To run it you need either an OpenAI API key (pay-as-you-go) or a ChatGPT Pro/Plus subscription. ChatGPT Pro at $200/mo gives the most generous Codex CLI quota; Plus at $20/mo works for light usage.
What is AGENTS.md and why does Codex CLI use it?
AGENTS.md is a vendor-neutral file at your project root that tells any agent how to work in that codebase — coding conventions, build commands, things to avoid. Codex CLI was the first major agent to ship support, and the spec at agents.md is now adopted by Aider, Continue, and others. Write once, port across tools.
Does Codex CLI work with Claude or Gemini models?
Officially Codex CLI ships pointing at OpenAI models (GPT-5, o-series). Community forks let you swap base URLs to OpenRouter or Anthropic-compatible endpoints, but tool-call schemas differ enough that it's brittle. The mainstream path is OpenAI models for Codex CLI, separate CLIs for other vendors.
Codex CLI vs Claude Code — which is better?
Different strengths. Claude Code has the most mature subagent and hooks ecosystem. Codex CLI has the best ChatGPT Pro integration and started the AGENTS.md convention. Many devs run both: Codex CLI when they want OpenAI's reasoning models, Claude Code for long-running multi-step refactors.
Can Codex CLI run in CI / non-interactively?
Yes. Use `codex --quiet --full-auto` with `OPENAI_API_KEY` set. Common pattern: GitHub Actions job that runs Codex CLI to draft a PR review or codemod, then humans approve before merge. Combine with `--sandbox=workspace-write` to limit damage.
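The CI pattern above could be wired up roughly like this. The flags and the secret name mirror what is described in this section; the workflow layout, prompt wording, and file path are a sketch, not a verified recipe:

```yaml
# .github/workflows/codex-review.yml -- illustrative sketch of the
# non-interactive CI pattern; adjust the prompt and steps to your repo.
name: codex-review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g @openai/codex
      - run: codex --quiet --full-auto --sandbox=workspace-write "Review this diff and draft PR comments"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Keeping `--sandbox=workspace-write` in the CI invocation means a runaway run can only touch the checked-out workspace, which pairs well with requiring human approval before merge.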