CLI Tools · Apr 6, 2026 · 2 min read

OpenAI Codex CLI — Terminal Coding Agent by OpenAI

Official OpenAI terminal coding agent. Reads your codebase, writes code, runs commands, and manages files from the command line. Multi-model support with sandbox execution. 20,000+ stars.

TL;DR
OpenAI Codex CLI is a terminal coding agent with sandbox execution and multi-model support.
§01

What it is

OpenAI Codex CLI is the official terminal-based coding agent from OpenAI. It reads your codebase, writes code, executes commands, and manages files directly from the command line. Features include sandbox execution, multi-model support (o4-mini, o3, GPT-4o), and configurable approval modes from fully manual to fully autonomous.

Codex CLI is designed as OpenAI's answer to Claude Code, targeting developers who prefer OpenAI models for their coding workflows.

§02

How it saves time or tokens

Codex CLI provides three approval modes that control the level of automation. In 'suggest' mode, it proposes edits and commands for your approval. In 'auto-edit' mode, it applies file edits automatically but asks permission before running commands. In 'full-auto' mode, it operates autonomously. The sandbox execution environment disables network access and restricts writes to the project directory, which makes full-auto mode reasonably safe for development tasks.

§03

How to use

  1. Install Codex CLI:
npm install -g @openai/codex
  2. Start an interactive session:
codex
  3. Run with a specific task:
codex 'Add input validation to the signup endpoint'
codex --approval-mode full-auto 'Write tests for the auth module'
codex --model o3 'Design the database schema for user preferences'
§04

Example

Using Codex CLI with different modes and models:

# Suggest mode: review each change before applying
codex --approval-mode suggest 'Refactor the error handling in api/routes.ts'

# Auto-edit mode: auto-apply file changes, ask before running commands
codex --approval-mode auto-edit 'Add TypeScript types to all API handlers'

# Full auto with a specific model
codex --approval-mode full-auto --model o4-mini 'Fix the failing test in auth.test.ts'

# Project-specific context goes directly in the prompt
codex --model o3 \
  'This is a Next.js project with Prisma ORM.
   Add pagination to the /api/posts endpoint.'
§05

Common pitfalls

  • Running in 'full-auto' mode on production codebases without reviewing the changes first. Start with 'suggest' mode for unfamiliar tasks.
  • Not setting the OPENAI_API_KEY environment variable before running. Export your API key in your shell profile.
  • Codex CLI's sandbox disables network access by default, which can cause tests that call external APIs to fail. Configure sandbox permissions for your project's needs.
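The missing-key pitfall is easiest to avoid by exporting the key once in your shell profile. A minimal sketch (the key value below is a placeholder, not a real key):

```shell
# Add to ~/.bashrc or ~/.zshrc so every shell session can reach the OpenAI API.
export OPENAI_API_KEY="sk-your-key-here"   # placeholder: substitute your real key

# Sanity check: prints "key is set" only when the variable is exported and non-empty.
echo "${OPENAI_API_KEY:+key is set}"
```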

Frequently Asked Questions

How does Codex CLI compare to Claude Code?

Both are terminal-based coding agents that read code, write files, and execute commands. Codex CLI uses OpenAI models (o4-mini, o3, GPT-4o) while Claude Code uses Anthropic Claude models. Each has different strengths depending on the task and model preference.

What is sandbox execution?

Codex CLI runs commands in a sandboxed environment by default. Network access is disabled and filesystem access is restricted to the project directory. This prevents accidental modifications to system files or unintended network requests.
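When a project's tests genuinely need network access, recent Codex CLI releases read sandbox settings from a config file under ~/.codex/. The file location and key names below (sandbox_mode, network_access) are assumptions that vary by release; treat this as a hypothetical sketch and confirm the schema against the project README before relying on it:

```shell
# Hypothetical sketch: relax the sandbox for a project whose tests need network
# access. Key names (sandbox_mode, network_access) are assumptions; verify the
# config schema for your installed Codex CLI version.
mkdir -p "$HOME/.codex"
cat >> "$HOME/.codex/config.toml" <<'EOF'
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = true
EOF
```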

What models does Codex CLI support?

Codex CLI supports o4-mini (fast, cost-effective), o3 (highest capability), and GPT-4o. You can switch models per invocation with the --model flag.

Can Codex CLI work offline?

No. Codex CLI sends your code and prompts to OpenAI's API for processing. It requires an internet connection and an OpenAI API key.

Is Codex CLI open source?

Yes. Codex CLI is open source on GitHub under the Apache 2.0 license. You can inspect the code, contribute, and fork it.


Source & Thanks

Created by OpenAI. Licensed under Apache 2.0.

codex — ⭐ 20,000+

Thanks to OpenAI for making terminal-based AI coding mainstream.
