SWE-agent — Automated Bug Fixing with AI Agents
AI agent system from Princeton NLP that autonomously fixes GitHub issues. Navigates codebases, writes patches, and submits pull requests. Resolves 12% of real-world SWE-bench tasks. 15,000+ stars.
What it is
SWE-agent is an AI agent system from Princeton NLP that autonomously fixes GitHub issues. Given a bug report or feature request, it navigates the codebase, understands the problem, writes a patch, and submits a pull request. It uses a custom agent-computer interface (ACI) designed for code editing that gives the AI structured commands for file navigation, editing, and testing.
SWE-agent targets research teams studying AI coding capabilities and development teams exploring automated bug fixing. It demonstrates how purpose-built tool interfaces improve AI agent performance on real-world software engineering tasks.
Why it saves time or tokens
SWE-agent's custom ACI reduces the tokens needed for code exploration compared to generic tool-use approaches. Instead of reading entire files, the agent uses structured commands to navigate, search, and edit specific sections. This focused interaction pattern means fewer wasted tokens on irrelevant code and faster convergence on the fix.
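To illustrate why structured search beats whole-file reads, here is a minimal, hypothetical sketch of a search_dir-style command. The function name mirrors SWE-agent's documented command; the output format and implementation are assumptions for illustration, not SWE-agent's actual code:

```python
import re
from pathlib import Path

def search_dir(root: str, pattern: str, max_hits: int = 10) -> list[str]:
    """Return compact 'path:line_no: text' hits rather than whole files,
    so the model only sees the lines that matched (illustrative sketch)."""
    rx = re.compile(pattern)
    hits: list[str] = []
    for path in sorted(Path(root).rglob("*.py")):
        for line_no, line in enumerate(path.read_text().splitlines(), start=1):
            if rx.search(line):
                hits.append(f"{path}:{line_no}: {line.strip()}")
                if len(hits) >= max_hits:
                    return hits  # cap output to keep the context window small
    return hits
```

Returning a capped list of matching lines instead of full file contents is the core token-saving idea: the agent sees only what it asked for.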
How to use
- Clone the SWE-agent repository and install dependencies
- Configure your LLM API key (OpenAI, Anthropic, etc.)
- Point SWE-agent at a GitHub issue and let it attempt a fix
Example
# Run SWE-agent on a GitHub issue
python run.py \
    --model_name claude-sonnet-4-20250514 \
    --data_path issue.json \
    --config_file config/default.yaml

# SWE-agent will:
# 1. Read the issue description
# 2. Explore the repository structure
# 3. Find relevant files
# 4. Write and test a patch
# 5. Submit the fix
| ACI Command | Purpose |
|---|---|
| find_file | Locate files by name |
| search_dir | Search for text patterns |
| open | View file contents |
| edit | Modify specific lines |
| submit | Propose the patch |
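A constrained command set like this can be validated before anything executes. The following sketch uses the command names from the table above; the parsing logic itself is a hypothetical illustration of how a restricted action space reduces errors:

```python
# Allowed ACI commands, taken from the table above.
ACI_COMMANDS = {"find_file", "search_dir", "open", "edit", "submit"}

def parse_command(raw: str) -> tuple[str, list[str]]:
    """Split a raw agent utterance into (command, args), rejecting
    anything outside the allowed action space."""
    parts = raw.split()
    if not parts or parts[0] not in ACI_COMMANDS:
        raise ValueError(f"unknown ACI command: {raw!r}")
    return parts[0], parts[1:]
```

Because every action must match a known command, a stray shell invocation like `rm -rf /` is rejected up front instead of reaching the repository.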
Related on TokRepo
- AI tools for coding — AI coding agents and developer tools on TokRepo
- AI tools for agents — autonomous AI agent frameworks
Common pitfalls
- SWE-agent works best on well-described issues with clear reproduction steps; vague issues produce unreliable patches
- The agent may propose patches that pass tests but introduce subtle regressions; human review is still essential
- Running SWE-agent on large repositories with complex build systems increases token costs and reduces success rates
Frequently Asked Questions
What success rate does SWE-agent achieve?
SWE-agent's success rate varies by model and benchmark. On the SWE-bench dataset of real-world GitHub issues, it resolves roughly 12% of tasks end-to-end. Success rates improve with more capable models and well-structured issue descriptions; check the SWE-bench leaderboard for current numbers.
Which language models does SWE-agent support?
SWE-agent supports OpenAI GPT-4, Anthropic Claude, and other LLM providers, configured in the SWE-agent config file. The choice of model significantly affects performance: more capable models produce better patches.
What kinds of bugs can SWE-agent fix?
SWE-agent performs best on bugs with clear symptoms, error messages, or test failures. It handles logic bugs, missing error handling, and simple feature additions well; complex architectural issues, performance bugs, and security vulnerabilities are harder for it to address autonomously.
What is the agent-computer interface (ACI)?
The ACI provides structured commands (find_file, search_dir, open, edit) instead of raw shell access. This constrains the agent's action space, reducing errors and token waste. The ACI also returns formatted output that the AI can parse reliably, improving its ability to navigate large codebases.
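For instance, a line-range edit command can be modeled as a pure function over a list of lines. This is a simplified sketch of the idea, not SWE-agent's actual edit semantics:

```python
def edit_lines(lines: list[str], start: int, end: int, replacement: str) -> list[str]:
    """Replace lines start..end (1-indexed, inclusive) with the replacement
    text: a simplified model of a line-range edit command."""
    if not (1 <= start <= end <= len(lines)):
        raise ValueError(f"invalid range {start}-{end} for {len(lines)} lines")
    return lines[:start - 1] + replacement.splitlines() + lines[end:]
```

Operating on explicit line ranges with validation gives the model unambiguous, checkable edits, in contrast to free-form shell edits where an off-by-one mistake silently corrupts a file.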
Can SWE-agent work on private repositories?
Yes. SWE-agent runs locally and accesses repositories through Git: you clone your private repo, create an issue description file, and point SWE-agent at it. The code and patches stay on your machine; only the issue description and code diffs are sent to the LLM API.
Citations (3)
- SWE-agent GitHub — SWE-agent from Princeton NLP for automated bug fixing
- SWE-bench — SWE-bench benchmark for evaluating AI coding agents
- SWE-agent Paper — Agent-computer interfaces for code editing
Source & Thanks
Created by Princeton NLP. Licensed under MIT.
SWE-agent — ⭐ 15,000+
Thanks to Princeton NLP for advancing the frontier of automated software engineering.