Main
Treat gptme as a "tool runner with a chat UI":
- Put it in a repo and feed it structured input (`git diff`, `make test`, logs) so it can act on concrete artifacts.
- Prefer `-n` in CI so it never blocks for confirmation; use `-y` for supervised runs you can interrupt.
- If you need web access, install the optional extra (`gptme[browser]`) and validate the sandboxing policy for your environment.
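The "feed it structured input" pattern above can be sketched in plain shell. This is a minimal sketch, not gptme's documented usage: the log line is a hypothetical stand-in for real `make test` output, and the actual `gptme` invocation is left commented out because it requires an installed binary and an API key.

```shell
# Hypothetical test-failure output standing in for real `make test` logs.
test_log="FAILED tests/test_parser.py::test_roundtrip - AssertionError"

# Assemble a concrete, structured prompt from the artifact.
prompt="make test failed with:
$test_log
Suggest a fix."

printf '%s\n' "$prompt"
# gptme -n "$prompt"    # -n: non-interactive, so a CI run never blocks on confirmation
```

The point is that the agent receives a specific artifact (a failing test line) rather than a vague request, which is what makes the output actionable.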
README excerpt (verbatim)
gptme
/ʤiː piː tiː miː/
what does it stand for?
Getting Started • Downloads • Website • Documentation
📜 A personal AI agent that runs anywhere a terminal runs — your laptop,
ssh sessions, tmux, headless servers, CI pipelines.
Provider-agnostic, local-first, and unconstrained: ships with shell, Python, web,
vision, and everything else an agent needs.
A great coding agent, but general-purpose enough to assist in all kinds of knowledge-work.
Free and open-source. Works with Anthropic, OpenAI, Google, xAI, DeepSeek, OpenRouter,
or fully local via llama.cpp — your data, your models, your terminal.
FAQ
Q: Does gptme include MCP support?
A: The README states MCP support is included by default; you can add extras for more capabilities.
Q: What Python version is required?
A: The prerequisites list Python 3.10 or newer.
Q: How do I make it safe for CI?
A: Use `-n` for fully non-interactive runs, and consider `-y` only when a human can monitor and interrupt.
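The CI answer above can be sketched as a GitHub Actions job. This is a hypothetical config fragment, not from the gptme docs: the workflow name, the `pip install gptme` step, and the `ANTHROPIC_API_KEY` secret name are assumptions; only the `-n` flag comes from this document.

```yaml
name: gptme-review        # hypothetical workflow name
on: [push]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install gptme          # assumed install method
      # -n keeps the run fully non-interactive so CI never hangs on a confirmation
      - run: gptme -n "Review the latest commit for bugs: $(git log -1 -p)"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}   # assumed secret name
```

Because `-n` never prompts, a misconfigured run fails fast instead of stalling the pipeline, which is the behavior you want in unattended jobs.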