Main
Pick your provider strategy first: use free tiers (Gemini/Qwen), OAuth for subscriptions (Claude/Codex), or point to a local OpenAI-compatible base URL.
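A minimal sketch of the per-session switching described below under "Source-backed notes"; `/auth`, `/provider`, and `/model` come from the README, but the argument syntax and model ID here are assumptions (check `/help` in a live session):

```
# inside an interactive llxprt session
/auth                  # choose an auth method: free tier, OAuth, or API key
/provider openai       # switch the active provider (argument syntax assumed)
/model gpt-4.1         # pick a model for this session (model ID is a placeholder)
```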
Use non-interactive mode for automation (CI scripts) and the interactive REPL for exploratory coding and refactors.
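For the CI case, a hedged one-liner: the `llxprt` binary name is inferred from the npm package, and the `-p` prompt flag is borrowed from the upstream Gemini CLI, so verify both with `llxprt --help`:

```
# one-shot, non-interactive run suitable for a CI step
llxprt -p "Summarize the failing tests in ./reports and suggest fixes" > triage.md
```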
When you need tools, connect external MCP integrations (the README notes MCP support) and keep secrets in provider-specific configs rather than in your repo.
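A hypothetical MCP registration to make the secrets point concrete. The settings path and `mcpServers` schema follow the upstream Gemini CLI convention and are assumptions for llxprt; the GitHub MCP server package is real, but confirm the wiring against the README's MCP section:

```
# hypothetical: register an MCP server, keeping the token in the environment
# (settings path and schema assumed from the upstream Gemini CLI convention)
mkdir -p ~/.llxprt
cat > ~/.llxprt/settings.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "$GITHUB_TOKEN" }
    }
  }
}
EOF
```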
Source-backed notes
- README shows provider commands like `/auth`, `/provider`, and `/model` for Gemini/Qwen/Anthropic/Codex.
- README includes install options via Homebrew and npm (`npm install -g @vybestack/llxprt-code`) and a no-install npx example.
- README describes both interactive REPL workflows and non-interactive single-command mode for automation.
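The install paths from the notes above, collected in one place; the npm command is verbatim from the README, while the Homebrew formula name and the exact npx invocation are inferred from the package name and may differ:

```
# Homebrew (formula name assumed)
brew install llxprt-code

# npm, global install (verbatim from the README)
npm install -g @vybestack/llxprt-code

# no-install, one-off run via npx (invocation inferred from the package name)
npx @vybestack/llxprt-code
```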
FAQ
- Do I have to use one provider? No; the README is built around switching providers and models per session.
- Is it only interactive? No; the README shows a non-interactive mode for scripted usage and CI-style automation.
- Can I use it with local models? Yes; the README mentions local providers (LM Studio/llama.cpp/Ollama) and OpenAI-compatible base URLs.
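To ground the local-model answer, a sketch of pointing a session at a local OpenAI-compatible endpoint. The `/baseurl` command name is an assumption (the README confirms base-URL support, not the exact command), while the port is Ollama's documented default for its OpenAI-compatible API:

```
# inside an interactive session (command names assumed; see /help)
/provider openai
/baseurl http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
/model llama3.1                      # whichever model the local server exposes
```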