AIChat — All-in-One LLM CLI with 20+ Providers
AIChat is a CLI for 20+ LLM providers with a shell assistant, RAG, agents, and function calling. 9.7K+ GitHub stars. MIT/Apache 2.0 dual-licensed.
What it is
AIChat is a command-line interface that connects to 20+ LLM providers including OpenAI, Anthropic, Google, Mistral, Ollama, and more. It provides interactive chat, shell assistant mode, RAG (retrieval-augmented generation), function calling, and agent capabilities from a single binary.
AIChat targets developers and power users who prefer terminal-based workflows and want one tool for all their LLM interactions instead of separate CLIs for each provider.
How it saves time or tokens
AIChat's shell assistant mode interprets natural-language requests and generates shell commands. Instead of remembering complex command-line flags, you describe what you want and AIChat produces the command. The RAG feature indexes local files and provides context-aware answers without manual copy-pasting.
Switching between providers is a config change, so you can test the same prompt across models without leaving the terminal.
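For example, aichat's -m/--model flag selects a provider:model pair per invocation (the model names below are illustrative; substitute whatever your config defines):
# Run the same prompt against two providers, no config edit needed
aichat -m openai:gpt-4o 'Summarize this stack trace in one sentence'
aichat -m claude:claude-sonnet-4-20250514 'Summarize this stack trace in one sentence'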
How to use
- Install AIChat: cargo install aichat, or download a prebuilt binary
- Configure your API keys in ~/.config/aichat/config.yaml (a sketch follows this list)
- Start interactive chat: aichat
- Use shell mode: aichat -e 'find all Python files larger than 1MB'
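A minimal sketch of that config, following the field names in aichat's example config (the keys shown are placeholders, and the exact schema can vary between versions):
# ~/.config/aichat/config.yaml
model: openai:gpt-4o          # default provider:model
clients:
  - type: openai
    api_key: sk-...           # placeholder, not a real key
  - type: claude
    api_key: sk-ant-...       # placeholder, not a real key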
Example
# Interactive chat with Claude
aichat --model claude:claude-sonnet-4-20250514
# Shell assistant: describe what you need
aichat -e 'compress all PNG files in current directory to 80% quality'
# Output: find . -name '*.png' -exec convert {} -quality 80 {} \;
# RAG: index a directory and ask questions
aichat --rag docs                 # on first run, prompts for document paths (e.g. ./docs/)
aichat --rag docs 'What is the API rate limit?'
# Function calling via a pre-defined agent (my-agent is a placeholder)
aichat --agent my-agent 'Check the weather in San Francisco and create a calendar event'
Related on TokRepo
- Coding tools -- Terminal-based developer tools
- Automation tools -- Shell automation with AI
Common pitfalls
- Each provider requires its own API key in the YAML config; a missing key can fail silently (see the check after this list)
- RAG indexing uses local embeddings by default; large document sets can be slow to index without GPU acceleration
- Shell assistant mode asks for confirmation before executing a suggested command; still review each command before accepting it
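One way to catch a misconfigured provider early (a sketch; confirm the flag with aichat --help on your version) is to list the models aichat can actually resolve:
# Verify that your providers and models are registered before sending prompts
aichat --list-models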
Frequently Asked Questions
Which LLM providers does AIChat support?
AIChat supports OpenAI, Anthropic, Google Gemini, Mistral, Ollama, Groq, Perplexity, Azure OpenAI, AWS Bedrock, and more; the full list exceeds 20 providers. Each provider is configured in the YAML config file.
Can AIChat run offline with local models?
Yes. AIChat supports Ollama and other local model providers. Configure an Ollama endpoint in the config file and use any locally running model for chat, RAG, and shell assistance without internet access.
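A sketch of the relevant client entry, assuming aichat's openai-compatible client type pointed at Ollama's OpenAI-compatible endpoint (the schema and model name are assumptions to verify against your version):
# ~/.config/aichat/config.yaml (excerpt)
clients:
  - type: openai-compatible
    name: ollama
    api_base: http://localhost:11434/v1   # Ollama's OpenAI-compatible API
# then select a local model, e.g. aichat -m ollama:llama3.1:8b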
How does AIChat's RAG work?
AIChat indexes files using embeddings and stores them in a local vector database. When you ask a question with --rag, it retrieves the relevant document chunks and includes them as context in the LLM prompt, grounding answers in your documents.
Does AIChat support function calling?
Yes. AIChat supports function calling: the LLM can invoke pre-defined tools (HTTP requests, shell commands, file operations) during a conversation, enabling agent-like behavior from the terminal.
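In the config this is gated by a boolean (key name taken from aichat's example config; the tools themselves live in a separate, user-populated functions directory and are not bundled):
# ~/.config/aichat/config.yaml (excerpt)
function_calling: true   # allow the model to invoke declared tools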
Is AIChat open source?
Yes. AIChat is dual-licensed under MIT and Apache 2.0. The source code is available on GitHub, and you can build it from source with Rust's cargo build system.
Citations (3)
- AIChat GitHub — AIChat is a CLI for 20+ LLM providers with 9.7K+ GitHub stars
- AIChat README — RAG and function calling capabilities in CLI tools
- Ollama — local model serving
Source & Thanks
sigoden/aichat — 9,700+ GitHub stars
Related Assets
NAPI-RS — Build Node.js Native Addons in Rust
Write high-performance Node.js native modules in Rust with automatic TypeScript type generation and cross-platform prebuilt binaries.
Mamba — Fast Cross-Platform Package Manager
A drop-in conda replacement written in C++ that resolves environments in seconds instead of minutes.
Plasmo — The Browser Extension Framework
Build, test, and publish browser extensions for Chrome, Firefox, and Edge using React or Vue with hot-reload and automatic manifest generation.