CLI Tools · Apr 1, 2026 · 1 min read

AIChat — All-in-One LLM CLI with 20+ Providers

AIChat is a CLI for 20+ LLM providers with a shell assistant, RAG, agents, and function calling. 9.7K+ GitHub stars, dual-licensed MIT/Apache 2.0.

TL;DR
AIChat provides a unified CLI for 20+ LLM providers with shell assistant, RAG, and agents.
§01

What it is

AIChat is a command-line interface that connects to 20+ LLM providers including OpenAI, Anthropic, Google, Mistral, Ollama, and more. It provides interactive chat, shell assistant mode, RAG (retrieval-augmented generation), function calling, and agent capabilities from a single binary.

AIChat targets developers and power users who prefer terminal-based workflows and want one tool for all their LLM interactions instead of separate CLIs for each provider.

§02

How it saves time or tokens

AIChat's shell assistant mode turns natural-language descriptions into shell commands. Instead of remembering complex command-line flags, you describe what you want and AIChat produces the command. The RAG feature indexes local files and provides context-aware answers without manual copy-pasting.

Switching between providers is a config change, so you can test the same prompt across models without leaving the terminal.
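A minimal config sketch of what that looks like (illustrative only, not a complete reference; the api_key values are placeholders and the model names are examples):

```yaml
# ~/.config/aichat/config.yaml (illustrative sketch)
model: openai:gpt-4o              # default model, in client:model form
clients:
- type: openai
  api_key: sk-...                 # placeholder
- type: claude
  api_key: sk-ant-...             # placeholder
- type: ollama
  api_base: http://localhost:11434
  models:
  - name: llama3.1
```

With several clients configured, passing `-m`/`--model` with a different `client:model` value switches providers per invocation, no separate CLI needed.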

§03

How to use

  1. Install AIChat: cargo install aichat or download a prebuilt binary
  2. Configure your API keys in ~/.config/aichat/config.yaml
  3. Start interactive chat: aichat
  4. Use shell mode: aichat -e 'find all Python files larger than 1MB'
§04

Example

# Interactive chat with Claude (model ids use the client:model form)
aichat --model claude:claude-sonnet-4-20250514

# Shell assistant: describe what you need
aichat -e 'compress all PNG files in current directory to 80% quality'
# Output: find . -name '*.png' -exec convert {} -quality 80 {} \;

# RAG: open (or create) a RAG named "docs" and ask a question
# (a new RAG interactively prompts for the files or URLs to index)
aichat --rag docs 'What is the API rate limit?'

# Function calling: run a pre-defined agent
# ("my-agent" is a placeholder for an agent you have configured)
aichat --agent my-agent 'Check the weather in San Francisco'
§05


Common pitfalls

  • Each provider requires its own API key configured in the YAML file; missing keys produce silent failures
  • RAG indexing uses local embeddings by default; large document sets can be slow without GPU acceleration
  • Shell assistant mode executes suggested commands only with confirmation; review commands before accepting

Frequently Asked Questions

Which LLM providers does AIChat support?

AIChat supports OpenAI, Anthropic, Google Gemini, Mistral, Ollama, Groq, Perplexity, Azure OpenAI, AWS Bedrock, and more. The full list exceeds 20 providers. Each provider is configured in the YAML config file.

Does AIChat work offline with local models?

Yes. AIChat supports Ollama and other local model providers. Configure an Ollama endpoint in the config file and use any locally running model for chat, RAG, and shell assistance without internet access.

How does RAG work in AIChat?

AIChat indexes files using embeddings and stores them in a local vector database. When you ask a question with --rag, it retrieves relevant document chunks and includes them as context in the LLM prompt. This provides accurate answers grounded in your documents.
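Conceptually, the retrieval step ranks stored chunk embeddings by similarity to the query embedding and prepends the best matches to the prompt. The sketch below is illustrative only, not AIChat's actual code; `retrieve` and `build_prompt` are hypothetical names, and real embeddings would have hundreds of dimensions rather than two:

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, top_k=2):
    # chunks: list of (text, embedding) pairs from the local vector store
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

def build_prompt(question, context_chunks):
    # include the retrieved chunks as grounding context for the LLM
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The same embed-rank-prepend pattern underlies most local RAG tools; the quality of the answer depends mainly on the chunking and the embedding model.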

Can AIChat call external tools?

Yes. AIChat supports function calling where the LLM can invoke pre-defined tools (HTTP requests, shell commands, file operations) during a conversation. This enables agent-like behavior from the terminal.

Is AIChat open source?

Yes. AIChat is dual-licensed under MIT and Apache 2.0. The source code is available on GitHub. You can build from source with Rust's cargo build system.

Citations (3)
  • AIChat GitHub — AIChat is a CLI for 20+ LLM providers with 9.7K+ GitHub stars
  • AIChat README — RAG and function calling capabilities in CLI tools
  • Ollama — Ollama local model serving

Source & Thanks

sigoden/aichat — 9,700+ GitHub stars
