Scripts · Apr 2, 2026 · 2 min read

LLM — CLI Swiss Army Knife for Language Models

Run prompts from the terminal, log everything to SQLite, and manage 50+ providers via plugins. Built by the co-creator of Django. 11K+ GitHub stars.

TokRepo Picks · Community
## Quick Use

Use it first, then decide how deep to go

The commands below cover what to copy first: install, key setup, and your first prompts.

```bash
pip install llm
```

```bash
# Set your API key
llm keys set openai
# Paste your key when prompted

# Run a prompt
llm "Explain quantum computing in one paragraph"

# Pipe content
cat error.log | llm "What's wrong here?"

# Start a conversation
llm chat -m claude-3.5-sonnet

# Use a different model
llm -m gpt-4o "Write a Python function to parse CSV"
```

Install plugins for more providers:

```bash
llm install llm-claude-3  # Anthropic Claude
llm install llm-gemini    # Google Gemini
llm install llm-ollama    # Local models
llm install llm-gpt4all   # Local GPT4All
```

---
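Once installed, `llm` also composes well from your own scripts. Here is a minimal sketch of driving the CLI from Python; the `build_llm_cmd` helper is hypothetical (not part of the tool), and it only assembles an argv list using the flags shown above (`-m`, `-t`, `--system`), ready to hand to `subprocess.run`:

```python
import subprocess

def build_llm_cmd(prompt, model=None, template=None, system=None):
    """Assemble an argv list for the llm CLI from the flags shown above.

    This helper is a hypothetical convenience wrapper, not part of llm itself.
    """
    cmd = ["llm"]
    if model:
        cmd += ["-m", model]        # e.g. gpt-4o
    if template:
        cmd += ["-t", template]     # a saved prompt template
    if system:
        cmd += ["--system", system]
    cmd.append(prompt)
    return cmd

# Equivalent of: llm -m gpt-4o "Write a Python function to parse CSV"
cmd = build_llm_cmd("Write a Python function to parse CSV", model="gpt-4o")
# subprocess.run(cmd, capture_output=True, text=True)  # needs llm + an API key
```

The actual `subprocess.run` call is commented out because it requires `llm` to be installed and a key configured.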
## Intro

LLM is a command-line tool and Python library by Simon Willison (co-creator of Django), with 11,500+ GitHub stars, for interacting with large language models. It works with OpenAI, Anthropic, Google, local models, and 50+ providers via a plugin ecosystem. The killer feature: every prompt and response is automatically logged to a local SQLite database, making it trivial to search, analyze, and audit your LLM usage. With prompt templates, embeddings support, and a pipe-friendly design, LLM is the power user's tool for working with AI from the terminal.

**Works with:** OpenAI, Anthropic Claude, Google Gemini, Ollama, llama.cpp, GPT4All, and 50+ providers via plugins. Best for developers who want a composable, scriptable interface to LLMs. Setup time: under 2 minutes.

---
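Because the log database is ordinary SQLite, any SQL client, including Python's stdlib `sqlite3`, can query it. A minimal sketch, using an in-memory stand-in for the log database and assuming only a `responses` table with a `model` column (the column the `llm logs --sql` example in this post relies on; the real schema has more columns):

```python
import sqlite3

# Stand-in for the llm log database; the real responses table has more columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE responses (model TEXT, prompt TEXT, response TEXT)")
con.executemany(
    "INSERT INTO responses (model, prompt, response) VALUES (?, ?, ?)",
    [
        ("gpt-4o", "p1", "r1"),
        ("gpt-4o", "p2", "r2"),
        ("claude-3.5-sonnet", "p3", "r3"),
    ],
)

# Same aggregate as: llm logs --sql "SELECT model, count(*) FROM responses GROUP BY model"
rows = con.execute(
    "SELECT model, count(*) FROM responses GROUP BY model ORDER BY model"
).fetchall()
print(rows)  # [('claude-3.5-sonnet', 1), ('gpt-4o', 2)]
```

Point the `connect()` call at your own log database path to run the same aggregate over real history.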
## LLM Core Features

### Prompt Logging to SQLite

Every interaction is saved automatically:

```bash
# View recent prompts
llm logs

# Search logs
llm logs -q "quantum computing"

# SQL queries on your LLM history
llm logs --sql "SELECT model, count(*) FROM responses GROUP BY model"

# Export as JSON
llm logs --json | jq '.[] | .response'
```

Database stored at `~/.llm/logs.db`: open it with any SQLite tool.

### Plugin Ecosystem

50+ plugins for different providers and features:

```bash
# Model providers
llm install llm-claude-3    # Anthropic Claude 3/4
llm install llm-gemini      # Google Gemini
llm install llm-ollama      # Local Ollama models
llm install llm-gpt4all    # Local GPT4All
llm install llm-mistral     # Mistral AI
llm install llm-replicate   # Replicate hosted models

# Features
llm install llm-cmd         # Generate shell commands
llm install llm-cluster     # Cluster embeddings
llm install llm-embed-jina  # Jina embeddings

# List installed plugins
llm plugins
```

### Prompt Templates

Save and reuse prompts:

```bash
# Create a template
llm --system "You are a code reviewer. Be concise." \
  "Review this code" --save review

# Use the template
cat main.py | llm -t review

# List templates
llm templates
```

### Embeddings

Generate, store, and search vector embeddings:

```bash
# Embed a collection of files
llm embed-multi docs -m 3-small --files docs/ '*.md'

# Semantic search
llm similar docs -c "How does authentication work?"

# Cluster documents
llm install llm-cluster
llm cluster docs 5  # 5 clusters
```

### Pipe-Friendly Design

```bash
# Summarize a file
cat README.md | llm "Summarize this in 3 bullets"

# Process command output
git diff | llm "Write a commit message for this diff"

# Chain with other tools
curl -s https://api.example.com/data | llm "Parse this JSON and list the top 5"

# Batch processing
cat urls.txt | while read url; do
  curl -s "$url" | llm "Extract the main topic" >> topics.txt
done
```

### Python Library

```python
import llm

model = llm.get_model("gpt-4o")
response = model.prompt("Explain transformers")
print(response.text())

# With conversation
conversation = model.conversation()
response1 = conversation.prompt("What is RAG?")
response2 = conversation.prompt("How do I implement it?")
```

---

## FAQ

**Q: What is LLM?**
A: LLM is a CLI tool and Python library with 11,500+ GitHub stars by Simon Willison for interacting with 50+ LLM providers, featuring automatic SQLite logging, prompt templates, embeddings, and a rich plugin ecosystem.

**Q: How is LLM different from other CLI AI tools?**
A: LLM's unique strength is its SQLite logging (every interaction is searchable and queryable) and its plugin ecosystem (50+ providers). It is also the most Unix-philosophy-aligned AI tool: designed to pipe, compose, and script.

**Q: Is LLM free?**
A: Yes, open source under Apache-2.0. You bring your own API keys.

---
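The semantic search shown earlier (`llm similar`) ranks stored embeddings by how similar each one is to the embedding of your query. A minimal sketch of that ranking step, using made-up 3-dimensional vectors (real embedding vectors are far longer) and assuming cosine similarity as the metric:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up document embeddings keyed by id; real ones come from an embedding model.
docs = {
    "auth.md": [0.9, 0.1, 0.0],
    "deploy.md": [0.1, 0.9, 0.1],
}
query = [0.8, 0.2, 0.0]  # pretend embedding of "How does authentication work?"

# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranked[0])  # auth.md
```

This is only the ranking math; the tool handles embedding generation and storage for you.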
## 🙏 Source & Thanks

> Created by [Simon Willison](https://github.com/simonw). Licensed under Apache-2.0.
>
> [llm](https://github.com/simonw/llm) — ⭐ 11,500+

Thanks to Simon Willison for building the most composable CLI for language models.

