LLM — CLI Tool for 100+ Language Models
LLM is a CLI tool and Python library for accessing 100+ LLMs via remote APIs or locally. 11.5K+ GitHub stars. SQLite logging, embeddings, structured data extraction. Apache 2.0.
What it is
LLM is a command-line tool and Python library by Simon Willison that provides a unified interface to 100+ language models. It works with both API-based models (OpenAI, Anthropic, Google) and local models (via plugins). Every prompt and response is automatically logged to a SQLite database for later analysis.
Developers, researchers, and power users who work with multiple LLM providers and want a single CLI to query them all will find LLM particularly useful. It eliminates the need to install separate SDKs for each provider.
How it saves time or tokens
LLM's plugin architecture means you install once and add providers as needed. The SQLite logging captures every interaction automatically, so you never lose a useful response. The embeddings subsystem lets you build semantic search over your logged conversations without a separate vector database.
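As a sketch of that last point, the embed-multi and similar subcommands can index and then search your logged responses. The collection name here is arbitrary, and the responses table and its columns are assumptions about the default logs schema, so verify them against your own database first:

# Embed logged responses into a collection kept in the logs database.
# embed-multi treats the first SQL column as the ID and embeds the rest.
llm embed-multi past-answers \
  --sql 'select id, response from responses' \
  -d "$(llm logs path)" \
  -m 3-small --store
# Semantic search over that collection
llm similar past-answers -d "$(llm logs path)" -c 'distributed consensus explanations'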
How to use
- Install LLM via pip and configure your API keys for the providers you want to use.
- Run prompts from the command line with llm, or use the Python API in scripts.
- Query your conversation history using llm logs and the built-in SQLite database.
Example
# Install and configure
pip install llm
llm keys set openai
# Simple prompt
llm 'Explain the CAP theorem in three sentences'
# Use a specific model
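# (requires the Anthropic plugin: llm install llm-anthropic)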
llm -m claude-3.5-sonnet 'Write a Python decorator for retry logic'
# View conversation logs
llm logs -n 5
# Generate embeddings
llm embed -m ada-002 -c 'vector search query'
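A few other everyday patterns, shown as a sketch (the file and model names are placeholders):

# Pipe a file in, with instructions as a system prompt
cat app.py | llm -s 'Explain what this code does'
# Continue the most recent conversation
llm -c 'Now make it more concise'
# Set a default model so -m is no longer needed
llm models default gpt-4o-mini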
Related on TokRepo
- Local LLM tools — Tools for running language models locally including Ollama and LM Studio.
- AI tools for coding — CLI tools and agents for AI-assisted development.
Common pitfalls
- Not installing provider plugins. LLM ships with OpenAI support by default; other providers like Anthropic or local models require separate plugin installation via llm install llm-anthropic.
- Forgetting that logs accumulate. The SQLite database grows with every prompt. Periodically review and clean old logs if disk space is a concern (see the sketch after this list).
- Assuming all models support the same features. Streaming, system prompts, and tool use vary by provider. Check the plugin documentation for model-specific capabilities.
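For the log-growth pitfall, a few built-in commands help, and old rows can be pruned directly with sqlite3. The deletion below is a sketch that assumes the default schema's responses table and datetime_utc column; confirm both before running it:

# Where the log database lives, and whether logging is enabled
llm logs path
llm logs status
# Turn logging off (and back on) for sensitive or throwaway work
llm logs off
llm logs on
# Prune entries older than 90 days -- assumes the default schema
sqlite3 "$(llm logs path)" "delete from responses where datetime_utc < date('now', '-90 days')"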
Frequently Asked Questions
How many models does LLM support?
LLM supports 100+ models through its plugin system. Built-in support covers OpenAI models. Additional plugins add Anthropic, Google, Mistral, local models via Ollama, and many more.
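For example, installing two provider plugins and then listing everything that becomes available (both plugin names are real packages, but check the plugin directory for current options):

# Add Anthropic and Mistral support, then list all registered models
llm install llm-anthropic llm-mistral
llm models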
Is LLM free to use?
LLM itself is free and open source under the Apache 2.0 license. You pay only for the API calls to commercial providers like OpenAI or Anthropic. Local models run at no additional cost.
Can LLM run local models?
Yes. Through plugins like llm-ollama and llm-gpt4all, LLM can query locally running models. The same CLI interface works regardless of whether the model is remote or local.
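A sketch of the Ollama route, assuming Ollama is running locally; the model name is just an example:

# Install the plugin, pull a model with Ollama, then query it
llm install llm-ollama
ollama pull llama3.2
llm -m llama3.2 'Explain the CAP theorem in three sentences'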
How does the logging work?
Every prompt you send and every response you receive is automatically stored in a local SQLite database. You can query this database with SQL, search your history, and analyze token usage patterns.
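For instance, a quick per-model prompt count, assuming the default schema's responses table:

sqlite3 "$(llm logs path)" \
  'select model, count(*) from responses group by model order by count(*) desc'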
Can I use LLM as a Python library?
Yes. LLM provides a Python API alongside the CLI. You can import it in scripts, use it in notebooks, and integrate it into larger applications while retaining the automatic logging feature.
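A minimal sketch, runnable from the shell as a one-liner; the model name is an example, and the calls (llm.get_model, .prompt, .text) follow the documented Python API:

python3 -c "import llm; print(llm.get_model('gpt-4o-mini').prompt('Summarize the CAP theorem in one sentence').text())"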
Citations (3)
- LLM GitHub — LLM CLI tool by Simon Willison
- LLM Docs — LLM plugin system documentation
- OpenAI API Docs — OpenAI API for language model access
Source & Thanks
Created by Simon Willison. Apache 2.0. simonw/llm — 11,500+ GitHub stars