AnythingLLM — All-in-One AI Desktop with MCP
Full-stack AI desktop app with RAG, agents, MCP support, and multi-model chat. AnythingLLM manages documents, embeddings, and vector stores in one private interface.
What it is
AnythingLLM is a full-stack AI desktop application that combines RAG (retrieval-augmented generation), AI agents, MCP server support, and multi-model chat in one interface. It manages documents, embeddings, and vector stores locally, keeping your data private. You can chat with your documents, run agents with tools, and switch between LLM providers.
This tool is for users who want a single application for all their AI needs without sending data to cloud services. It works with both cloud LLMs and local models.
How it saves time or tokens
AnythingLLM eliminates the need to set up separate tools for document chat, RAG pipelines, and agent workflows. Everything runs in one application with a unified interface. The built-in vector store removes the need for external database setup. MCP support means agents can use external tools without custom integration code. The estimated token cost is around 3,900 tokens per session.
How to use
- Download AnythingLLM for your platform.
- Configure your LLM provider (OpenAI, Anthropic, Ollama, etc.).
- Create a workspace and upload documents.
- Chat with your documents or configure agents.
# Or run via Docker
docker pull mintplexlabs/anythingllm
docker run -d \
  -p 3001:3001 \
  -v anythingllm_storage:/app/server/storage \
  mintplexlabs/anythingllm
# Access at http://localhost:3001
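For repeatable deployments, the same container can be described in a Compose file. This is an illustrative sketch: the image name, port, and volume mapping come from the docker run command above, while the service name and restart policy are assumptions.

```yaml
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    ports:
      - "3001:3001"
    volumes:
      - anythingllm_storage:/app/server/storage
    restart: unless-stopped

volumes:
  anythingllm_storage:
```

Run with `docker compose up -d` and the app is reachable at the same http://localhost:3001 address.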
Example
Configuring an MCP server in AnythingLLM:
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/docs"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
Agents in AnythingLLM can use these MCP tools to read files, search GitHub, and more.
Related on TokRepo
- RAG tools — More retrieval-augmented generation tools
- Self-hosted solutions — Private, self-hosted AI tools
Common pitfalls
- AnythingLLM runs locally and uses your machine's resources. Large document collections require significant RAM and disk space for embeddings.
- The built-in vector store is suitable for personal use. For team or production workloads, consider connecting an external vector database.
- MCP server support requires the servers to be installed on your machine. Ensure Node.js and required packages are available.
- Model switching changes behavior. Responses vary significantly between providers. Test important workflows on your target model.
- Docker deployment is recommended for consistent environments. Native installs may have dependency conflicts.
- Review the official documentation before deploying to production to ensure compatibility with your specific environment and requirements.
- Start with default settings and customize incrementally. Changing too many configuration options at once makes debugging harder.
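The Node.js pitfall above can be checked up front. A minimal sketch, assuming a POSIX shell: MCP servers launched via `npx` (as in the configuration example) need Node.js on the host.

```shell
# Check whether npx (bundled with Node.js) is available before
# configuring npx-launched MCP servers in AnythingLLM.
if command -v npx >/dev/null 2>&1; then
  MCP_READY=yes
  echo "npx available: npx-based MCP servers can be launched"
else
  MCP_READY=no
  echo "npx not found: install Node.js (which includes npx) first"
fi
```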
Frequently Asked Questions
Which LLM providers does AnythingLLM support?
AnythingLLM supports OpenAI, Anthropic Claude, Google Gemini, Ollama (local models), LM Studio, Azure OpenAI, and many other providers. You configure the provider in settings and switch between them per workspace.
How does document chat work?
Upload documents (PDF, DOCX, TXT, code files) to a workspace. AnythingLLM chunks, embeds, and stores them in the built-in vector database. When you ask questions, relevant chunks are retrieved and included in the prompt context.
Can I run AnythingLLM fully locally?
Yes. AnythingLLM integrates with Ollama and LM Studio for fully local LLM inference. Your data never leaves your machine. This is ideal for sensitive documents and air-gapped environments.
Does AnythingLLM support MCP?
Yes. AnythingLLM supports the Model Context Protocol, allowing agents to use MCP servers as tools. Configure MCP servers in the settings, and agents can call them during conversations.
Is AnythingLLM open-source?
Yes. AnythingLLM is open-source and available on GitHub. You can self-host it, modify it, and contribute to the project.
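The chunk-embed-retrieve flow described above can be sketched in a few lines. This is not AnythingLLM's internal code: it uses sentence-level chunking and a toy word-frequency "embedding" with cosine similarity so the example stays dependency-free, where the real application uses a proper text splitter and an embedding model.

```python
# Illustrative sketch of a RAG retrieval step: chunk a document,
# "embed" each chunk, and return the chunk most similar to the query.
import math
from collections import Counter

def chunk(text):
    """Naive sentence-level chunking (real splitters are smarter)."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def embed(text):
    """Toy embedding: a word-frequency vector standing in for a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Rank chunks by similarity to the query; the top-k would be
    injected into the prompt context."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("AnythingLLM stores embeddings locally. "
       "Vector search finds relevant chunks. "
       "The prompt includes retrieved context.")
chunks = chunk(doc)
top = retrieve("which chunks are relevant to vector search", chunks)
# top -> ["Vector search finds relevant chunks."]
```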
Citations (3)
- AnythingLLM GitHub — AnythingLLM is a full-stack AI desktop application
- AnythingLLM Docs — AnythingLLM documentation and setup
- MCP Documentation — Model Context Protocol specification
Source & Thanks
Created by Mintplex Labs. Licensed under MIT.
Mintplex-Labs/anything-llm — 35k+ stars