MCP Configs · Apr 8, 2026 · 3 min read

AnythingLLM — All-in-One AI Desktop with MCP

Full-stack AI desktop app with RAG, agents, MCP support, and multi-model chat. AnythingLLM manages documents, embeddings, and vector stores in one private interface.

TL;DR
AnythingLLM bundles RAG, agents, MCP tools, and multi-model chat in one private desktop app.
§01

What it is

AnythingLLM is a full-stack AI desktop application that combines RAG (retrieval-augmented generation), AI agents, MCP server support, and multi-model chat in one interface. It manages documents, embeddings, and vector stores locally, keeping your data private. You can chat with your documents, run agents with tools, and switch between LLM providers.

This tool is for users who want a single application for all their AI needs without sending data to cloud services. It works with both cloud LLMs and local models.

§02

How it saves time or tokens

AnythingLLM eliminates the need to set up separate tools for document chat, RAG pipelines, and agent workflows: everything runs in one application with a unified interface. The built-in vector store removes the requirement for an external database, and MCP support lets agents use external tools without custom integration code. The estimated token cost is around 3,900 tokens per session.
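Where the RAG-related savings come from can be shown with a back-of-envelope sketch (word counts stand in as a crude token proxy; the document size, chunk size, and retrieval count are all hypothetical):

```python
def approx_tokens(text: str) -> int:
    # Crude proxy: roughly 1.3 tokens per English word.
    return int(len(text.split()) * 1.3)

document = "lorem " * 8000  # stand-in for an ~8,000-word document
words = document.split()

# Chunk into 300-word pieces, as a RAG pipeline would.
chunks = [" ".join(words[i:i + 300]) for i in range(0, len(words), 300)]

# Pretend retrieval picked the 4 most relevant chunks.
retrieved = " ".join(chunks[:4])

naive_prompt = approx_tokens(document)  # whole document in the prompt
rag_prompt = approx_tokens(retrieved)   # only the retrieved chunks
print(naive_prompt, rag_prompt)
```

With these made-up numbers, the retrieved-chunks prompt is under a sixth the size of stuffing the whole document into context; real ratios depend on chunking and retrieval settings.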

§03

How to use

  1. Download AnythingLLM for your platform.
  2. Configure your LLM provider (OpenAI, Anthropic, Ollama, etc.).
  3. Create a workspace and upload documents.
  4. Chat with your documents or configure agents.
# Or run via Docker
docker pull mintplexlabs/anythingllm

docker run -d \
  -p 3001:3001 \
  -v anythingllm_storage:/app/server/storage \
  mintplexlabs/anythingllm

# Access at http://localhost:3001
§04

Example

Configuring an MCP server in AnythingLLM:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/docs"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "your-token"
      }
    }
  }
}

Agents in AnythingLLM can use these MCP tools to read files, search GitHub, and more.
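Under the hood, MCP servers like the ones configured above speak JSON-RPC 2.0 over stdin/stdout. A minimal sketch of the first message a client sends, the initialize request (the protocol version and client info values here are illustrative):

```python
import json

# MCP clients and servers exchange JSON-RPC 2.0 messages over stdio.
# This builds the client's opening "initialize" request; the
# protocolVersion and clientInfo values are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Over the stdio transport, each message is one line of JSON.
wire_message = json.dumps(initialize_request) + "\n"
print(wire_message.strip())
```

AnythingLLM handles this handshake for you when it spawns the configured `command`; the sketch only shows what travels over the pipe.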

§05

Common pitfalls

  • AnythingLLM runs locally and uses your machine's resources. Large document collections require significant RAM and disk space for embeddings.
  • The built-in vector store is suitable for personal use. For team or production workloads, consider connecting an external vector database.
  • MCP server support requires the servers to be installed on your machine. Ensure Node.js and required packages are available.
  • Model switching changes behavior. Responses vary significantly between providers. Test important workflows on your target model.
  • Docker deployment is recommended for consistent environments. Native installs may have dependency conflicts.
  • Review the official documentation before deploying to production to ensure compatibility with your specific environment and requirements.
  • Start with default settings and customize incrementally. Changing too many configuration options at once makes debugging harder.

Frequently Asked Questions

Which LLM providers does AnythingLLM support?

AnythingLLM supports OpenAI, Anthropic Claude, Google Gemini, Ollama (local models), LM Studio, Azure OpenAI, and many other providers. You configure the provider in settings and switch between them per workspace.

How does the RAG system work?

Upload documents (PDF, DOCX, TXT, code files) to a workspace. AnythingLLM chunks, embeds, and stores them in the built-in vector database. When you ask questions, relevant chunks are retrieved and included in the prompt context.
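The chunk-embed-retrieve loop described above can be sketched with a toy bag-of-words embedding (a real system uses neural embeddings and a vector database; this only shows the shape of the pipeline):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts (stand-in for a neural model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Chunk the document (here: one chunk per sentence).
chunks = [
    "AnythingLLM stores embeddings in a local vector database.",
    "The weather today is sunny with a light breeze.",
    "Agents can call MCP servers as tools during a chat.",
]

# 2. Embed and index the chunks.
index = [(chunk, embed(chunk)) for chunk in chunks]

# 3. Retrieve the chunk most similar to the question.
question = "Where are embeddings stored?"
best_chunk = max(index, key=lambda item: cosine(embed(question), item[1]))[0]
print(best_chunk)  # the vector-database sentence is the closest match
```

The retrieved chunk is then prepended to the prompt so the model answers from your documents rather than from memory alone.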

Can I use local models with AnythingLLM?

Yes. AnythingLLM integrates with Ollama and LM Studio for fully local LLM inference. Your data never leaves your machine. This is ideal for sensitive documents and air-gapped environments.

Does AnythingLLM support MCP?

Yes. AnythingLLM supports the Model Context Protocol, allowing agents to use MCP servers as tools. Configure MCP servers in the settings, and agents can call them during conversations.

Is AnythingLLM open-source?

Yes. AnythingLLM is open-source and available on GitHub. You can self-host it, modify it, and contribute to the project.


Source & Thanks

Created by Mintplex Labs. Licensed under MIT.

Mintplex-Labs/anything-llm — 35k+ stars
