MCP Configs · Apr 8, 2026 · 3 min read

AnythingLLM — All-in-One AI Desktop with MCP

Full-stack AI desktop app with RAG, agents, MCP support, and multi-model chat. AnythingLLM manages documents, embeddings, and vector stores in one private interface.

Quick Use

Use it first, then decide how deep to go

Here is what to download, install, and run first.

  1. Download from anythingllm.com (Mac/Windows/Linux)
  2. Choose your LLM provider (OpenAI, Anthropic, Ollama, or 15+ others)
  3. Upload documents → Ask questions with full RAG
```shell
# Or run via Docker
docker pull mintplexlabs/anythingllm
docker run -p 3001:3001 mintplexlabs/anythingllm
```
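For a deployment that survives restarts, a minimal Compose file is a common approach. This is a sketch: the host storage path is an assumption you should adapt, and `STORAGE_DIR` is the container-side storage variable used by the official image (verify against the project's Docker docs for your version):

```yaml
# docker-compose.yml — minimal sketch; host volume path is an assumption
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    ports:
      - "3001:3001"
    volumes:
      # persist documents, vector data, and settings across restarts
      - ./anythingllm-storage:/app/server/storage
    environment:
      - STORAGE_DIR=/app/server/storage
```

Without a mounted volume, uploaded documents and embeddings live only inside the container and are lost when it is removed.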

What is AnythingLLM?

AnythingLLM is a full-stack AI application that combines chat, RAG (document Q&A), agents, and MCP server support into one desktop app. Upload PDFs, websites, or code — AnythingLLM handles embedding, vector storage, and retrieval automatically. Connect any LLM provider, use built-in agents with tool calling, and extend functionality through MCP servers. Everything runs privately on your machine.

Answer-Ready: AnythingLLM is an all-in-one AI desktop app with chat, RAG, agents, and MCP support. Upload documents, connect any LLM, ask questions with automatic retrieval. Built-in vector database, multi-user support, and agent workspace. 35k+ GitHub stars.

Best for: Teams wanting private, self-hosted AI with document Q&A. Works with: OpenAI, Anthropic Claude, Ollama, Azure, and 15+ LLM providers. Setup time: Under 3 minutes.

Core Features

1. Document RAG (Zero Config)

Upload any document type and ask questions:

  • PDF, DOCX, TXT, CSV
  • Websites (auto-scrape)
  • YouTube transcripts
  • GitHub repos
  • Confluence, Notion exports

2. MCP Server Support

Connect MCP servers in settings with a standard `mcpServers` configuration:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```
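Servers that need credentials, such as the GitHub server, take them through an `env` block in the same config. The token value below is a placeholder; `GITHUB_PERSONAL_ACCESS_TOKEN` is the variable the reference GitHub MCP server documents:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```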

3. Multi-Provider LLM Support

| Provider | Models |
| --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini |
| Anthropic | Claude Sonnet, Claude Haiku |
| Ollama | Llama, Mistral, Gemma (local) |
| Azure | Azure OpenAI |
| AWS Bedrock | All Bedrock models |
| Google | Gemini Pro |
| LM Studio | Any local model |
| OpenRouter | 100+ models |

4. Agent Workspace

Built-in agent capabilities:

  • Web browsing and search
  • Code execution
  • File management
  • RAG-enhanced responses
  • MCP tool calling

5. Multi-User & Permissions

Admin → Manage users, workspaces, models
Manager → Create workspaces, upload docs
Default → Chat within assigned workspaces

6. Built-in Vector Database

No external setup needed — AnythingLLM includes LanceDB. Also supports:

  • Pinecone
  • Chroma
  • Weaviate
  • Qdrant
  • Milvus
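In self-hosted deployments, switching off the built-in LanceDB to an external store is an environment-level setting. A sketch assuming Qdrant, with variable names taken from the server's `.env` example (confirm the exact names for your version):

```shell
# .env fragment — variable names assumed from the server's .env example
VECTOR_DB="qdrant"
QDRANT_ENDPOINT="http://localhost:6333"
QDRANT_API_KEY="<optional-api-key>"
```

Documents embedded before the switch are stored in the old vector database, so re-embed workspaces after changing providers.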

Use Cases

| Use Case | How |
| --- | --- |
| Company Wiki Q&A | Upload docs → RAG workspace |
| Code Assistant | Connect GitHub MCP + Ollama |
| Research | Upload papers → Ask questions |
| Customer Support | Upload knowledge base → Agent |

FAQ

Q: Is it truly private? A: Yes. The desktop app can run fully locally: use Ollama for local models and the built-in LanceDB for vectors, and nothing leaves your machine.

Q: How does MCP integration work? A: Configure MCP servers in settings. Agents can call MCP tools alongside built-in tools for extended capabilities.

Q: Can multiple people use it? A: Yes, multi-user support with role-based access control. Deploy via Docker for team use.
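For team deployments, chats can also be driven programmatically over the developer API. This is a hedged sketch assuming the `/api/v1/workspace/{slug}/chat` endpoint and an API key generated in the instance settings; the workspace slug `my-docs` is a placeholder:

```shell
# Sketch: endpoint path and payload shape assumed from the developer API docs
curl -s http://localhost:3001/api/v1/workspace/my-docs/chat \
  -H "Authorization: Bearer $ANYTHINGLLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize the onboarding guide", "mode": "chat"}'
```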


Source & Thanks

Created by Mintplex Labs. Licensed under MIT.

Mintplex-Labs/anything-llm — 35k+ stars
