# Context7 — Up-to-Date Docs MCP for AI Editors

> MCP server that provides AI code editors with up-to-date library documentation. Eliminates hallucinations from outdated training data. Supports 1,000+ libraries. 51K+ stars.

## Install

Merge the JSON below into your `.mcp.json`:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

## Quick Use

Add the config above to your Claude Code, Cursor, or Windsurf MCP configuration. Then ask: "Using context7, show me how to use React Server Components."

---

## Intro

Context7 is an MCP server that gives AI code editors access to up-to-date, version-specific library documentation. Instead of relying on outdated training data, your AI assistant fetches real documentation for React, Next.js, Python, Node.js, and 1,000+ other libraries in real time. This eliminates code hallucinations caused by stale API knowledge. 51,000+ GitHub stars, MIT licensed.

**Best for**: Any developer using AI coding tools who wants accurate, current documentation

**Works with**: Claude Code, Cursor, Windsurf, Gemini CLI, and any other MCP client

---

## How It Works

1. You ask your AI assistant about a library (e.g., "How do I use the Next.js App Router?")
2. The Context7 MCP server fetches the latest documentation for that library version
3. The AI generates code based on current docs — not outdated training data

### Why It Matters

- **React 19** patterns differ from React 18 — training data may be stale
- **Next.js 15**'s App Router has different APIs than the Pages Router
- **Python 3.12** has features most models' training data predates

### Supported Libraries

1,000+ libraries, indexed and updated continuously: React, Next.js, Vue, Svelte, Angular, Node.js, Express, FastAPI, Django, Flask, TailwindCSS, Prisma, Drizzle, PostgreSQL, MongoDB, Redis, Docker, Kubernetes, Terraform, AWS SDK, and many more.

---

## FAQ

**Q: What is Context7?**

A: An MCP server providing AI code editors with real-time, up-to-date library documentation.
Eliminates hallucinations from stale training data. 1,000+ libraries supported. 51K+ stars.

**Q: Does it slow down my AI responses?**

A: Minimal latency — documentation is pre-indexed and cached. Most lookups complete in under 500 ms.

---

## Source & Thanks

> Created by [Upstash](https://github.com/upstash). Licensed under MIT.
> [upstash/context7](https://github.com/upstash/context7) — 51,000+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/80630bbc-db0f-4254-bed5-8e5b639e5a34
Author: MCP Hub
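
---

For readers curious what a lookup looks like on the wire: the server exposes two MCP tools, `resolve-library-id` (which maps a plain library name to a Context7 library ID) and `get-library-docs` (which fetches the docs). The tool names are taken from the upstream repository; the argument names and the library ID below are illustrative and may differ in your server version. A sketch of the second call as a JSON-RPC `tools/call` request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-library-docs",
    "arguments": {
      "context7CompatibleLibraryID": "/vercel/next.js",
      "topic": "app router"
    }
  }
}
```

In practice you never write this by hand: when a prompt mentions context7, the MCP client first calls `resolve-library-id` with the library name, then issues a request like the one above and injects the returned documentation into the model's context.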