# Firecrawl MCP — Web Search & Scrape Tools

> Add Firecrawl MCP to your agent to search, scrape, and extract full-page content. Run via npx with an API key; fits Cursor, Claude Code, VS Code.

## Quick Use

1) Install

```bash
export FIRECRAWL_API_KEY=fc-YOUR_API_KEY && npx -y firecrawl-mcp
```

2) Run

Ask your agent: "Search the web for X and summarize the key points with sources."

3) Verify

Confirm the MCP server starts and your client lists the Firecrawl tools.

---

## Intro

**Best for**: agents that need reliable web data (search + scraping) and want a single MCP tool surface instead of ad-hoc curl + copy/paste.

**Works with**: MCP-capable clients (Cursor, Claude Code, VS Code), Node.js + npx, a Firecrawl API key.

**Setup time**: ~6 minutes

### Quant Data

- Runs via `npx -y firecrawl-mcp` (repo)
- Setup time ~6 minutes

---

## How to Use It Well

Start by using Firecrawl for one concrete task: search → scrape 3–5 URLs → extract structured bullets. Once the output is stable, formalize it as a reusable agent instruction.

### Adoption Checklist

- Start with one real task and keep the scope narrow
- Capture a baseline: time-to-first-success and output quality
- Version your config/skills so teammates stay in sync

### Guardrails

Keep prompts strict: always ask for sources and limit the number of pages. Web tools can explode context and cost if you let the agent wander.

### FAQ

**Q: Do I need self-hosting?**
A: No. The default mode uses the cloud API with `FIRECRAWL_API_KEY`. Self-hosting is optional if you run your own endpoint.

**Q: Which clients does it work with?**
A: Any MCP client that can run stdio command servers; the repo includes Cursor and VS Code configuration examples.
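The quickstart command above can also be registered directly in a client's MCP config instead of being run by hand. A minimal `.mcp.json` entry might look like the sketch below; the server name `firecrawl-mcp` and the key placeholder are illustrative — check your client's docs for the exact file location and schema:

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-YOUR_API_KEY"
      }
    }
  }
}
```

With this in place, the client launches the server over stdio on startup and the Firecrawl tools appear in its tool list without a separate terminal session.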
**Q: How do I control cost?**
A: Use the repo's credit-monitoring env vars and keep tasks narrow (domain + query + output format) to reduce retries and page volume.

---

## Source & Thanks

> GitHub: https://github.com/firecrawl/firecrawl-mcp-server
> Owner avatar: https://avatars.githubusercontent.com/u/135057108?v=4
> License (SPDX): MIT
> GitHub stars (verified via `api.github.com/repos/firecrawl/firecrawl-mcp-server`): 6,276

---

Source: https://tokrepo.com/en/workflows/firecrawl-mcp-web-search-scrape-tools
Author: MCP Hub