MCP Configs · Apr 6, 2026 · 2 min read

Firecrawl MCP — Web Scraping Server for AI Agents

Official Firecrawl MCP server for AI agents to scrape, crawl, and extract structured data from any website. Supports batch scraping, search, and markdown extraction. 15,000+ stars.

TL;DR
Firecrawl MCP server gives AI agents the ability to scrape and extract structured web data.
§01

What it is

Firecrawl MCP is the official Model Context Protocol server for Firecrawl, a web scraping and crawling platform. It exposes Firecrawl capabilities as MCP tools that AI agents can call directly. Agents can scrape single pages, crawl entire sites, extract structured data, and search the web without custom scraping code.

This server targets developers building AI agents that need real-time web data. Whether your agent researches competitors, monitors prices, or gathers training data, Firecrawl MCP handles the scraping layer.

§02

How it saves time or tokens

Without Firecrawl MCP, an agent must parse raw HTML, handle JavaScript rendering, manage rate limits, and work around anti-bot measures. Firecrawl abstracts all of that into clean markdown or structured JSON output. The server's tool definitions cost an estimated 2,500 tokens per session, but the time saved over a hand-rolled scraping setup is significant, and batch scraping handles multiple URLs in a single call.
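To see why markdown output is cheaper to feed an LLM than raw HTML, here is a minimal sketch. The page content and the ~4-characters-per-token heuristic are both illustrative assumptions, not a real tokenizer:

```python
# Hypothetical page content: the same pricing info as raw HTML vs. markdown.
raw_html = (
    '<div class="pricing"><h2>Plans</h2>'
    '<ul><li><span class="name">Pro</span>'
    '<span class="price">$29/mo</span></li></ul></div>'
)
markdown = "## Plans\n\n- Pro: $29/mo"

def est_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

# The markdown version carries the same facts in a fraction of the tokens.
print(f"HTML: ~{est_tokens(raw_html)} tokens, markdown: ~{est_tokens(markdown)} tokens")
```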

§03

How to use

  1. Get a Firecrawl API key from firecrawl.dev.
  2. Configure the MCP server in your Claude Desktop or agent config.
  3. The agent can now call scraping tools like scrape_url, crawl_url, search, and extract.
Example Claude Desktop configuration:

{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key"
      }
    }
  }
}
§04

Example

Once configured, an agent can scrape a page:

# Agent calls the scrape_url tool
result = await mcp.call('firecrawl', 'scrape_url', {
    'url': 'https://example.com/pricing',
    'formats': ['markdown', 'structured_data']
})

# Result contains clean markdown and extracted data
print(result['markdown'])  # Clean text without HTML noise
print(result['structured_data'])  # Extracted pricing tables
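Structured extraction is most reliable with an explicit schema. A hedged sketch in the same style as above; the extract tool's parameter names and schema shape are assumptions, so check the Firecrawl docs for the exact keys:

```python
# Hypothetical: the 'schema' key and its JSON Schema shape are illustrative
result = await mcp.call('firecrawl', 'extract', {
    'url': 'https://example.com/pricing',
    'schema': {
        'type': 'object',
        'properties': {
            'plan_name': {'type': 'string'},
            'monthly_price': {'type': 'number'}
        }
    }
})
```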

Batch scraping multiple URLs:

result = await mcp.call('firecrawl', 'batch_scrape', {
    'urls': [
        'https://example.com/page1',
        'https://example.com/page2',
        'https://example.com/page3'
    ],
    'formats': ['markdown']
})
§05


§06

Common pitfalls

  • A Firecrawl API key is required. The free tier has rate limits that may not suit high-volume crawling.
  • JavaScript-heavy single-page applications may require the wait_for parameter to ensure content loads before extraction.
  • Batch scraping is asynchronous. For large batches, you need to poll for completion rather than expecting immediate results.
  • Structured data extraction works best when you provide a schema describing what fields to extract.
  • Some websites block automated access. Firecrawl handles many anti-bot measures, but heavily protected sites may still return incomplete data.
  • Review the official documentation before deploying to production to ensure compatibility with your specific environment and requirements.
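The asynchronous batch pitfall above can be sketched as a polling loop. This is a hedged example: the check_batch_status tool name, the job-id field, and the status values are assumptions, so consult the Firecrawl MCP docs for the exact shapes.

```python
import asyncio

async def wait_for_batch(mcp, job_id, interval=2.0, timeout=120.0):
    """Poll a batch scrape job until it completes or the timeout elapses.

    Assumes a hypothetical check_batch_status tool returning a dict with a
    'status' key ('completed' when done) and the scraped results in 'data'.
    """
    waited = 0.0
    while waited < timeout:
        status = await mcp.call('firecrawl', 'check_batch_status', {'id': job_id})
        if status.get('status') == 'completed':
            return status.get('data', [])
        await asyncio.sleep(interval)  # back off between polls
        waited += interval
    raise TimeoutError(f"batch job {job_id} did not complete within {timeout}s")
```

For large batches, a longer `interval` keeps you under the API's rate limits while polling.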

Frequently Asked Questions

What output formats does Firecrawl MCP support?

Firecrawl MCP can return content as clean markdown, raw HTML, structured data (JSON), or screenshots. Markdown is the most token-efficient format for feeding content to LLMs. Structured data extraction lets you define a schema for specific fields.

Do I need a paid Firecrawl plan?

Firecrawl offers a free tier with limited credits. For production use or high-volume scraping, a paid plan removes rate limits and adds features like priority processing and higher concurrency.

Can Firecrawl MCP crawl entire websites?

Yes. The crawl_url tool follows links from a starting URL and scrapes multiple pages. You can set depth limits, URL filters, and maximum page counts to control the crawl scope.
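A crawl call with those scope controls might look like the following sketch; the parameter names are assumptions, so check the Firecrawl docs for the exact keys:

```python
# Hypothetical parameter names for limiting crawl scope
result = await mcp.call('firecrawl', 'crawl_url', {
    'url': 'https://example.com',
    'max_depth': 2,               # follow links at most two hops from the start URL
    'limit': 50,                  # stop after 50 pages
    'include_paths': ['/docs/*']  # only crawl URLs matching this pattern
})
```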

How does it handle JavaScript-rendered pages?

Firecrawl uses headless browsers to render JavaScript before extracting content. This means single-page applications and dynamically loaded content are captured correctly, unlike simple HTTP scrapers.
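For especially slow single-page apps, the wait_for parameter mentioned in the pitfalls above gives the headless browser time to render before extraction. A hedged sketch; the millisecond unit is an assumption:

```python
result = await mcp.call('firecrawl', 'scrape_url', {
    'url': 'https://spa.example.com/dashboard',
    'formats': ['markdown'],
    'wait_for': 3000  # assumed: milliseconds to wait before extracting content
})
```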

Is Firecrawl MCP compatible with Claude Desktop?

Yes. Add the MCP server configuration to your Claude Desktop settings file. Once configured, Claude can call Firecrawl tools directly during conversations to fetch and analyze web content.


Source & Thanks

Created by Mendable. Licensed under AGPL-3.0.

firecrawl — ⭐ 15,000+

Thanks to the Firecrawl team for bringing web scraping superpowers to AI agents.
