
Scrapy — Fast High-Level Web Crawling Framework for Python

Scrapy is the most battle-tested web scraping framework for Python. It handles concurrency, retries, throttling, cookies, and export pipelines — letting you write spiders that scale from one page to millions with the same code.

TL;DR
Scrapy handles concurrent scraping, retries, and data pipelines so you write just the spider.

What it is

Scrapy is an application framework for crawling websites and extracting structured data from their pages. You define how to follow links and what to extract; Scrapy handles the rest: scheduling, request deduplication, middleware, and output formatting.

Scrapy targets data engineers, researchers, and developers who need structured data from websites. It is an asynchronous framework built on Twisted, capable of handling thousands of concurrent requests while respecting rate limits and site policies.


Why it saves time or tokens

Building a web scraper from scratch requires handling HTTP connections, retries, rate limiting, cookie management, and data storage. Scrapy provides all of this as configuration. You focus exclusively on the extraction logic. When using AI assistants to build scrapers, Scrapy's well-defined Spider class and Item/Pipeline pattern produce consistent, working code because the framework constraints reduce ambiguity.
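Scrapy's Item and Pipeline pattern looks roughly like this (a minimal sketch; ProductItem, ValidationPipeline, and the field names are illustrative, not from a real project):

import scrapy
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class ProductItem(scrapy.Item):
    # Declared fields double as documentation of the expected schema.
    name = scrapy.Field()
    price = scrapy.Field()
    url = scrapy.Field()

class ValidationPipeline:
    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        # Discard items whose selectors silently matched nothing.
        if not adapter.get('name') or not adapter.get('price'):
            raise DropItem(f'missing required field in {item!r}')
        return item

The pipeline only runs once registered in settings, e.g. ITEM_PIPELINES = {'myproject.pipelines.ValidationPipeline': 300}.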


How to use

  1. Install Scrapy: pip install scrapy
  2. Create a project: scrapy startproject myproject
  3. Create a spider: scrapy genspider example example.com, then fill in its parse method (a generated skeleton is sketched below)
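
For reference, genspider produces a skeleton roughly like the following (exact boilerplate varies by Scrapy version); you fill in parse and start the crawl with scrapy crawl example:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['https://example.com']

    def parse(self, response):
        # Extraction and link-following logic goes here.
        pass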

Example

import scrapy

class ProductSpider(scrapy.Spider):
    name = 'products'
    start_urls = ['https://example.com/products']

    def parse(self, response):
        # Yield one item per product card on the listing page.
        for product in response.css('div.product'):
            yield {
                'name': product.css('h2::text').get(),
                'price': product.css('span.price::text').get(),
                'url': product.css('a::attr(href)').get(),
            }

        # Follow pagination until no "next" link remains.
        next_page = response.css('a.next::attr(href)').get()
        if next_page:
            yield response.follow(next_page, self.parse)
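
Run it with scrapy crawl products -o products.json to append scraped items to a JSON file, or -O to overwrite the file on each run.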

Component     Purpose
Spider        Define crawl logic and extraction
Item          Structured data container
Pipeline      Process, validate, store items
Middleware    Modify requests/responses
Settings      Configure concurrency, delays

Common pitfalls

  • Scrapy runs asynchronously; calling blocking libraries (requests, time.sleep) inside a spider blocks the Twisted reactor and stalls every in-flight request
  • Websites change their HTML structure; selectors break silently and return empty data rather than raising errors, so add validation in pipelines
  • Aggressive crawling gets your IP blocked; always configure DOWNLOAD_DELAY and CONCURRENT_REQUESTS_PER_DOMAIN in settings (see the sketch below)
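
A minimal polite-crawling configuration in settings.py might look like this (values are illustrative; tune them per target site):

# settings.py: throttle the crawl so target sites are not hammered
DOWNLOAD_DELAY = 1.0                # seconds between requests to the same site
CONCURRENT_REQUESTS_PER_DOMAIN = 4  # cap on parallel requests per domain
ROBOTSTXT_OBEY = True               # skip URLs disallowed by robots.txt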

Frequently Asked Questions

Can Scrapy handle JavaScript-rendered pages?

Scrapy alone does not execute JavaScript. For JS-rendered pages, integrate Scrapy with Splash (a headless browser) or Playwright via scrapy-playwright. These middleware solutions render JavaScript before Scrapy extracts data, though they add overhead compared to plain HTTP scraping.
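
As a sketch of that setup with scrapy-playwright (assuming the plugin is installed; check its documentation for the current handler paths), downloads are routed through the Playwright handler and individual requests are flagged for rendering:

# settings.py: route HTTP and HTTPS downloads through Playwright
DOWNLOAD_HANDLERS = {
    'http': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
    'https': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
}
TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'

# in the spider: mark requests that need a rendered page
yield scrapy.Request(url, meta={'playwright': True})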

How does Scrapy handle rate limiting?

Scrapy has built-in settings for DOWNLOAD_DELAY (seconds between requests), CONCURRENT_REQUESTS (total parallel requests), and CONCURRENT_REQUESTS_PER_DOMAIN. The AutoThrottle extension dynamically adjusts delays based on server response times, automatically slowing down when the target site is overloaded.
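
Enabling AutoThrottle takes a few settings (a sketch with illustrative values):

# settings.py: let AutoThrottle adapt delays to observed server latency
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1.0          # initial download delay in seconds
AUTOTHROTTLE_MAX_DELAY = 30.0           # ceiling for the adaptive delay
AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0   # average parallel requests per site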

What output formats does Scrapy support?

Scrapy exports data to JSON, JSON Lines, CSV, XML, and custom formats through Feed Exports. You configure the output format and destination in settings or on the command line. For databases, write a custom Pipeline that inserts items into PostgreSQL, MongoDB, or any other store.
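
For example, the FEEDS setting declares outputs in settings.py (paths and formats here are illustrative); the -o and -O command-line flags do the same for one-off runs:

# settings.py: write items to two feeds at once
FEEDS = {
    'products.jsonl': {'format': 'jsonlines', 'encoding': 'utf8'},
    'products.csv': {'format': 'csv'},
}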

How does Scrapy compare to BeautifulSoup?

BeautifulSoup is a parsing library that extracts data from HTML. Scrapy is a complete framework that handles crawling, scheduling, concurrency, and data pipelines. BeautifulSoup is simpler for one-off page parsing. Scrapy is better for large-scale crawling with many pages, retries, and structured output.

Can Scrapy respect robots.txt?

Yes. Projects generated by scrapy startproject enable the ROBOTSTXT_OBEY setting, so Scrapy downloads and parses robots.txt before crawling and skips disallowed URLs (the framework-level default is False). You can disable the setting for legitimate use cases, but always check the site's terms of service.
