Configs · May 1, 2026 · 3 min read

RQ — Simple Python Job Queues Backed by Redis

RQ (Redis Queue) is a lightweight Python library for queueing and processing background jobs. It uses Redis as a broker and provides a minimal API for enqueuing functions with their arguments.

Introduction

RQ (Redis Queue) is a Python library that makes it simple to offload work to background workers. You enqueue any Python function with its arguments, and RQ serializes the call, stores it in Redis, and lets worker processes pick it up. It is designed for simplicity over features, making it a natural choice for projects that do not need Celery's full complexity.

What RQ Does

  • Enqueues any Python callable with its arguments for background execution
  • Processes jobs in separate worker processes that poll Redis for work
  • Tracks job status (queued, started, finished, failed) with result storage
  • Retries failed jobs with configurable retry counts and intervals
  • Provides a CLI for starting workers, monitoring queues, and inspecting jobs
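The lifecycle above (enqueue → queued → started → finished/failed, with the result stored on the job) can be sketched in plain Python. `SimpleQueue`, `Job`, and `work_one` here are hypothetical stand-ins for `rq.Queue`, `rq.job.Job`, and a worker's inner loop, not RQ's real API:

```python
from collections import deque

# Minimal in-memory sketch of RQ's job lifecycle. The deque stands in
# for the Redis list that backs each real queue.

class Job:
    def __init__(self, func, args, kwargs):
        self.func, self.args, self.kwargs = func, args, kwargs
        self.status = "queued"
        self.result = None
        self.exc_info = None

class SimpleQueue:
    def __init__(self):
        self._jobs = deque()  # stands in for the Redis list

    def enqueue(self, func, *args, **kwargs):
        job = Job(func, args, kwargs)
        self._jobs.append(job)
        return job

def work_one(queue):
    """Pop one job and run it, recording status and result (or failure)."""
    job = queue._jobs.popleft()  # a real worker blocks on BLPOP instead
    job.status = "started"
    try:
        job.result = job.func(*job.args, **job.kwargs)
        job.status = "finished"
    except Exception as exc:
        job.exc_info = exc
        job.status = "failed"
    return job

q = SimpleQueue()
job = q.enqueue(pow, 2, 10)
work_one(q)
print(job.status, job.result)  # finished 1024
```

With real RQ, the enqueue side and the worker side run in separate processes and only share the Redis instance; the sketch collapses both into one process to show the state transitions.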

Architecture Overview

RQ stores jobs as serialized Python objects (via pickle) in Redis lists. Each queue is a Redis list, and workers use BLPOP to wait for new jobs. When a worker picks up a job, it forks a child process to execute it, isolating each job from crashes. Results and exceptions are stored back in Redis with configurable TTLs. The architecture is intentionally simple: no routing, no task graphs, just queues and workers.
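The storage model can be illustrated with pickle and a deque playing the role of the Redis list (append for RPUSH, pop-left for BLPOP). The function names below are illustrative, not RQ's internals:

```python
import pickle
from collections import deque

# Sketch of the architecture described above: each queue is a list of
# pickled payloads; a worker pops one, deserializes it, and calls the
# function. A deque stands in for the Redis list.

redis_list = deque()

def rpush_job(func, *args):
    # RQ stores the callable plus its arguments as a serialized blob.
    redis_list.append(pickle.dumps((func, args)))

def blpop_and_run():
    payload = redis_list.popleft()  # a real worker blocks here via BLPOP
    func, args = pickle.loads(payload)
    # A real worker forks a child ("work horse") to run the job so a
    # crash cannot take down the worker itself; we just call it inline.
    return func(*args)

rpush_job(sum, [1, 2, 3])
print(blpop_and_run())  # 6
```

One consequence of this design worth knowing: because jobs are pickled, the worker process must be able to import the same function at the same module path as the enqueuing code.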

Self-Hosting & Configuration

  • Install via pip and ensure Redis is running locally or accessible over the network
  • Enqueue jobs from any Python code that can reach the same Redis instance
  • Start workers with rq worker, passing the queue names to listen on; run multiple workers for concurrency
  • Use rq-dashboard for a web-based monitoring UI
  • Configure job timeouts, result TTL, and retry behavior per job or per queue
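The retry behavior in the last bullet works in the spirit of the sketch below: re-run a failed job up to a maximum number of attempts, waiting between them. `run_with_retry` is a hypothetical helper for illustration, not part of RQ's API (real workers requeue the job in Redis rather than looping in place):

```python
import time

# Illustrative retry loop: up to max_retries re-runs of a failed job,
# sleeping `interval` seconds between attempts.

def run_with_retry(func, *args, max_retries=3, interval=0.0):
    attempts = 0
    while True:
        attempts += 1
        try:
            return func(*args), attempts
        except Exception:
            if attempts > max_retries:
                raise  # out of retries: surface the failure
            time.sleep(interval)

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds — a stand-in for a transient error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result, attempts = run_with_retry(flaky, max_retries=3)
print(result, attempts)  # ok 3
```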

Key Features

  • Minimal API that requires no boilerplate, decorators, or configuration files
  • Fork-based job isolation that protects workers from job-level crashes
  • Built-in job dependencies for simple workflow chains
  • Scheduled job execution with enqueue_at and enqueue_in
  • Lightweight footprint with Redis as the only external dependency
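The scheduled-execution feature (enqueue_in) boils down to: hold the job in a time-ordered structure until its run-at moment, then move it to the work queue. `ScheduledQueue` below is a hypothetical sketch of that idea using a heap, not how RQ stores scheduled jobs internally:

```python
import heapq
import time

# Sketch of enqueue_in-style scheduling: jobs carry a run-at timestamp
# and sit in a priority queue until due.

class ScheduledQueue:
    def __init__(self):
        self._scheduled = []  # heap of (run_at, seq, func, args)
        self._seq = 0         # tie-breaker so functions are never compared

    def enqueue_in(self, delay, func, *args):
        """Schedule func(*args) to become runnable after `delay` seconds."""
        heapq.heappush(self._scheduled,
                       (time.monotonic() + delay, self._seq, func, args))
        self._seq += 1

    def run_due(self):
        """Execute every job whose run-at time has passed."""
        results = []
        now = time.monotonic()
        while self._scheduled and self._scheduled[0][0] <= now:
            _, _, func, args = heapq.heappop(self._scheduled)
            results.append(func(*args))
        return results

sq = ScheduledQueue()
sq.enqueue_in(0.0, len, "hello")
sq.enqueue_in(60.0, len, "later")  # not due for another minute
print(sq.run_due())  # [5]
```

In real RQ a scheduler component performs the "move due jobs to the queue" step, and workers never see a job before its scheduled time.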

Comparison with Similar Tools

  • Celery — full-featured distributed task queue with routing and canvas; RQ trades features for simplicity
  • Dramatiq — modern Python task queue with built-in rate limiting; RQ has a simpler API
  • Huey — lightweight alternative that also supports SQLite as a backend; RQ is Redis-only
  • Sidekiq — Ruby background job processor; RQ is the Python equivalent in spirit
  • ARQ — async Python job queue; RQ uses traditional forking rather than asyncio

FAQ

Q: Does RQ support periodic or cron-like jobs? A: Not natively. Use rq-scheduler or an external cron to enqueue jobs on a schedule.

Q: Can RQ handle thousands of jobs per second? A: RQ handles moderate throughput well. For very high volumes, consider Celery or Dramatiq, which offer more concurrency options.

Q: Is RQ safe for production? A: Yes. RQ is used in production by many organizations. Job isolation via forking provides resilience against crashes.

Q: How do I monitor RQ? A: Use rq info on the command line or install rq-dashboard for a web-based view of queues, workers, and failed jobs.
