# APScheduler — Advanced Python Scheduler for Background Jobs

> APScheduler is a Python library for scheduling jobs to run at specified intervals, cron expressions, or specific dates, with support for persistent job stores and multiple execution backends.

## Install

```bash
pip install apscheduler
```

## Quick Use

```python
from apscheduler.schedulers.blocking import BlockingScheduler

def my_job():
    print("Job executed")

scheduler = BlockingScheduler()
scheduler.add_job(my_job, "interval", seconds=30)
scheduler.start()
```

## Introduction

APScheduler (Advanced Python Scheduler) is a Python library that lets you schedule Python functions to run periodically or at specific times. It works as an in-process scheduler that can persist jobs across restarts using database-backed job stores.

## What APScheduler Does

- Schedules Python callables to run on intervals, cron expressions, or fixed dates
- Persists scheduled jobs across process restarts using SQL, MongoDB, or Redis stores
- Runs jobs in threads, processes, or async event loops depending on the backend
- Supports adding, modifying, pausing, and removing jobs at runtime
- Prevents duplicate execution with configurable misfire grace periods

## Architecture Overview

APScheduler consists of four component types: triggers (when to run), job stores (where to persist), executors (how to run), and schedulers (the coordinator). The scheduler polls job stores for due jobs, dispatches them to executors, and updates each job's next run time. Triggers can be date-based, interval-based, or cron-based. The modular design lets you swap any component without changing application code.
## Self-Hosting & Configuration

- Install with `pip install apscheduler` and choose your scheduler type
- Use `BackgroundScheduler` for integration into existing applications
- Use `AsyncIOScheduler` for async/await applications with asyncio
- Configure a `SQLAlchemyJobStore` to persist jobs in PostgreSQL or SQLite
- Set `max_instances` per job to prevent overlapping executions

## Key Features

- Three trigger types: date (one-shot), interval (periodic), and cron (complex schedules)
- Multiple concurrent job stores allow different persistence strategies per job
- Thread pool executor runs jobs without blocking the main application
- Event listeners notify your code when jobs execute, fail, or miss their window
- Timezone-aware scheduling handles DST transitions correctly

## Comparison with Similar Tools

- **Celery Beat** — distributed task scheduling tied to the Celery ecosystem
- **schedule** — simpler API but no persistence, no cron expressions, single-threaded
- **Dramatiq** — task queue focused on reliability, not in-process scheduling
- **Huey** — lightweight task queue with Redis, simpler but fewer scheduling options
- **cron** — OS-level scheduler, no Python integration or dynamic job management

## FAQ

**Q: Can APScheduler replace Celery?**
A: APScheduler handles scheduling but not distributed task queuing. For single-process periodic jobs it is simpler than Celery. For distributed workloads across multiple workers, Celery is the better fit.

**Q: How do I prevent a job from running twice if the previous run is still going?**
A: Set `max_instances=1` when adding the job. APScheduler will skip or queue subsequent triggers until the running instance completes.

**Q: Does APScheduler survive process restarts?**
A: Yes, if you configure a persistent job store like `SQLAlchemyJobStore` or `MongoDBJobStore`. Jobs and their next run times are saved to the database and restored on startup.

**Q: Can I use APScheduler with FastAPI or Django?**
A: Yes.
Use `BackgroundScheduler` or `AsyncIOScheduler` and start it in your application's startup hook. Ensure you shut it down gracefully on application exit.

## Sources

- https://github.com/agronholm/apscheduler
- https://apscheduler.readthedocs.io/en/latest/