# TimescaleDB — Time-Series Database Built on PostgreSQL

> TimescaleDB is a PostgreSQL extension that turns Postgres into a purpose-built time-series database. You get hyper-fast ingestion, automatic partitioning via hypertables, continuous aggregates, and full SQL — without leaving the Postgres ecosystem.

## Quick Use

```bash
# Docker (full Postgres + TimescaleDB)
docker run -d --name tsdb -p 5432:5432 -e POSTGRES_PASSWORD=pw timescale/timescaledb-ha:pg16

# Connect
psql -h localhost -U postgres
```

```sql
-- Create the extension in your database
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Regular Postgres table
CREATE TABLE metrics (
    time   TIMESTAMPTZ NOT NULL,
    host   TEXT NOT NULL,
    metric TEXT NOT NULL,
    value  DOUBLE PRECISION
);

-- Turn it into a hypertable (auto-partitioned by time)
SELECT create_hypertable('metrics', 'time');
CREATE INDEX ON metrics (host, time DESC);

-- Insert millions of rows with normal SQL
INSERT INTO metrics (time, host, metric, value)
VALUES (NOW(), 'web1', 'cpu', 0.37);
```

## Introduction

TimescaleDB is what happens when a team asks "what if time-series had all of PostgreSQL behind it?" The answer is an extension that transforms Postgres into a first-class time-series database — with chunking, compression, continuous aggregates, retention policies, and Postgres features (JSONB, full-text search, PostGIS) all in one engine.

With over 22,000 GitHub stars, TimescaleDB is used by Bloomberg, Walmart, Toyota, and thousands of IoT/observability teams. It's the path of least friction for anyone who already knows SQL and doesn't want to stand up a dedicated TSDB.

## What TimescaleDB Does

TimescaleDB adds **hypertables** — logical tables auto-partitioned into chunks by time (and optionally by a space dimension such as host or device). Queries look exactly like regular SQL; under the hood, chunks outside the queried time range are skipped via time-range pruning. Continuous aggregates materialize pre-computed rollups.
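A minimal sketch of what a chunk-pruned query looks like in practice, against the `metrics` hypertable defined above (the 5-minute bucket and 24-hour window are illustrative choices, not requirements):

```sql
-- Typical dashboard query: 5-minute CPU averages for the last day.
-- The planner only scans chunks overlapping the WHERE time range;
-- older chunks are excluded before execution.
SELECT time_bucket('5 minutes', time) AS bucket,
       host,
       AVG(value) AS avg_cpu
FROM metrics
WHERE metric = 'cpu'
  AND time > NOW() - INTERVAL '24 hours'
GROUP BY bucket, host
ORDER BY bucket DESC;
```

Running the same statement under `EXPLAIN` shows which chunks survive exclusion.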
Columnar compression (`ALTER TABLE ... SET (timescaledb.compress)`) shrinks old chunks by 90%+.

## Architecture Overview

```
[PostgreSQL] + TimescaleDB extension
        |
[Hypertable abstraction]  one table name, N chunks
    chunk_1 (Jan 1–7)
    chunk_2 (Jan 8–14)
    chunk_3 (Jan 15–21)  <-- compressed, columnar
    chunk_4 (Jan 22–28)  <-- hot, row-store
        |
[Query Planner]  time-range pruning + chunk exclusion
        |
[Continuous Aggregates]  materialized views of rollups
    refresh_policy keeps them fresh
        |
[Retention + compression policies]
    drop chunks > 30 days, compress > 7 days
```

## Self-Hosting & Configuration

```sql
-- Continuous aggregate: hourly avg from raw metrics
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       host,
       metric,
       AVG(value) AS avg_value,
       MAX(value) AS max_value
FROM metrics
GROUP BY bucket, host, metric;

SELECT add_continuous_aggregate_policy('metrics_hourly',
    start_offset      => INTERVAL '1 day',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '15 minutes'
);

-- Compression policy: compress chunks older than 7 days
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'host, metric',
    timescaledb.compress_orderby   = 'time DESC'
);
SELECT add_compression_policy('metrics', INTERVAL '7 days');

-- Retention policy: drop chunks older than 1 year
SELECT add_retention_policy('metrics', INTERVAL '1 year');
```

## Key Features

- **PostgreSQL superset** — all Postgres features, drivers, and tools work
- **Hypertables** — auto-partitioning by time (and space)
- **Continuous aggregates** — incrementally maintained rollups
- **Columnar compression** — 90%+ size reduction for old data
- **Retention policies** — drop old chunks automatically
- **Hyperfunctions** — `time_bucket`, `approx_percentile`, gap filling
- **PostGIS + JSONB** — mix time-series with geo and document data
- **Replication + HA** — Patroni, Timescale Cloud, logical replication

## Comparison with Similar Tools

| Feature | TimescaleDB | InfluxDB | VictoriaMetrics | ClickHouse | QuestDB |
|---|---|---|---|---|---|
| Language | PostgreSQL SQL | Flux / InfluxQL | PromQL / MetricsQL | SQL | SQL |
| Ingestion rate | High | Very High | Very High | Extremely High | Very High |
| Query complexity | Full SQL + joins | Limited | Metrics-oriented | Full SQL | SQL (subset) |
| Compression | 90%+ | Good | Excellent | Best | Good |
| Ecosystem | Postgres (huge) | Own | Prometheus | Own | Own |
| Best For | Relational + time-series | IoT/metrics-only | Prometheus replacement | Analytics at scale | Fast simple ingest |

## FAQ

**Q: TimescaleDB vs InfluxDB — which should I use?**
A: TimescaleDB if you value SQL, joins with relational data, and the Postgres ecosystem (drivers, BI tools, Postgres extensions). InfluxDB if you want a purpose-built TSDB with its own query language and are happy outside Postgres.

**Q: Does Timescale slow down regular Postgres queries?**
A: No. Non-hypertables use Postgres's regular planner. Hypertables add overhead only for time-series tables, and typically speed up time-range queries significantly via chunk exclusion.

**Q: Can I use Timescale on managed Postgres (RDS, Cloud SQL)?**
A: AWS RDS and GCP Cloud SQL do not offer the Timescale extension. Use Timescale Cloud (managed) or self-host on EC2/GCE. Or pick a managed Postgres that includes Timescale (e.g., Aiven).

**Q: Is TimescaleDB open source?**
A: Mostly. The core is Apache-2.0; the Community Edition adds features (compression policies, continuous-aggregate policies, multi-node, gap filling) under the source-available Timescale License (TSL). The free Community Edition is sufficient for most teams.

## Sources

- GitHub: https://github.com/timescale/timescaledb
- Docs: https://docs.timescale.com
- Company: Timescale
- License: Apache-2.0 (core) + TSL (extras)

---
Source: https://tokrepo.com/en/workflows/08aa28bb-37d2-11f1-9bc6-00163e2b0d79
Author: Script Depot