Scripts · Apr 9, 2026 · 2 min read

Great Expectations — Data Validation for AI Pipelines

Test your data like you test code. Validate data quality in AI/ML pipelines with expressive assertions, auto-profiling, and data docs. Apache-2.0, 11,400+ stars.

TL;DR
Great Expectations tests your data like you test code with expressive assertions, profiling, and pipeline integration.
§01

What it is

Great Expectations is a Python library for validating, documenting, and profiling data quality. It provides expressive expectation assertions (like 'expect column values to be between 0 and 100'), automatic data profiling, and human-readable data documentation. Great Expectations integrates with data pipelines (Airflow, dbt, Spark) to catch data quality issues before they corrupt ML models or analytics.

Great Expectations is designed for data engineers, ML engineers, and analytics teams who need to ensure data quality in production pipelines.

§02

How it saves time or tokens

Bad data silently corrupts ML models and analytics dashboards. Debugging data quality issues after the fact is time-consuming and expensive. Great Expectations catches problems at ingestion time: null values where there should be none, values outside expected ranges, unexpected schema changes, and duplicate records. Automated profiling generates baseline expectations from existing data, so you do not have to write every assertion manually.
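The profiling idea can be sketched without the library itself: derive range and null baselines from a known-good sample, then check new batches against them. A minimal pandas sketch of the concept (function and column names here are illustrative, not part of the Great Expectations API):

```python
import pandas as pd

def profile_baseline(df: pd.DataFrame) -> dict:
    """Derive simple range/null baselines from a known-good sample."""
    baseline = {}
    for col in df.select_dtypes('number').columns:
        baseline[col] = {
            'min': df[col].min(),
            'max': df[col].max(),
            'allow_null': bool(df[col].isna().any()),
        }
    return baseline

def check_batch(df: pd.DataFrame, baseline: dict) -> list[str]:
    """Return human-readable violations for a new batch."""
    problems = []
    for col, rules in baseline.items():
        if not rules['allow_null'] and df[col].isna().any():
            problems.append(f'{col}: unexpected nulls')
        if df[col].min() < rules['min'] or df[col].max() > rules['max']:
            problems.append(f'{col}: values outside [{rules["min"]}, {rules["max"]}]')
    return problems

good = pd.DataFrame({'amount': [10, 250, 990]})
baseline = profile_baseline(good)
print(check_batch(pd.DataFrame({'amount': [-5, 100]}), baseline))
# → ['amount: values outside [10, 990]']
```

Great Expectations' auto-profiler does the same thing at scale, emitting real expectation objects you can then tighten or loosen by hand.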

§03

How to use

  1. Install Great Expectations:
pip install great_expectations
  2. Initialize in your project:
great_expectations init
  3. Create expectations and validate:
import great_expectations as gx

# Fluent API (GX 0.16+); exact method names vary across versions.
context = gx.get_context()
validator = context.sources.pandas_default.read_csv('orders.csv')

validator.expect_column_values_to_not_be_null('order_id')
validator.expect_column_values_to_be_between('amount', min_value=0, max_value=10000)
validator.expect_column_values_to_be_unique('order_id')

results = validator.validate()
print(f'Success: {results.success}')
§04

Example

Integrating validation into a data pipeline:

import great_expectations as gx
import pandas as pd

def validate_orders(df: pd.DataFrame) -> bool:
    # API shown follows the GX 1.x fluent style; earlier releases differ.
    context = gx.get_context()

    # Register the in-memory DataFrame as a batch
    data_source = context.data_sources.add_pandas('pipeline')
    asset = data_source.add_dataframe_asset(name='orders')
    batch_def = asset.add_batch_definition_whole_dataframe('orders_batch')
    batch = batch_def.get_batch(batch_parameters={'dataframe': df})

    # Define expectations
    suite = gx.ExpectationSuite(name='orders_suite')
    suite.add_expectation(
        gx.expectations.ExpectColumnValuesToNotBeNull(column='order_id')
    )
    suite.add_expectation(
        gx.expectations.ExpectColumnValuesToBeBetween(
            column='amount', min_value=0, max_value=50000
        )
    )

    results = batch.validate(suite)
    if not results.success:
        print(f'Validation failed: {results.statistics}')
    return results.success
§05

Common pitfalls

  • Writing expectations that are too specific to current data. Hard-coding exact row counts or precise value distributions makes expectations brittle. Use range-based expectations that accommodate normal data growth.
  • Not integrating validation into the pipeline. Running Great Expectations manually provides one-time insight. Integrate it into Airflow/dbt/Spark to catch issues automatically on every pipeline run.
  • Ignoring the data docs feature. Great Expectations generates HTML documentation of your data quality. Share it with stakeholders so they can see data health without running code.
  • Starting with an overly complex configuration instead of defaults. Begin with the minimal setup, verify it works, then customize incrementally. This approach catches configuration errors early and keeps troubleshooting straightforward.
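The first pitfall can be made concrete with a plain-Python sketch of the two checking styles (the 0.5x–2x band is an illustrative choice, not a Great Expectations default):

```python
# Brittle: an exact row count breaks as soon as the table grows.
def expect_exact_rows(n_rows: int, expected: int = 10_000) -> bool:
    return n_rows == expected

# Robust: a tolerance band around the expected volume survives
# normal data growth while still catching order-of-magnitude anomalies.
def expect_rows_in_band(n_rows: int, expected: int = 10_000) -> bool:
    return expected // 2 <= n_rows <= expected * 2

print(expect_exact_rows(10_500))    # → False (fails on routine growth)
print(expect_rows_in_band(10_500))  # → True  (band absorbs variation)
print(expect_rows_in_band(150))     # → False (still catches a broken load)
```

In Great Expectations terms, this is the difference between pinning a value exactly and using a between-style expectation with min/max bounds.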

For teams evaluating this tool, the time saved on initial setup alone often justifies adoption. The well-documented API and active community mean most common questions have already been answered, which shortens the learning curve and reduces the tokens spent explaining basic usage to AI assistants.

Frequently Asked Questions

What is an expectation in Great Expectations?

An expectation is a declarative assertion about your data. For example, expect_column_values_to_not_be_null asserts that a column has no null values. Great Expectations provides over 300 built-in expectations covering completeness, uniqueness, ranges, patterns, and more.

Can Great Expectations profile data automatically?

Yes. The auto-profiler analyzes a sample of your data and generates a baseline set of expectations automatically. This gives you a starting point that you can refine based on your domain knowledge.

Does Great Expectations work with Spark?

Yes. Great Expectations supports Pandas, Spark, and SQL backends. You can validate data in Spark DataFrames using the same expectation syntax as Pandas.

How does Great Expectations integrate with dbt?

Great Expectations provides a dbt integration that runs expectations as part of your dbt pipeline. Validation results can gate downstream models, ensuring bad data does not propagate.

Is Great Expectations free?

Yes. The core library is open source under the Apache 2.0 license. Great Expectations also offers GX Cloud, a managed platform with collaboration features, at paid tiers.


Source & Thanks

Created by Great Expectations. Licensed under Apache-2.0.

great_expectations — ⭐ 11,400+

Thanks to the Great Expectations team for bringing software engineering rigor to data quality.
