Hasura — Instant GraphQL & REST APIs on Your Database
Hasura generates instant, real-time GraphQL and REST APIs on PostgreSQL, MySQL, SQL Server, and MongoDB with fine-grained access control, event triggers, and remote schemas.
What it is
Hasura generates instant, real-time GraphQL and REST APIs on top of PostgreSQL, MySQL, SQL Server, and MongoDB. Point it at your database, and it creates a complete API with queries, mutations, subscriptions, and fine-grained access control. No backend code required for standard CRUD operations. Event triggers, remote schemas, and actions extend functionality when needed.
This tool is for backend developers who want to skip writing boilerplate API code. Frontend developers can use it to get a fully functional API without waiting for backend implementation.
How it saves time or tokens
Hasura eliminates the need to write API endpoints, resolvers, and data-fetching code for standard database operations. What takes days with a custom backend takes minutes with Hasura. Real-time subscriptions are built in, avoiding WebSocket infrastructure setup. The permission system handles authorization at the API level.
How to use
- Deploy Hasura via Docker.
- Connect your database.
- Track tables and relationships.
- Access the auto-generated API.
# Start Hasura with Docker
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL='postgresql://user:pass@host:5432/db' \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e HASURA_GRAPHQL_ADMIN_SECRET='mysecret' \
hasura/graphql-engine:latest
# Access console at http://localhost:8080/console
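Once the console is up, any client can query the GraphQL endpoint over plain HTTP. Below is a minimal Python sketch using only the standard library, assuming the local URL and admin secret from the command above; in production you would authenticate with a JWT or auth webhook rather than the admin secret.

```python
import json
import urllib.request

# Values match the docker run command above; adjust for your deployment.
HASURA_URL = "http://localhost:8080/v1/graphql"
ADMIN_SECRET = "mysecret"

def build_payload(query: str, variables: dict = None) -> dict:
    """Wrap a GraphQL document in the JSON body Hasura expects."""
    return {"query": query, "variables": variables or {}}

def run_query(query: str, variables: dict = None) -> dict:
    """POST a GraphQL query to Hasura and return the decoded JSON response."""
    req = urllib.request.Request(
        HASURA_URL,
        data=json.dumps(build_payload(query, variables)).encode(),
        headers={
            "Content-Type": "application/json",
            # Admin secret bypasses all permissions; fine for local dev only.
            "x-hasura-admin-secret": ADMIN_SECRET,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Requires a running Hasura instance:
# print(run_query("query { users(limit: 5) { id name } }"))
```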
Example
After connecting a database with a users table:
# Auto-generated query
query {
users(where: { active: { _eq: true } }, order_by: { created_at: desc }, limit: 10) {
id
name
email
created_at
orders {
id
total
}
}
}
# Real-time subscription
subscription {
users(where: { role: { _eq: "admin" } }) {
id
name
last_seen
}
}
# Mutation
mutation {
insert_users_one(object: { name: "Alice", email: "alice@example.com" }) {
id
}
}
All generated from your database schema. No code written.
Related on TokRepo
- Database tools — Database management and APIs
- AI coding tools — Backend development tools
Common pitfalls
- Hasura exposes your database schema directly as an API, so design the schema with its public surface in mind.
- Complex business logic does not belong in Hasura permissions. Use actions or event triggers to call custom backend functions.
- The permission system is powerful but complex. Test permissions thoroughly to avoid data exposure.
- Real-time subscriptions create persistent connections. Plan for connection limits on your database.
- Hasura adds a dependency between your database schema and API consumers. Schema migrations need coordination.
- Review the official documentation before deploying to production to ensure compatibility with your specific environment and requirements.
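As noted above, custom business logic belongs in a separate service that Hasura calls via Actions. When an Action runs, Hasura POSTs a JSON body containing the action name, the input arguments, and the caller's session variables to your webhook. The sketch below shows a minimal handler for a hypothetical `place_order` action; the action name and input fields are illustrative, not part of Hasura itself.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_action(body: dict) -> dict:
    """Process a Hasura Action webhook payload.

    Hasura POSTs JSON shaped like:
    {"action": {"name": ...}, "input": {...}, "session_variables": {...}}
    The returned dict becomes the action's GraphQL response.
    """
    if body["action"]["name"] == "place_order":  # hypothetical action
        user_id = body["session_variables"].get("x-hasura-user-id")
        items = body["input"]["items"]
        # Real validation, payment, etc. would go here.
        return {"order_id": f"order-{user_id}-{len(items)}", "status": "created"}
    return {"error": "unknown action"}

class ActionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        response = json.dumps(handle_action(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)

# Point the Action's handler URL at this server:
# HTTPServer(("", 3000), ActionHandler).serve_forever()
```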
Frequently Asked Questions
Do I need to write backend code?
No, not for standard CRUD operations. Hasura generates queries, mutations, and subscriptions automatically. For custom business logic, use Hasura Actions (call external APIs) or Event Triggers (react to database changes).
How does Hasura handle authorization?
Hasura uses a role-based permission system. Define row-level and column-level permissions per role. Permissions are enforced at the database query level, ensuring they cannot be bypassed.
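As a concrete sketch, here is roughly what a row- and column-level select permission looks like when created through the metadata API (POST to `/v1/metadata`); the table, columns, and role names are illustrative:

```python
# Sketch of a select permission for a 'user' role, as sent to Hasura's
# metadata API. Table and column names are examples, not fixed names.
select_permission = {
    "type": "pg_create_select_permission",
    "args": {
        "source": "default",  # Hasura's default database source name
        "table": {"schema": "public", "name": "users"},
        "role": "user",
        "permission": {
            # Column-level: only these columns are readable by this role.
            "columns": ["id", "name", "email"],
            # Row-level: each user sees only their own row. The session
            # variable comes from the JWT claims or auth webhook response.
            "filter": {"id": {"_eq": "X-Hasura-User-Id"}},
        },
    },
}
```

The filter is compiled into the SQL `WHERE` clause of every query the role runs, which is why it cannot be bypassed from the client side.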
Can Hasura generate REST APIs?
Yes. Hasura can generate REST endpoints from your GraphQL queries. Define a REST endpoint that maps to a specific GraphQL query or mutation for teams that prefer REST.
Can I use Hasura with an existing database?
Yes. Connect Hasura to your existing database and track the tables you want to expose. Hasura reads your schema and generates the API without modifying your data.
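Tracking can be done from the console or scripted against the metadata API. A hedged sketch of the payloads, assuming a `public.users` table and an `orders` table whose `user_id` column references `users.id`:

```python
# Sketch of metadata API payloads (POST /v1/metadata) for tracking an
# existing table and a relationship. Names are illustrative.
track_users = {
    "type": "pg_track_table",
    "args": {
        "source": "default",
        "table": {"schema": "public", "name": "users"},
    },
}

# Expose users.orders as an array relationship, assuming the foreign key
# orders.user_id -> users.id exists in the database.
track_orders_rel = {
    "type": "pg_create_array_relationship",
    "args": {
        "source": "default",
        "table": {"schema": "public", "name": "users"},
        "name": "orders",
        "using": {
            "foreign_key_constraint_on": {
                "table": {"schema": "public", "name": "orders"},
                "column": "user_id",
            }
        },
    },
}
```

Because tracking only records metadata, neither payload alters the underlying tables or data.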
Is Hasura production-ready?
Yes. Hasura is used in production by many companies. The cloud offering provides high availability, monitoring, and support. Self-hosted deployments need proper scaling and security configuration.
Citations (3)
- Hasura GitHub — Hasura generates instant GraphQL APIs on databases
- Hasura Docs — Hasura documentation and tutorials
- GraphQL Specification — GraphQL specification and best practices