Million.js — Make React 70% Faster with a Compiler-Driven Virtual DOM
Million.js compiles React components into an optimized block-based virtual DOM. By analyzing your JSX at build time, it avoids unnecessary diffs and delivers up to 70% faster renders on list-heavy UIs — with zero changes to most components.
What it is
Million.js is a compiler that optimizes React components by replacing standard virtual DOM diffing with a block-based approach. It analyzes your JSX at build time, separates static markup from dynamic expressions, and generates code that skips diffing work the standard reconciler would repeat on every render.
React developers working on data-intensive dashboards, large tables, and list-heavy interfaces benefit most from Million.js. It plugs into existing React projects as a compiler plugin without requiring you to rewrite components.
How it saves time or tokens
Million.js is a build-time optimization: it adds minimal runtime overhead and requires no code changes for most components. You add it as a plugin to your bundler configuration, and it automatically optimizes eligible components. This saves engineering time that would otherwise go into manual performance tuning, memoization, and virtualization.
How to use
- Install Million.js and the compiler plugin for your bundler (Vite, Next.js, or webpack).
- Add the plugin to your build configuration.
- Optionally annotate performance-critical components with the block() higher-order function for maximum optimization (see the sketch after this list).
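For manual optimization, Million exposes a block() higher-order function from million/react. A minimal sketch, assuming a simple stateless component (the Row name and its props are illustrative, not from the original):

```jsx
// Sketch: block() from million/react wraps a deterministic,
// stateless component so the compiler can turn it into a block.
import { block } from 'million/react';

const Row = block(function Row({ name, role }) {
  // The <li> structure is static; only {name} and {role}
  // are tracked as dynamic values.
  return (
    <li>
      {name} - {role}
    </li>
  );
});

export default Row;
```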
Example
Install the package:

```bash
npm install million
```

Register the compiler plugin in your Vite config, before the React plugin:

```js
// vite.config.js
import million from 'million/compiler';
import react from '@vitejs/plugin-react';

export default {
  plugins: [million.vite({ auto: true }), react()],
};
```

With auto mode enabled, an ordinary list component is optimized at build time:

```jsx
// Automatic optimization - no code changes needed
function UserList({ users }) {
  return (
    <ul>
      {users.map(user => (
        <li key={user.id}>
          {user.name} - {user.role}
        </li>
      ))}
    </ul>
  );
}
// Million.js automatically optimizes this at build time
```
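The setup steps above also mention webpack. A minimal sketch of the equivalent webpack configuration, assuming the million.webpack plugin factory from the compiler package (verify the option shape against the docs for your version):

```js
// webpack.config.js - sketch of the webpack integration
const million = require('million/compiler');

module.exports = {
  plugins: [
    // auto: true enables automatic component analysis,
    // matching the Vite example above.
    million.webpack({ auto: true }),
  ],
};
```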
Common pitfalls
- Expecting Million.js to optimize components with heavy side effects or non-deterministic rendering patterns.
- Using the block() HOC on components that already use React.memo without understanding how the two interact (see the sketch after this list).
- Assuming all components benefit equally. Million.js provides the biggest gains on list rendering and data-grid components.
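To illustrate the second pitfall, a hedged sketch of the safer pattern: apply block() directly to the plain function component instead of stacking it on a memoized one (component names are illustrative):

```jsx
import { memo } from 'react';
import { block } from 'million/react';

function UserRow({ name }) {
  return <li>{name}</li>;
}

// Risky: stacking two optimizers whose interaction you haven't verified.
// const OptimizedRow = block(memo(UserRow));

// Simpler: let block() handle the component directly; compiled blocks
// already avoid re-diffing the static parts of the tree.
const OptimizedRow = block(UserRow);

export default OptimizedRow;
```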
Frequently Asked Questions
Does Million.js work with Next.js?
Yes. Million.js provides a Next.js plugin that integrates with the build pipeline. You add it to next.config.js, and it optimizes eligible components during the build process.
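A minimal sketch of that setup, assuming the million.next wrapper described in the compiler docs (the exact config shape may differ by version):

```js
// next.config.mjs - sketch of the Next.js integration
import million from 'million/compiler';

/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
};

// million.next wraps the existing Next config and enables auto mode.
export default million.next(nextConfig, { auto: true });
```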
Do I need to change my code to use Million.js?
No. With auto mode enabled, Million.js analyzes and optimizes components at build time without code changes. You can optionally use the block() function to manually mark components for maximum optimization.
How does Million.js make rendering faster?
Instead of diffing the entire component tree on every render, Million.js identifies the static parts of your JSX at compile time and only tracks the dynamic expressions. On re-render, it updates only the changed values directly in the DOM.
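As a conceptual illustration (the annotations are ours, not compiler output), consider which parts of a JSX tree the compiler can treat as static:

```jsx
// Illustration of static vs. dynamic JSX parts in a component.
function Price({ amount }) {
  return (
    <div className="price">    {/* static: element and attribute */}
      <span>Total:</span>       {/* static: never re-diffed */}
      <strong>{amount}</strong> {/* dynamic: tracked as a "hole" */}
    </div>
  );
}
// On re-render, only the text node holding {amount} is updated.
```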
Is Million.js production-ready?
Yes. Million.js is used in production React applications. The compiler generates standard JavaScript that works with React's reconciler, so it is compatible with React DevTools and existing testing infrastructure.
What performance gains should I expect?
Performance gains depend on your component structure. List-heavy components and data grids see the largest improvements, with benchmarks showing up to 70% faster renders. Simple components with few dynamic parts see smaller gains.
Citations (3)
- Million.js GitHub — Up to 70% faster renders with block-based virtual DOM
- Million.js Documentation — Compiler-driven React optimization
- React Documentation — React virtual DOM diffing overhead