Manifest — Smart LLM Cost Optimization
Introduction
Manifest is a smart LLM router with 4,200+ GitHub stars. It sits between your app and LLM providers, scores each request across 23 dimensions in under 2 ms, and routes it to the cheapest model capable of handling it, saving up to 70% on API cost. It supports 300+ models across 13+ providers, with automatic failover and budget controls, and is ideal for teams running production LLM apps that want to optimize API spend automatically.
How It Works
- Request arrives → 23-dimension complexity scoring (<2ms)
- Pick the cheapest sufficiently-capable model
- Route to the chosen provider
- On failure, automatically switch to a fallback model
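The steps above can be sketched in a few lines of Python. Everything here is illustrative: the model table, the prices, and the single length-based heuristic standing in for the 23-dimension scorer are invented for this example and are not Manifest's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: float   # 0-1: how complex a request the model can handle (invented)
    cost_per_1k: float  # USD per 1k tokens (invented pricing)

# Hypothetical model catalog, smallest to largest.
MODELS = [
    Model("small", 0.3, 0.0002),
    Model("medium", 0.6, 0.003),
    Model("large", 1.0, 0.03),
]

def score_request(prompt: str) -> float:
    # Stand-in for the 23-dimension complexity scorer:
    # a single length heuristic mapped to [0, 1].
    return min(len(prompt) / 2000, 1.0)

def route(prompt: str) -> list[Model]:
    """Return capable models ordered cheapest-first.

    The head of the list is the chosen model; the tail serves as
    the fallback chain used on provider failure.
    """
    need = score_request(prompt)
    capable = [m for m in MODELS if m.capability >= need]
    return sorted(capable, key=lambda m: m.cost_per_1k)

candidates = route("Summarize this paragraph.")
print([m.name for m in candidates])  # cheapest capable model first
```

A simple request routes to the cheapest model with the rest as fallbacks, while a request scored at maximum complexity is eligible only for the most capable model.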
Core Features
- 300+ models across 13+ providers
- 23-dimension scoring, <2ms latency
- Up to 70% cost savings
- Automatic failover
- Budget controls
- Transparent decision dashboard
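Of these features, budget controls are the easiest to picture concretely. The sketch below shows one way a spend cap could gate requests; the `BudgetGuard` class and its `charge` method are hypothetical illustrations, not Manifest's real API.

```python
class BudgetGuard:
    """Illustrative budget control: refuse calls once spend would exceed a cap.

    Hypothetical example class -- not Manifest's actual interface.
    """

    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> bool:
        # Reject the request if it would push spend past the cap.
        if self.spent + cost_usd > self.cap:
            return False  # over budget: caller can refuse or downgrade the model
        self.spent += cost_usd
        return True

guard = BudgetGuard(monthly_cap_usd=100.0)
print(guard.charge(60.0))  # accepted, $60 of $100 used
print(guard.charge(50.0))  # rejected, would exceed the cap
```

A router could consult such a guard before dispatching each request and fall back to a cheaper model (or an error) when the cap is hit.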
FAQ
Q: What is Manifest? A: A smart LLM router that scores requests across 23 dimensions and routes to the cheapest capable model — saving up to 70% on API cost.
Q: Is it free? A: The core router is open source (MIT) and self-hostable.