Manifest — Smart LLM Router That Cuts Costs by Up to 70%
Intelligent LLM routing that scores requests across 23 dimensions in under 2ms and routes each one to the cheapest capable model among 300+ options from 13+ providers. MIT licensed, 4,200+ GitHub stars.
What it is
Manifest is an intelligent LLM routing layer that sits between your application and LLM providers. It analyzes each incoming request across 23 complexity dimensions and routes it to the cheapest capable model from a pool of 300+ models across 13+ providers, including OpenAI, Anthropic, Google, and DeepSeek.
This tool targets teams running production LLM applications who want to reduce API spending without sacrificing output quality. It works as a drop-in proxy that requires minimal code changes.
How it saves time or tokens
Many LLM requests are simple enough for smaller, cheaper models. Manifest identifies these automatically. Simple classification or extraction tasks go to fast models. Complex reasoning tasks go to powerful ones. The routing decision adds under 2ms of latency. Teams report up to 70% cost reduction by avoiding overpowered models for routine requests.
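The selection step can be sketched as a cheapest-capable-model lookup. The model names, capability scores, and prices below are illustrative stand-ins, not Manifest's actual catalog or internals:

```python
# Illustrative sketch of cost-aware routing: pick the cheapest model
# whose capability covers the request's complexity score.
MODELS = [
    # (name, capability score 0-1, $ per 1M input tokens) -- made-up figures
    ("small-fast", 0.3, 0.15),
    ("mid-tier", 0.6, 1.00),
    ("frontier", 0.95, 5.00),
]

def route(complexity: float) -> str:
    """Return the cheapest model able to handle the given complexity."""
    capable = [m for m in MODELS if m[1] >= complexity]
    return min(capable, key=lambda m: m[2])[0]

print(route(0.2))  # simple extraction -> small-fast
print(route(0.8))  # complex reasoning -> frontier
```

The cost savings come from the first case: routine requests never reach the expensive model.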
How to use
- Install Manifest via OpenClaw plugin or run locally with Docker.
- Open the dashboard at http://127.0.0.1:2099 and add your provider API keys.
- Point your application to the Manifest proxy endpoint instead of calling providers directly.
```shell
# Docker install
docker pull mnfst/manifest
docker run -p 2099:2099 mnfst/manifest

# Or via OpenClaw
openclaw plugins install manifest
```
Example
Manifest works as a transparent proxy. Your existing API calls route through it:
```python
import openai

# Point to Manifest proxy instead of OpenAI directly
client = openai.OpenAI(
    base_url='http://localhost:2099/v1',
    api_key='your-manifest-key',
)

# Manifest picks the cheapest capable model automatically
response = client.chat.completions.create(
    model='auto',  # let Manifest decide
    messages=[{'role': 'user', 'content': 'Summarize this paragraph...'}],
)
```
Related on TokRepo
- AI Gateway Providers — Compare LiteLLM and other gateway solutions for multi-provider routing
- AI Tools for API Management — Tools for managing and optimizing API integrations
Common pitfalls
- The 23-dimension scoring is tuned for English text. Non-English requests may get misrouted to weaker models that handle the language poorly.
- Budget controls need careful thresholds. Setting cost caps too aggressively can force all traffic to the cheapest models and degrade quality.
- Automatic fallbacks can mask provider outages. Monitor per-provider success rates separately to catch issues early.
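The last pitfall can be addressed with simple per-provider bookkeeping. This is an illustrative sketch, not a Manifest API; the provider names are examples:

```python
# Track per-provider success rates so automatic fallbacks don't hide
# outages: the chain keeps requests succeeding overall even while one
# provider is failing every call.
from collections import defaultdict

stats = defaultdict(lambda: {"ok": 0, "fail": 0})

def record(provider: str, success: bool) -> None:
    stats[provider]["ok" if success else "fail"] += 1

def success_rate(provider: str) -> float:
    s = stats[provider]
    total = s["ok"] + s["fail"]
    return s["ok"] / total if total else 1.0

record("openai", True)
record("openai", False)
record("anthropic", True)
print(success_rate("openai"))     # 0.5 -- worth alerting on
print(success_rate("anthropic"))  # 1.0
```

In practice you would feed these counters into whatever metrics system you already run, with an alert on any provider dropping below a chosen threshold.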
Frequently Asked Questions
How does Manifest choose a model?
Manifest scores each request across 23 dimensions including complexity, length, domain, and required capabilities. This scoring happens in under 2ms. It then matches the request profile against known model capabilities and selects the cheapest model that meets the quality threshold.
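A rough idea of how several scoring dimensions might fold into one complexity value. Manifest uses 23 dimensions; the three below and their weights are hypothetical stand-ins:

```python
# Hypothetical aggregation of a few scoring dimensions into a single
# complexity value in [0, 1] -- illustrative, not Manifest's scorer.
def complexity(prompt: str) -> float:
    length = min(len(prompt) / 4000, 1.0)  # longer prompts score higher
    reasoning = 1.0 if any(w in prompt.lower() for w in ("prove", "derive", "why")) else 0.2
    code = 1.0 if "```" in prompt or "def " in prompt else 0.0
    # weighted average of the dimension scores
    return 0.3 * length + 0.5 * reasoning + 0.2 * code

print(complexity("Summarize this paragraph."))
print(complexity("Prove that the algorithm terminates and derive its complexity."))
```

The second prompt scores much higher than the first, so it would be matched against stronger (and pricier) models.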
Does Manifest support streaming responses?
Yes. Manifest proxies streaming responses from the selected provider transparently. Your application receives Server-Sent Events exactly as it would from the original provider, with no buffering or modification.
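The SSE wire format described here can be sketched as follows; the payload lines are fabricated examples of provider-style chunks, not captured Manifest output:

```python
# Minimal sketch of consuming Server-Sent Events in the chunk format
# OpenAI-compatible providers emit (payloads are made up).
import json

raw_events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]

def iter_tokens(lines):
    for line in lines:
        payload = line.removeprefix("data: ")
        if payload == "[DONE]":
            return  # stream finished
        yield json.loads(payload)["choices"][0]["delta"].get("content", "")

print("".join(iter_tokens(raw_events)))  # Hello
```

Because Manifest forwards these events unmodified, any SSE client that works against the provider directly should work against the proxy.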
What happens when the selected model fails?
Manifest includes automatic fallback logic. If the selected model returns an error or times out, it escalates to the next cheapest capable model. You can configure fallback chains and retry policies in the dashboard.
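Fallback escalation of this kind can be sketched as trying a chain of models in cost order. Everything here, including the simulated timeout, is illustrative rather than Manifest's implementation:

```python
# Try models cheapest-first, escalating on failure.
def call_model(name: str, prompt: str) -> str:
    # Stand-in for a real API call; "small-fast" simulates a timeout here.
    if name == "small-fast":
        raise TimeoutError(f"{name} timed out")
    return f"{name}: ok"

def complete_with_fallback(prompt, chain=("small-fast", "mid-tier", "frontier")):
    last_err = None
    for name in chain:
        try:
            return call_model(name, prompt)
        except (TimeoutError, ConnectionError) as err:
            last_err = err  # escalate to the next model in the chain
    raise RuntimeError("all models in the fallback chain failed") from last_err

print(complete_with_fallback("hi"))  # mid-tier: ok
```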
Can I override the automatic routing?
Yes. You can specify a model name directly instead of using the auto routing. You can also create routing rules in the dashboard that pin specific request patterns to specific models.
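Pattern-pinned rules like those described could look like this in code. This is a hypothetical sketch; Manifest's actual rules are configured in the dashboard:

```python
# First matching pattern wins; unmatched requests fall back to auto routing.
import re

RULES = [
    (re.compile(r"translate", re.I), "small-fast"),      # pin translations to a cheap model
    (re.compile(r"legal|contract", re.I), "frontier"),   # pin legal text to a strong model
]

def pick(prompt: str, default: str = "auto") -> str:
    for pattern, model in RULES:
        if pattern.search(prompt):
            return model
    return default

print(pick("Translate this to French"))  # small-fast
print(pick("Review this contract"))      # frontier
print(pick("What's 2+2?"))               # auto
```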
Is Manifest open source?
Yes. Manifest is MIT licensed with 4,200+ GitHub stars. You can self-host it with Docker or run it as an OpenClaw plugin. The source code and documentation are available on GitHub.
Citations (3)
- Manifest GitHub — Manifest intelligent LLM router with 4,200+ stars
- Anthropic Docs — Model routing and cost optimization for LLM applications
- OpenAI Pricing — LLM API cost comparison across providers