# Manifest — Smart LLM Router That Cuts Costs 70%

> Intelligent LLM routing that scores requests across 23 dimensions in under 2ms. Routes to the cheapest capable model among 300+ options from 13+ providers. MIT, 4,200+ stars.

## Quick Use

1. Install via OpenClaw:

   ```bash
   openclaw plugins install manifest
   ```

2. Or run locally with Docker:

   ```bash
   docker pull mnfst/manifest
   docker run -p 2099:2099 mnfst/manifest
   ```

3. Open the dashboard at `http://127.0.0.1:2099` and configure your API keys.

---

## Intro

Manifest is a smart LLM router with 4,200+ GitHub stars that sits between your application and LLM providers. It scores each request across 23 dimensions in under 2ms and routes it to the cheapest model that can handle it — cutting costs by up to 70% without quality loss. It supports 300+ models from 13+ providers (OpenAI, Anthropic, Google, DeepSeek, etc.) with automatic fallbacks and budget controls. Best for teams running production LLM apps who want to optimize API spending automatically.

See also: [AI developer tools on TokRepo](https://tokrepo.com/en/@AI%20Open%20Source).

---

## Manifest — Intelligent LLM Cost Optimization

### The Problem

Different LLM tasks have different complexity levels. Sending every request to GPT-4o or Claude Opus wastes money — many requests could be handled just as well by cheaper models.

### The Solution

Manifest analyzes each request's complexity and routes it to the cheapest model that meets the quality threshold. Simple tasks go to fast, cheap models; complex tasks go to powerful ones.

### How It Works

1. **Request arrives** from your application
2. **23-dimension scoring** analyzes complexity (under 2ms latency)
3. **Model selection** picks the cheapest capable model
4. **Routing** sends to the selected provider
5.
   **Fallback** automatically retries with a different model if the first fails

### Key Features

- **300+ models** from 13+ providers
- **23-dimension scoring** in under 2ms
- **Up to 70% cost reduction** without quality loss
- **Automatic fallbacks** when models fail
- **Budget controls** — set spending limits per model, team, or project
- **Transparent decisions** — the dashboard shows why each request was routed where it was
- **Direct provider access** — your API keys, no middleman markup

### Supported Providers

OpenAI, Anthropic (Claude), Google (Gemini), DeepSeek, Mistral, Groq, Together AI, Fireworks, Cerebras, and more.

### Deployment Options

| Option | Command |
|--------|---------|
| **Cloud** | Visit app.manifest.build |
| **Local** | `openclaw plugins install manifest` |
| **Docker** | `docker run -p 2099:2099 mnfst/manifest` |

### Cost Savings Example

| Scenario | Without Manifest | With Manifest | Savings |
|----------|-----------------|---------------|---------|
| Customer support bot | $500/mo (all GPT-4o) | $150/mo (mixed routing) | 70% |
| Code review agent | $800/mo (all Claude Opus) | $320/mo (mixed routing) | 60% |
| Data extraction pipeline | $300/mo (all GPT-4) | $90/mo (mixed routing) | 70% |

### FAQ

**Q: What is Manifest?**
A: A smart LLM router that scores requests across 23 dimensions and routes them to the cheapest capable model, cutting LLM API costs by up to 70% without quality degradation.

**Q: Is Manifest free?**
A: The core router is open-source under MIT. Self-host for free or use the cloud version.

**Q: Does Manifest add latency?**
A: The routing decision takes under 2ms. The total added latency is negligible compared to LLM response times.

---

## Source & Thanks

> Created by [mnfst](https://github.com/mnfst). Licensed under MIT.
>
> [Manifest](https://github.com/mnfst/manifest) — ⭐ 4,200+

Thanks to the Manifest team for making LLM cost optimization accessible.
---

Source: https://tokrepo.com/en/workflows/15266cba-33d7-11f1-9bc6-00163e2b0d79
Author: AI Open Source
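The scoring-and-selection loop described under "How It Works" above can be sketched in a few lines of Python. Everything here is illustrative: the model catalog, prices, and the single-score heuristic are assumptions for the sketch, not Manifest's actual 23-dimension scorer or API.

```python
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative prices only
    capability: float          # 0.0-1.0, how complex a task it can handle

# Illustrative catalog; a real router would load this from provider configs.
CATALOG = [
    Model("small-fast", 0.0002, 0.3),
    Model("mid-tier", 0.0015, 0.6),
    Model("frontier", 0.0100, 0.95),
]


def score_request(prompt: str) -> float:
    """Toy stand-in for the 23-dimension scorer: collapse a few signals
    into one complexity score in [0, 1]."""
    signals = [
        len(prompt) > 500,                                        # long context
        any(k in prompt.lower() for k in ("prove", "refactor")),  # hard-task keywords
        prompt.count("\n") > 10,                                  # structured input
    ]
    return sum(signals) / len(signals)


def select_model(prompt: str) -> Model:
    """Pick the cheapest model whose capability covers the request."""
    complexity = score_request(prompt)
    capable = [m for m in CATALOG if m.capability >= complexity]
    if not capable:
        # Nothing clears the bar: fall back to the most capable model.
        return max(CATALOG, key=lambda m: m.capability)
    return min(capable, key=lambda m: m.cost_per_1k_tokens)


print(select_model("What is 2 + 2?").name)  # → small-fast (cheap model suffices)
print(select_model("Prove this theorem:\n" + "x\n" * 20).name)  # → frontier
```

A fallback retry (step 5 above) would wrap the provider call in a loop that removes the failed model from `CATALOG` and re-runs `select_model`.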