Scripts · Apr 9, 2026 · 3 min read

Manifest — Smart LLM Router That Cuts Costs 70%

Intelligent LLM routing that scores requests across 23 dimensions in under 2ms. Routes to the cheapest capable model among 300+ options from 13+ providers. MIT, 4,200+ stars.

TL;DR
Manifest scores each LLM request across 23 dimensions in under 2ms and routes it to the cheapest capable model.
§01

What it is

Manifest is an intelligent LLM routing layer that sits between your application and LLM providers. It analyzes each incoming request across 23 complexity dimensions and, from a pool of 300+ models across 13+ providers including OpenAI, Anthropic, Google, and DeepSeek, routes it to the cheapest model that can handle it.

This tool targets teams running production LLM applications who want to reduce API spending without sacrificing output quality. It works as a drop-in proxy that requires minimal code changes.

§02

How it saves time or tokens

Many LLM requests are simple enough for smaller, cheaper models. Manifest identifies these automatically. Simple classification or extraction tasks go to fast models. Complex reasoning tasks go to powerful ones. The routing decision adds under 2ms of latency. Teams report up to 70% cost reduction by avoiding overpowered models for routine requests.
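The core idea can be sketched as a toy scorer. The dimension heuristics, weights, and model table below are illustrative stand-ins, not Manifest's actual 23-dimension scorer or its catalog:

```python
# Toy sketch of cost-aware routing (illustrative only; not Manifest's
# real scoring or model data).
MODELS = [
    # (name, cost per 1M tokens in USD, max complexity it handles well)
    ("small-fast", 0.15, 0.4),
    ("mid-tier", 1.00, 0.7),
    ("frontier", 10.00, 1.0),
]

def score(request: str) -> float:
    """Crude complexity proxy: longer prompts and reasoning cues score higher."""
    s = min(len(request) / 2000, 0.5)
    if any(k in request.lower() for k in ("prove", "derive", "step by step")):
        s += 0.5
    return min(s, 1.0)

def route(request: str) -> str:
    """Return the cheapest model whose capability covers the request."""
    c = score(request)
    capable = [m for m in MODELS if m[2] >= c]
    return min(capable, key=lambda m: m[1])[0]

route("Extract the date: meeting on June 3")           # simple -> cheap model
route("Prove that sqrt(2) is irrational, step by step.")  # complex -> stronger model
```

The real router presumably weighs many more signals, but the selection principle is the same: filter to capable models, then minimize cost.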

§03

How to use

  1. Install Manifest via the OpenClaw plugin or run it locally with Docker.
  2. Open the dashboard at http://127.0.0.1:2099 and add your provider API keys.
  3. Point your application to the Manifest proxy endpoint instead of calling providers directly.
# Docker install
docker pull mnfst/manifest
docker run -p 2099:2099 mnfst/manifest

# Or via OpenClaw
openclaw plugins install manifest
§04

Example

Manifest works as a transparent proxy. Your existing API calls route through it:

import openai

# Point to Manifest proxy instead of OpenAI directly
client = openai.OpenAI(
    base_url='http://localhost:2099/v1',
    api_key='your-manifest-key'
)

# Manifest picks the cheapest capable model automatically
response = client.chat.completions.create(
    model='auto',  # let Manifest decide
    messages=[{'role': 'user', 'content': 'Summarize this paragraph...'}]
)
§06

Common pitfalls

  • The 23-dimension scoring is tuned for English text. Non-English requests may get misrouted to weaker models that handle the language poorly.
  • Budget controls need careful thresholds. Setting cost caps too aggressively can force all traffic to the cheapest models and degrade quality.
  • Automatic fallbacks can mask provider outages. Monitor per-provider success rates separately to catch issues early.
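On the last point: with fallbacks in place, the aggregate success rate stays high even while one provider is failing, so compute the rate per provider. A minimal sketch over a hypothetical request log (the log format here is an assumption, not Manifest's export schema):

```python
from collections import defaultdict

def provider_success_rates(log):
    """log: iterable of (provider, ok) tuples from proxy access records."""
    totals = defaultdict(lambda: [0, 0])  # provider -> [successes, requests]
    for provider, ok in log:
        totals[provider][1] += 1
        if ok:
            totals[provider][0] += 1
    return {p: s / n for p, (s, n) in totals.items()}

# Hypothetical log: one healthy provider, one degraded one.
log = [("openai", True), ("openai", True),
       ("acme", False), ("acme", False), ("acme", True)]
rates = provider_success_rates(log)
# Alerting on any rate below, say, 0.9 would flag "acme" here while the
# aggregate (3/5 successes routed transparently elsewhere) hides it.
```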

Frequently Asked Questions

How does Manifest decide which model to use?

Manifest scores each request across 23 dimensions including complexity, length, domain, and required capabilities. This scoring happens in under 2ms. It then matches the request profile against known model capabilities and selects the cheapest model that meets the quality threshold.

Does Manifest support streaming responses?

Yes. Manifest proxies streaming responses from the selected provider transparently. Your application receives Server-Sent Events exactly as it would from the original provider, with no buffering or modification.

What happens if the chosen model fails?

Manifest includes automatic fallback logic. If the selected model returns an error or times out, it escalates to the next cheapest capable model. You can configure fallback chains and retry policies in the dashboard.
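The escalation behavior described above can be sketched as a loop over a cost-ordered chain. `call_model`, the chain, and the retry policy are hypothetical stand-ins for what Manifest configures in its dashboard:

```python
def call_with_fallback(chain, call_model, prompt, retries_per_model=1):
    """Try each model in cost order; escalate to the next on error."""
    last_err = None
    for model in chain:
        for _ in range(retries_per_model):
            try:
                return call_model(model, prompt)
            except Exception as e:
                last_err = e  # remember failure, move on
    raise RuntimeError("all models in chain failed") from last_err

# Demo with a fake backend where the cheapest model times out:
def fake_call(model, prompt):
    if model == "small-fast":
        raise TimeoutError("upstream timeout")
    return f"{model}: ok"

print(call_with_fallback(["small-fast", "mid-tier"], fake_call, "hi"))
# → mid-tier: ok
```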

Can I force a specific model for certain requests?

Yes. You can specify a model name directly instead of using the auto routing. You can also create routing rules in the dashboard that pin specific request patterns to specific models.
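Pattern-based pinning amounts to a rule list checked before auto routing. The rule syntax and model names below are hypothetical; Manifest itself configures rules through its dashboard:

```python
import re

# Hypothetical pin rules: first match wins, otherwise fall through to auto.
RULES = [
    (re.compile(r"translate", re.I), "mid-tier"),
    (re.compile(r"\bsql\b|schema", re.I), "frontier"),
]

def pick_model(prompt: str) -> str:
    for pattern, model in RULES:
        if pattern.search(prompt):
            return model
    return "auto"  # no pin matched; let cost-based routing decide

pick_model("Translate this to French")  # pinned
pick_model("What is 2+2?")              # falls through to auto
```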

Is Manifest open source?

Yes. Manifest is MIT licensed with 4,200+ GitHub stars. You can self-host it with Docker or run it as an OpenClaw plugin. The source code and documentation are available on GitHub.


Source & Thanks

Created by mnfst. Licensed under MIT.

Manifest — ⭐ 4,200+

Thanks to the Manifest team for making LLM cost optimization accessible.
