Quick Use
- Sign up at openrouter.ai → API key (starts sk-or-v1-)
- Add the JSON snippet below to your MCP config
- Restart your MCP host — openrouter_chat and friends are now available
Intro
OpenRouter MCP wraps OpenRouter's universal LLM gateway as a Model Context Protocol server. Inside Claude Code, Cursor, or Codex CLI, you can call openrouter_chat and pick any of 300+ models per task — switch from Claude to GPT-4o to Llama 3.3 mid-session without changing your host. Best for: agents that want to delegate sub-tasks to cheaper or specialized models. Works with: any MCP host. Setup time: 3 minutes.
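Under the hood, these wrappers call OpenRouter's OpenAI-compatible REST API (POST to openrouter.ai/api/v1/chat/completions). A minimal sketch of the request an openrouter_chat call translates to — the helper name build_chat_request is illustrative, not part of any wrapper:

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str, max_tokens: int = 1024):
    """Assemble the HTTP pieces for an OpenRouter chat completion.

    OpenRouter's API is OpenAI-compatible: swapping the `model` string
    routes the same request to any model in the catalog.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # also acts as a per-call spend cap
    }
    return OPENROUTER_URL, headers, json.dumps(body)

url, headers, payload = build_chat_request(
    "sk-or-v1-...", "anthropic/claude-3.5-haiku", "Say hi")
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=payload)
```

Because the payload shape is identical across models, switching from Claude to GPT-4o mid-session is just a different `model` string.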
MCP config
```json
{
  "mcpServers": {
    "openrouter": {
      "command": "npx",
      "args": ["-y", "openrouter-mcp"],
      "env": {
        "OPENROUTER_API_KEY": "sk-or-v1-...",
        "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-haiku"
      }
    }
  }
}
```

Tools exposed
| Tool | Use |
|---|---|
| openrouter_chat | Send a chat completion via OpenRouter |
| openrouter_list_models | Filter the 300+ model catalog by capability / price / context |
| openrouter_get_model | Cost, context window, and capabilities for one model |
| openrouter_set_default | Switch the default model for this session |
Common patterns
> use the openrouter_chat tool with model "deepseek/deepseek-chat" to summarize
this 50-page PDF (cheap model for bulk text)
> use openrouter_list_models to find models with <$0.50/1M input cost and
>100K context window
> use openrouter_chat with model "openai/o1" for the hardest reasoning step,
then back to claude-haiku for the boilerplateWhy use OpenRouter MCP vs direct provider MCPs
A separate MCP per provider (Anthropic MCP, OpenAI MCP, …) means N config blocks, N keys, N tool prefixes. OpenRouter MCP gives you one config / one key / one tool prefix for all of them. Trade-off: you depend on OpenRouter's availability for everything (their uptime is good but it's a single point of failure).
FAQ
Q: Is OpenRouter MCP official?
A: No. OpenRouter itself doesn't ship an official MCP server yet. There are multiple community-built wrappers (search npm for 'openrouter-mcp'), all of which call OpenRouter's REST API.
Q: Can I cap spend per MCP call?
A: Yes — set max_tokens in each call, and use OpenRouter's account-level spend limit. Combined with PostHog observability you can also enforce per-feature budgets.
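To see why max_tokens caps spend: a call's worst-case cost is bounded by prompt tokens at the input rate plus at most max_tokens of completion at the output rate. A sketch with placeholder rates (not real OpenRouter pricing):

```python
def max_call_cost(input_tokens, max_tokens, input_per_m, output_per_m):
    """Upper bound on one call's cost in dollars, given per-million-token
    rates: full prompt at the input rate + at most max_tokens of output."""
    return input_tokens * input_per_m / 1e6 + max_tokens * output_per_m / 1e6

# Illustrative rates, not live pricing:
cost = max_call_cost(input_tokens=8_000, max_tokens=1_000,
                     input_per_m=0.50, output_per_m=1.50)
print(f"${cost:.4f}")  # → $0.0055 worst case for this call
```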
Q: Does this work for non-coding agents?
A: Yes — OpenRouter's models handle any text task; the MCP server just exposes them to MCP hosts. The same patterns work for chat apps, content generation, and data extraction agents.
Source & Thanks
Built by the community; the wrappers are MIT-licensed.
openrouter.ai — OpenRouter platform