Portkey AI Gateway — Route to 250+ LLMs
Portkey AI Gateway routes requests to 250+ LLMs with sub-1ms latency overhead, plus 40+ guardrails, automatic retries, provider fallbacks, and caching. 11.1K+ GitHub stars. Apache 2.0 licensed.
What it is
Portkey AI Gateway is an open-source API gateway that sits between your application and LLM providers. It routes requests to over 250 LLMs with sub-1ms latency overhead. Built-in features include 40+ guardrails, automatic retries, provider fallbacks, semantic caching, and request logging. It has over 11.1K GitHub stars and is Apache 2.0 licensed.
Portkey targets teams running production LLM applications who need reliability, cost control, and provider flexibility without vendor lock-in.
How it saves time or tokens
Portkey's semantic caching returns cached responses for similar queries, saving both tokens and latency. Automatic fallbacks switch to backup providers when the primary is down, avoiding downtime. The unified API means you switch providers by changing a config, not your code.
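The "change a config, not your code" claim can be sketched in a few lines. This is an illustrative sketch, not Portkey's SDK: it assumes the gateway's OpenAI-compatible local endpoint and the `x-portkey-provider` header used in the curl example in this article, and shows that only one header changes when you switch providers.

```python
import json

# One OpenAI-style request body, reused unchanged across providers.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

def gateway_headers(provider: str, api_key: str) -> dict:
    """Headers for a local gateway request; only the provider header varies."""
    return {
        "Content-Type": "application/json",
        "x-portkey-provider": provider,
        "Authorization": f"Bearer {api_key}",
    }

openai_headers = gateway_headers("openai", "sk-...")
anthropic_headers = gateway_headers("anthropic", "sk-ant-...")

# The serialized body is identical for both providers.
body = json.dumps(payload)
```

The application code that builds `payload` never changes; swapping providers is a configuration edit.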
How to use
- Run locally:
  npx @portkey-ai/gateway
- Or install the Python SDK:
  pip install portkey-ai
- Route requests through the gateway:
  from portkey_ai import Portkey

  client = Portkey(api_key='your-key')
  response = client.chat.completions.create(
      model='gpt-4o',
      messages=[{'role': 'user', 'content': 'Hello'}]
  )
Example
# Run the gateway locally
npx @portkey-ai/gateway
# API at http://localhost:8787/v1
# Use with curl
curl http://localhost:8787/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'x-portkey-provider: openai' \
  -H 'Authorization: Bearer sk-...' \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
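The same request can be built in Python with only the standard library. This mirrors the curl example above against an assumed local gateway at port 8787; the send itself is left commented out since it requires the gateway to be running.

```python
import json
import urllib.request

# Headers and body mirroring the curl example.
headers = {
    "Content-Type": "application/json",
    "x-portkey-provider": "openai",
    "Authorization": "Bearer sk-...",
}
body = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8787/v1/chat/completions",
    data=body,
    headers=headers,
    method="POST",
)

# With the gateway running locally, send it with:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```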
Related on TokRepo
- AI Gateway Providers — Portkey deep-dive and comparisons
- AI Tools for API — API management and LLM routing tools
Key considerations
When evaluating Portkey AI Gateway for your workflow, consider the following factors:
- Assess whether your team has the technical prerequisites to adopt the tool effectively.
- Weigh the maintenance burden against the productivity gains.
- Check community activity and documentation quality to ensure long-term viability.
- Integration with your existing toolchain matters more than feature count alone.
- Start with a small pilot project before rolling out across the organization.
- Monitor resource usage during initial adoption to identify bottlenecks early.
- Document configuration decisions so team members can onboard independently.
Common pitfalls
- Self-hosted gateway requires proper network configuration to reach all LLM provider endpoints.
- Aggressive caching may return stale responses; configure the cache TTL to match how quickly your data changes.
- Guardrails add latency; profile your specific guardrail combination to ensure acceptable response times.
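The cache-TTL pitfall above comes down to matching `max_age` to how fast your data goes stale. The field names below follow Portkey's documented cache config block (`mode`, `max_age` in seconds), but treat the exact schema as an assumption and verify it against the docs for your gateway version.

```python
# Short TTL for time-sensitive prompts: stale answers are worse than cache misses.
volatile_config = {
    "cache": {
        "mode": "semantic",   # match semantically similar prompts
        "max_age": 60,        # seconds; keep short for fast-changing data
    }
}

# Long TTL for static reference prompts, where stale answers are harmless.
stable_config = {
    "cache": {
        "mode": "simple",     # exact-match caching
        "max_age": 86400,     # one day
    }
}
```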
Frequently Asked Questions
Which LLM providers does Portkey support?
Portkey routes to over 250 LLMs from providers including OpenAI, Anthropic, Google, Mistral, Cohere, and many more. The provider list is updated regularly.
What is semantic caching?
Semantic caching returns cached responses for queries that are semantically similar (not just identical). This saves tokens and latency for repeated or similar requests without requiring exact prompt matching.
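A toy cache makes the "similar, not identical" behavior concrete. This is not Portkey's implementation (a real semantic cache compares embedding vectors); word-overlap (Jaccard) similarity stands in here purely to illustrate the lookup logic.

```python
def _words(s: str) -> set:
    """Normalize a prompt into a set of lowercase words, punctuation stripped."""
    return {w.strip(".,?!") for w in s.lower().split()}

def similarity(a: str, b: str) -> float:
    """Jaccard word overlap; a real gateway would use embedding similarity."""
    wa, wb = _words(a), _words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (prompt, response) pairs

    def get(self, prompt: str):
        for cached_prompt, response in self.entries:
            if similarity(prompt, cached_prompt) >= self.threshold:
                return response  # close enough: reuse, no tokens spent
        return None

    def put(self, prompt: str, response: str):
        self.entries.append((prompt, response))

cache = SemanticCache(threshold=0.8)
cache.put("what is the capital of France", "Paris")
hit = cache.get("what is the capital of France?")   # near-identical: returns "Paris"
miss = cache.get("what is the capital of Spain")    # different meaning: returns None
```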
How do fallbacks work?
Configure a primary and backup provider. If the primary returns an error or times out, Portkey automatically retries with the backup provider. You define the fallback chain in your configuration.
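A fallback chain of the kind described above might look like this. The shape follows Portkey's config schema (`strategy` plus an ordered `targets` list), but treat the exact field names as an assumption and confirm them against the documentation for your gateway version.

```python
# Try OpenAI first; on error or timeout, retry the same request with Anthropic.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "sk-..."},      # primary
        {"provider": "anthropic", "api_key": "sk-ant-..."},  # backup
    ],
}
```

Target order defines the chain: the gateway walks the list top to bottom until a provider succeeds.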
Is Portkey AI Gateway free?
The gateway is open-source under Apache 2.0 and free to self-host. Portkey also offers a managed cloud service with additional features like analytics dashboards and team management.
How much latency does the gateway add?
The gateway adds sub-1ms latency overhead for routing. Guardrails and caching may add additional time depending on configuration. For most applications, the overhead is negligible compared to LLM inference time.
Citations (3)
- Portkey Gateway GitHub — Routes to 250+ LLMs with sub-1ms latency
- Portkey Official Site — 40+ guardrails, retries, fallbacks, and caching
- Portkey GitHub — 11.1K+ stars, Apache 2.0 licensed
Source & Thanks
Portkey-AI/gateway — 11,100+ GitHub stars