Configs · Apr 1, 2026 · 1 min read

Portkey AI Gateway — Route to 250+ LLMs

Portkey AI Gateway routes to 250+ LLMs with sub-1ms latency, 40+ guardrails, retries, fallbacks, and caching. 11.1K+ stars. Apache 2.0.

TL;DR
Portkey routes requests to 250+ LLMs with sub-1ms overhead, built-in guardrails, retries, fallbacks, and response caching.
§01

What it is

Portkey AI Gateway is an open-source API gateway that sits between your application and LLM providers. It routes requests to over 250 LLMs with sub-1ms latency overhead. Built-in features include 40+ guardrails, automatic retries, provider fallbacks, semantic caching, and request logging. It has over 11.1K GitHub stars and is Apache 2.0 licensed.

Portkey targets teams running production LLM applications who need reliability, cost control, and provider flexibility without vendor lock-in.

§02

How it saves time or tokens

Portkey's semantic caching returns cached responses for similar queries, saving both tokens and latency. Automatic fallbacks switch to backup providers when the primary is down, avoiding downtime. The unified API means you switch providers by changing a config, not your code.
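Routing behavior is driven by a config object. As a minimal sketch of enabling the semantic cache (field names follow Portkey's documented config shape, but verify them against the current docs; the TTL value is illustrative):

```python
# Sketch of a gateway config enabling semantic caching.
config = {
    "cache": {
        "mode": "semantic",   # match on meaning, not exact prompt text
        "max_age": 3600,      # serve cached answers for up to an hour
    }
}

# The dict can then be attached to requests, e.g. passed to the SDK
# client as Portkey(api_key='your-key', config=config) -- check the
# SDK docs for the exact parameter name in your version.
```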

§03

How to use

  1. Run locally:
npx @portkey-ai/gateway
  2. Or use the Python SDK:
pip install portkey-ai
  3. Route requests through the gateway:
from portkey_ai import Portkey

# 'your-key' is a placeholder for your Portkey API key
client = Portkey(api_key='your-key')
response = client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'Hello'}]
)
print(response.choices[0].message.content)
§04

Example

# Run the gateway locally
npx @portkey-ai/gateway
# API at http://localhost:8787/v1

# Use with curl
curl http://localhost:8787/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'x-portkey-provider: openai' \
  -H 'Authorization: Bearer sk-...' \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
§05


Key considerations

When evaluating Portkey AI Gateway for your workflow, consider the following factors:

  • Whether your team has the technical prerequisites to adopt the tool effectively.
  • The maintenance burden of self-hosting, weighed against the productivity gains.
  • Community activity and documentation quality, as indicators of long-term viability.
  • Integration with your existing toolchain, which matters more than feature count alone.

Start with a small pilot project before rolling out across the organization, monitor resource usage during initial adoption to catch bottlenecks early, and document your configuration decisions so team members can onboard independently.

§06

Common pitfalls

  • Self-hosted gateway requires proper network configuration to reach all LLM provider endpoints.
  • Aggressive caching may return stale responses; configure the cache TTL to match how quickly your data changes.
  • Guardrails add latency; profile your specific guardrail combination to ensure acceptable response times.

Frequently Asked Questions

How many LLM providers does Portkey support?

Portkey routes to over 250 LLMs from providers including OpenAI, Anthropic, Google, Mistral, Cohere, and many more. The provider list is updated regularly.

What is semantic caching?

Semantic caching returns cached responses for queries that are semantically similar (not just identical). This saves tokens and latency for repeated or similar requests without requiring exact prompt matching.

How do fallbacks work?

Configure a primary and backup provider. If the primary returns an error or times out, Portkey automatically retries with the backup provider. You define the fallback chain in your configuration.
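As a sketch, a fallback chain in the config might look like the following. Field names mirror Portkey's config documentation but should be verified against the current docs; the `on_status_codes` list and API keys are illustrative placeholders:

```python
# Hypothetical fallback chain: try OpenAI first, fall back to Anthropic
# when the primary returns one of the listed error codes.
config = {
    "strategy": {
        "mode": "fallback",
        "on_status_codes": [429, 500, 502, 503],  # when to fail over
    },
    "targets": [
        {"provider": "openai", "api_key": "sk-openai-..."},     # primary
        {"provider": "anthropic", "api_key": "sk-ant-..."},     # backup
    ],
}
```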

Is Portkey free?

The gateway is open-source under Apache 2.0 and free to self-host. Portkey also offers a managed cloud service with additional features like analytics dashboards and team management.

Does Portkey add latency?

The gateway adds sub-1ms latency overhead for routing. Guardrails and caching may add additional time depending on configuration. For most applications, the overhead is negligible compared to LLM inference time.


Source & Thanks

Portkey-AI/gateway — 11,100+ GitHub stars
