# LiteLLM — Universal LLM API Gateway, 100+ Providers

> Unified API proxy for 100+ LLM providers including OpenAI, Anthropic, Bedrock, Azure, and Vertex AI. Drop-in OpenAI replacement with load balancing and spend tracking. 18,000+ GitHub stars.

## Install

```bash
pip install litellm
```

## Quick Use

1. Install: `pip install litellm`
2. Set an API key: `export ANTHROPIC_API_KEY=sk-ant-...`
3. Call any model with the same interface:

```python
from litellm import completion

# OpenAI
response = completion(model="gpt-4", messages=[{"role": "user", "content": "Hello"}])

# Anthropic — same interface!
response = completion(model="claude-sonnet-4-20250514", messages=[{"role": "user", "content": "Hello"}])

# Any of 100+ providers
response = completion(model="bedrock/anthropic.claude-3", messages=[...])
```

---

## Intro

LiteLLM is a unified Python SDK and proxy server (18,000+ GitHub stars) that exposes a single OpenAI-compatible API for 100+ LLM providers. Switch between OpenAI, Anthropic, Azure, Bedrock, Vertex AI, Cohere, Replicate, and dozens more without changing code. Best for teams running multi-model strategies or building LLM apps that need provider flexibility.

Works with: any OpenAI-compatible client, LangChain, LlamaIndex. Setup time: under 1 minute.

---

## Key Capabilities

### Drop-In OpenAI Replacement

```python
import litellm

litellm.completion(model="gpt-4o", messages=[...])
litellm.completion(model="claude-sonnet-4-20250514", messages=[...])
litellm.completion(model="gemini/gemini-pro", messages=[...])
litellm.completion(model="bedrock/meta.llama3-70b", messages=[...])
```

### Proxy Server (Gateway Mode)

```bash
litellm --model gpt-4 --port 8000
```

Point any OpenAI SDK client at `http://localhost:8000` — works with Cursor, Continue, and other AI tools.
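Because the proxy speaks the standard OpenAI chat-completions wire format, any HTTP client can talk to it, not just the OpenAI SDK. A minimal sketch using only the Python standard library (the `/v1/chat/completions` path follows the OpenAI API spec, and the placeholder bearer token assumes the proxy itself holds the real provider keys):

```python
import json
import urllib.request

# The LiteLLM proxy exposes the standard OpenAI chat-completions endpoint.
url = "http://localhost:8000/v1/chat/completions"

# This payload is the same shape any OpenAI-compatible client would send.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer anything",  # placeholder; proxy holds real keys
    },
)
# With the proxy running, send the request:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
```

The same request works against any tool that expects an OpenAI-style backend, which is why editors like Cursor and Continue can point at the proxy unchanged.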
### Load Balancing & Fallbacks

```python
from litellm import Router

# Two deployments share the alias "gpt-4"; the Router load-balances
# between them and fails over automatically.
router = Router(model_list=[
    {"model_name": "gpt-4", "litellm_params": {"model": "azure/gpt-4", "api_key": "..."}},
    {"model_name": "gpt-4", "litellm_params": {"model": "openai/gpt-4", "api_key": "..."}},
])

response = router.completion(model="gpt-4", messages=[...])  # auto-failover
```

### Spend Tracking & Budget Controls

Track token usage and costs per user, team, or API key with built-in PostgreSQL logging.

### Key Stats

- 18,000+ GitHub stars
- 100+ supported LLM providers
- 500+ supported models
- OpenAI-compatible API spec
- Used by 5,000+ companies

### FAQ

**Q: What is LiteLLM?**
A: LiteLLM is a unified Python SDK and proxy server that lets you call 100+ LLM providers through a single OpenAI-compatible API.

**Q: Is LiteLLM free?**
A: Yes, the open-source version is free under the MIT license. Enterprise features are available through BerriAI.

**Q: Can LiteLLM replace my OpenAI SDK calls?**
A: Yes, it is a drop-in replacement. Change one import and switch between providers without modifying code.

---

## Source & Thanks

> Created by [BerriAI](https://github.com/BerriAI). Licensed under MIT.
>
> [litellm](https://github.com/BerriAI/litellm) — ⭐ 18,000+

Thanks to the BerriAI team for making multi-model LLM development painless.

---

Source: https://tokrepo.com/en/workflows/b414da1d-84e5-45c5-9cc5-a15b2e2de01a
Author: AI Open Source