# LiteLLM Cost Tracking — Per-Project LLM Spend Dashboard

> LiteLLM ships a built-in cost dashboard. Track LLM spend by project, user, model, and tag. Hard budgets that block at the proxy. SOC2 / SSO via the Pro tier.

## Install

Copy the content below into your project:

## Quick Use

1. Have LiteLLM Proxy running with a Postgres `database_url`
2. Visit `http://localhost:4000/ui` and log in with `UI_USERNAME` / `UI_PASSWORD`
3. Generate a team + user with budgets via the API snippets below

---

## Intro

LiteLLM's cost-tracking layer attributes every LLM call to a project, user, and tag, then surfaces it in a built-in dashboard. Set hard budgets per team — when the budget is hit, the proxy returns 429 instead of forwarding the request.

Best for: any team where "who's burning our LLM budget?" is a recurring question.
Works with: LiteLLM Proxy (self-hosted), LiteLLM Cloud (managed).
Setup time: 5 minutes (Postgres + Redis + `.env`).

---

### Enable in proxy config

```yaml
# config.yaml
general_settings:
  master_key: sk-master
  database_url: postgresql://litellm:pass@db:5432/litellm
  store_model_in_db: true
  spend_logs_max_age: "90d"  # auto-prune

litellm_settings:
  callbacks: ["langfuse", "prometheus"]  # optional, ship to your observability stack
```

```bash
docker compose up -d  # spins up proxy + Postgres
```

### Generate keys with budgets

```bash
# Per-team
curl -X POST http://localhost:4000/team/new \
  -H "Authorization: Bearer sk-master" \
  -d '{"team_alias": "frontend-team", "max_budget": 1000, "budget_duration": "30d"}'

# Per-user (within a team)
curl -X POST http://localhost:4000/user/new \
  -H "Authorization: Bearer sk-master" \
  -d '{"user_id": "alice@acme.com", "team_id": "frontend-team", "max_budget": 50}'

# Generate a key for that user
curl -X POST http://localhost:4000/key/generate \
  -H "Authorization: Bearer sk-master" \
  -d '{"user_id": "alice@acme.com", "max_budget": 50}'
```

When `alice@acme.com` hits $50, all subsequent calls return 429.
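The hard-budget behaviour above is plain accounting: the proxy tracks cumulative spend per key and answers 429 once `max_budget` is crossed, without forwarding the request. A toy sketch of that logic (not LiteLLM's actual implementation; class and method names are illustrative):

```python
# Toy sketch of a per-key hard-budget gate, mirroring the proxy's behaviour:
# accumulate spend, and reject with 429 once the cap is crossed.

class BudgetGate:
    def __init__(self, max_budget: float):
        self.max_budget = max_budget
        self.spend = 0.0

    def charge(self, cost: float) -> int:
        """Record one call's cost; return the HTTP status the proxy would send."""
        if self.spend >= self.max_budget:
            return 429  # budget exhausted: block instead of forwarding
        self.spend += cost
        return 200

gate = BudgetGate(max_budget=50.0)  # alice's $50 cap from /user/new
statuses = [gate.charge(20.0) for _ in range(4)]
print(statuses)  # → [200, 200, 200, 429]
```

Note the gate blocks *subsequent* calls after the cap is crossed; the call that overshoots still goes through, which matches "all subsequent calls return 429" above.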
### Tag every request

```python
client.chat.completions.create(
    model="claude-3-5-sonnet",
    messages=[...],
    extra_body={
        "tags": ["feature:onboarding", "env:prod", "user-tier:enterprise"],
    },
)
```

The dashboard then shows spend grouped by any tag combination — "How much did onboarding cost last week?" "What's enterprise vs. free spend?"

### Built-in dashboard

Visit `http://localhost:4000/ui` (default credentials from `UI_USERNAME` / `UI_PASSWORD`). Tabs:

- **Spend** — by team, user, model, tag, date
- **Keys** — generate, rotate, revoke
- **Models** — health status, RPM/TPM consumed
- **Logs** — every call with prompt + response (configurable retention)

### Export to your warehouse

```yaml
litellm_settings:
  callbacks: ["s3"]
  s3_callback_params:
    s3_bucket_name: my-llm-logs
    s3_region_name: us-east-1
```

Each call lands as a JSON line in S3 — query it with Athena / DuckDB.

---

### FAQ

**Q: Does this require LiteLLM Pro?**
A: No — cost tracking, the dashboard, per-team budgets, and S3 export are all in the open-source proxy. Pro adds SOC2 attestation, SSO/SAML, and managed hosting.

**Q: How accurate is the cost tracking?**
A: LiteLLM uses each provider's official token counts plus a per-model pricing table to calculate cost per call. For models without published prices (e.g. local Ollama), set `input_cost_per_token` / `output_cost_per_token` in config.yaml, or the call is logged as $0.

**Q: Can I block PII before sending to providers?**
A: Yes — LiteLLM has a Guardrails layer (regex-based, or via Presidio / Lakera) that runs before the request goes upstream. Combined with cost tracking, you can also block specific tags from reaching specific providers.

---

## Source & Thanks

> Built by [BerriAI](https://github.com/BerriAI). Licensed under MIT.
>
> [BerriAI/litellm — Spend Tracking](https://docs.litellm.ai/docs/proxy/cost_tracking) — ⭐ 17,000+

---
Source: https://tokrepo.com/en/workflows/litellm-cost-tracking-per-project-llm-spend-dashboard
Author: LiteLLM (BerriAI)