# TokenCost — LLM Price Calculator for 400+ Models

> Client-side token counting and USD cost estimation for 400+ LLMs. 3 lines of Python to track prompt and completion costs. Supports OpenAI, Anthropic, Mistral, AWS Bedrock. MIT, 2K+ stars.

## Install

```bash
pip install tokencost
```

## Quick Use

Calculate costs in 3 lines:

```python
from tokencost import calculate_prompt_cost, calculate_completion_cost

cost = calculate_prompt_cost("Hello world", "gpt-4o")
print(f"Prompt cost: ${cost}")
```

---

## Intro

TokenCost is a client-side token counting and price-estimation library covering 400+ LLMs, with 2,000+ GitHub stars. It estimates the USD cost of prompts and completions using tiktoken-based token counts, supporting OpenAI, Anthropic Claude, Google Gemini, Mistral, DeepSeek, Groq, and AWS Bedrock models. It is aimed at AI agent developers who need to track and optimize API spending.

Setup: `pip install tokencost`, then 3 lines of code to get cost estimates. See also: [AI developer scripts on TokRepo](https://tokrepo.com/en/@Script%20Depot).

---

## TokenCost — Know What Your AI Costs

### The Problem

LLM API costs add up fast, and every model is priced differently. Tracking costs across multiple providers and models is a pain — you often don't know the bill until it arrives.

### The Solution

TokenCost gives you real-time, client-side cost estimation before and after API calls. No server needed, and no API keys required for cost calculation.

### Supported Providers (400+ Models)

| Provider | Models |
|----------|--------|
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo, o1, o3, etc. |
| Anthropic | Claude Opus, Sonnet, Haiku (all versions) |
| Google | Gemini Pro, Flash, Ultra |
| Mistral | Mistral Large, Medium, Small |
| DeepSeek | DeepSeek Chat, Coder |
| Groq | Llama, Mixtral on Groq |
| AWS Bedrock | All Bedrock model pricing |

### Usage Examples

```python
from tokencost import calculate_prompt_cost, calculate_completion_cost

# Simple string cost
model = "claude-sonnet-4-20250514"
prompt_cost = calculate_prompt_cost("Explain quantum computing", model)
print(f"Prompt: ${prompt_cost}")

# Chat message format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about coding"},
]
cost = calculate_prompt_cost(messages, "gpt-4o")
print(f"Conversation cost: ${cost}")

# Completion cost
completion = (
    "Here is a haiku about coding:\n"
    "Lines of logic flow\n"
    "Bugs hide in the shadows deep\n"
    "Tests bring peace of mind"
)
comp_cost = calculate_completion_cost(completion, "gpt-4o")
print(f"Completion: ${comp_cost}")

# Total cost
total = calculate_prompt_cost(messages, "gpt-4o") + calculate_completion_cost(completion, "gpt-4o")
print(f"Total: ${total}")
```

### Integration with AI Agents

```python
# Track agent costs over time
from tokencost import calculate_prompt_cost, calculate_completion_cost

total_cost = 0.0

def track_cost(prompt, completion, model):
    """Add one prompt/completion pair's estimated cost to the running total."""
    global total_cost
    cost = (calculate_prompt_cost(prompt, model)
            + calculate_completion_cost(completion, model))
    total_cost += cost
    return cost

# After each agent step (user_msg and agent_response come from your agent loop)
step_cost = track_cost(user_msg, agent_response, "claude-sonnet-4-20250514")
print(f"Step cost: ${step_cost:.6f} | Running total: ${total_cost:.4f}")
```

### FAQ

**Q: What is TokenCost?**
A: A Python library for client-side token counting and USD cost estimation across 400+ LLMs from OpenAI, Anthropic, Google, Mistral, and more.

**Q: Is TokenCost free?**
A: Yes, fully open-source under the MIT license. No API keys needed for cost calculation.
**Q: How accurate is the cost estimation?**
A: TokenCost uses tiktoken for token counting and regularly updated pricing data. Costs are estimates — actual billing may vary slightly due to provider-specific tokenization differences.

---

## Source & Thanks

> Created by [AgentOps-AI](https://github.com/AgentOps-AI). Licensed under MIT.
>
> [tokencost](https://github.com/AgentOps-AI/tokencost) — ⭐ 2,000+

Thanks to the AgentOps team for making LLM cost tracking simple and accessible.

---

Source: https://tokrepo.com/en/workflows/43b26691-33ce-11f1-9bc6-00163e2b0d79
Author: Script Depot
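
The arithmetic behind the estimates above is simply tokens × per-token price for the input and output sides. Here is a minimal, self-contained sketch of that calculation — the model name and dollar figures are hypothetical placeholders, whereas TokenCost itself ships a regularly updated pricing table keyed by real model names:

```python
# Hypothetical per-million-token prices in USD (TokenCost maintains
# the real table; these numbers are illustrative only).
PRICES = {
    "example-model": {"input": 2.50, "output": 10.00},  # $ per 1M tokens
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost: tokens times per-token price, summed over both sides."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

# 1,000 prompt tokens at $2.50/1M plus 500 completion tokens at $10.00/1M
cost = estimate_cost("example-model", prompt_tokens=1_000, completion_tokens=500)
print(f"${cost:.6f}")  # → $0.007500
```

What a library like TokenCost adds on top of this arithmetic is the token counting itself (via tiktoken) and the per-model price data, so you never hand-maintain a table like `PRICES`.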