Introduction
TokenCost is a client-side token counting and dollar-cost estimation library covering 400+ LLM models, with 2,000+ GitHub stars. It counts tokens with tiktoken and applies per-model pricing to estimate prompt and completion costs across OpenAI, Anthropic Claude, Google Gemini, Mistral, DeepSeek, Groq, and AWS Bedrock. It is ideal for AI agent developers who need to track and optimize API spend.
TokenCost — Calculate Your AI Spend Precisely
Supported Providers (400+ models)
| Provider | Models |
|---|---|
| OpenAI | GPT-4o, GPT-4, o1, o3, and more |
| Anthropic | Claude Opus, Sonnet, Haiku |
| Google | Gemini Pro, Flash, Ultra |
| Mistral | Large, Medium, Small |
| DeepSeek | Chat, Coder |
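The full model list, with per-token prices, ships with the library. Below is a minimal sketch for inspecting it, assuming the `TOKEN_COSTS` pricing dictionary that tokencost exports; the field names follow the current release and may change between versions.

```python
from tokencost import TOKEN_COSTS  # per-model pricing table bundled with the library

# Count how many models have pricing data.
print(f"{len(TOKEN_COSTS)} models with pricing data")

# Look up per-token prices for one model.
# "input_cost_per_token" / "output_cost_per_token" are the field names
# used at the time of writing; verify against your installed version.
gpt4o = TOKEN_COSTS["gpt-4o"]
print(f"gpt-4o input:  ${gpt4o['input_cost_per_token']}/token")
print(f"gpt-4o output: ${gpt4o['output_cost_per_token']}/token")
```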
Usage Example
```python
from tokencost import calculate_prompt_cost, calculate_completion_cost

# Chat message format
messages = [
    {"role": "user", "content": "Write a haiku about programming"}
]
prompt_cost = calculate_prompt_cost(messages, "gpt-4o")

# Completion cost is calculated from the model's response text.
completion = "Code flows like water\nBugs surface, then sink down\nTests bring quiet peace"
completion_cost = calculate_completion_cost(completion, "gpt-4o")

print(f"Conversation cost: ${prompt_cost + completion_cost}")
```
FAQ

Q: What is TokenCost?
A: A Python library for client-side token counting and dollar-cost estimation across 400+ LLM models.

Q: Is it free?
A: Yes. It is free and open source under the MIT license, and no API key is required to calculate costs.
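Tracking Spend Across a Session

Per-call costs compose naturally into a running total for agent workloads that make many calls. A minimal sketch, assuming the `calculate_*` functions return numeric `Decimal` costs as in current releases; the conversation log and loop are illustrative, not part of the library.

```python
from decimal import Decimal
from tokencost import calculate_prompt_cost, calculate_completion_cost

# Illustrative conversation log: (messages, model_response) pairs.
turns = [
    ([{"role": "user", "content": "Summarize this changelog"}], "Here is a summary..."),
    ([{"role": "user", "content": "Now translate it to French"}], "Voici un résumé..."),
]

# Accumulate prompt and completion costs over every turn.
total = Decimal("0")
for messages, response in turns:
    total += calculate_prompt_cost(messages, "gpt-4o")
    total += calculate_completion_cost(response, "gpt-4o")

print(f"Session total: ${total}")
```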