Scripts · April 9, 2026 · 1 min read

TokenCost — LLM Price Calculator for 400+ Models

Client-side token counting and USD cost estimation for 400+ LLMs. 3 lines of Python to track prompt and completion costs. Supports OpenAI, Anthropic, Mistral, AWS Bedrock. MIT, 2K+ stars.

Introduction

TokenCost is a client-side token counting and dollar-cost estimation library supporting 400+ LLM models, with 2,000+ GitHub stars. It uses tiktoken to calculate precise prompt and completion costs across OpenAI, Anthropic Claude, Google Gemini, Mistral, DeepSeek, Groq, and AWS Bedrock. Ideal for AI agent developers who need to track and optimize API spend.
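The core idea behind this kind of client-side estimation is simple: count the tokens locally (tokencost uses tiktoken for this), then multiply by a per-token price from a static pricing table. A minimal sketch of that arithmetic, using an illustrative model name and placeholder prices rather than tokencost's real pricing data:

```python
# Sketch of tokens-times-price cost estimation, the idea behind
# tokencost-style libraries. "example-model" and its prices are
# illustrative placeholders, not live rates.
PRICES_PER_1M_TOKENS = {
    # model: (input USD per 1M tokens, output USD per 1M tokens)
    "example-model": (2.50, 10.00),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    input_price, output_price = PRICES_PER_1M_TOKENS[model]
    return (prompt_tokens * input_price
            + completion_tokens * output_price) / 1_000_000

print(f"${estimate_cost('example-model', 12, 30):.6f}")  # → $0.000330
```

Because both the tokenizer and the price table live on the client, no network call (and no API key) is needed to produce an estimate.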


TokenCost — Calculate Your AI Spend Precisely

Supported Providers (400+ models)

Provider   | Models
OpenAI     | GPT-4o, GPT-4, o1, o3, and more
Anthropic  | Claude Opus, Sonnet, Haiku
Google     | Gemini Pro, Flash, Ultra
Mistral    | Large, Medium, Small
DeepSeek   | Chat, Coder

Usage Example

from tokencost import calculate_prompt_cost, calculate_completion_cost

# Chat message format
messages = [
    {"role": "user", "content": "Write a haiku about programming"}
]
completion = "Code compiles at dawn..."

prompt_cost = calculate_prompt_cost(messages, "gpt-4o")
completion_cost = calculate_completion_cost(completion, "gpt-4o")
print(f"Conversation cost: ${prompt_cost + completion_cost}")

FAQ

Q: What is TokenCost? A: A Python library for client-side token counting and dollar-cost estimation across 400+ LLM models.

Q: Is it free? A: Completely free and open source under MIT. No API key required to calculate costs.


Sources & Acknowledgements

Created by AgentOps-AI. Licensed under MIT.

tokencost — ⭐ 2,000+
