Tokscale brings visibility to the often-opaque world of AI coding agent costs. As developers adopt tools like Claude Code, Codex, Gemini CLI, and Cursor, token consumption can escalate quickly without clear insight into where the spend is going. Tokscale reads the JSONL conversation logs these tools produce, breaks down usage into input, output, cache read, cache write, and reasoning tokens, and computes precise costs from LiteLLM pricing data.
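The core aggregation loop can be sketched as follows. This is a minimal illustration, not Tokscale's actual implementation: the `usage` field names and the per-million-token prices are assumptions stood in for the real log schemas and LiteLLM pricing tables.

```python
import json

# Hypothetical per-million-token USD prices; Tokscale sources real
# rates from LiteLLM pricing data, keyed by model.
PRICES = {
    "input": 3.00,
    "output": 15.00,
    "cache_read": 0.30,
    "cache_write": 3.75,
    "reasoning": 15.00,
}

def summarize(jsonl_lines):
    """Aggregate per-category token counts and compute total cost in USD.

    Assumes each JSONL record carries a `usage` object with per-category
    token counts -- field names here are illustrative, not a real schema.
    """
    totals = {category: 0 for category in PRICES}
    for line in jsonl_lines:
        usage = json.loads(line).get("usage", {})
        for category in totals:
            totals[category] += usage.get(category, 0)
    cost = sum(totals[c] * PRICES[c] / 1_000_000 for c in PRICES)
    return totals, cost

# Usage with two synthetic log records:
logs = [
    '{"usage": {"input": 1200, "output": 300, "cache_read": 5000}}',
    '{"usage": {"input": 800, "output": 450, "reasoning": 200}}',
]
totals, cost = summarize(logs)
```

Because each JSONL line is an independent record, the aggregation is a single streaming pass, which is what makes a fast native core pay off on large log directories.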
The tool is built with a native Rust core that delivers processing speeds roughly ten times faster than pure JavaScript alternatives, making it practical even for developers with months of accumulated conversation logs. A web-based visualization layer renders interactive contribution graphs in both 2D and 3D, along with filterable dashboards that help identify the most expensive sessions, models, and time periods. JSON export supports integration with spreadsheets or BI tools for deeper analysis.
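To show how the JSON export might feed a spreadsheet or BI pipeline, here is a hedged sketch that flattens a summary export into CSV. The export's record shape (`session`, `model`, token counts, `cost_usd`) is an assumption for illustration, not Tokscale's documented format.

```python
import csv
import io
import json

# Hypothetical shape of a per-session JSON export; field names are
# illustrative, not Tokscale's actual export schema.
export = json.loads("""
[
  {"session": "2024-06-01-a", "model": "claude-sonnet",
   "input_tokens": 52000, "output_tokens": 8100, "cost_usd": 0.28},
  {"session": "2024-06-02-b", "model": "gpt-4o",
   "input_tokens": 31000, "output_tokens": 4400, "cost_usd": 0.17}
]
""")

# Flatten the records to CSV so they drop straight into a
# spreadsheet or BI import dialog.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(export[0].keys()))
writer.writeheader()
writer.writerows(export)
csv_text = buf.getvalue()
```

A flat row-per-session layout like this makes pivoting by model or date trivial once the data is in a spreadsheet.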
Beyond individual tracking, Tokscale includes community features like a global leaderboard where developers can compare token usage and a profile system with contribution statistics. The tool supports flexible time-period filtering covering all-time, monthly, and weekly views. For teams and individuals looking to optimize their AI tooling budgets, Tokscale provides the data foundation needed to make informed decisions about which models and agents deliver the best value for their workflows.
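The all-time, monthly, and weekly views described above amount to bucketing records by a period key. A minimal sketch of that grouping, with synthetic `(date, tokens)` records standing in for real session data:

```python
from collections import defaultdict
from datetime import date

def bucket_by(records, period):
    """Group token counts into all-time, monthly, or weekly buckets.

    `records` is a list of (date, token_count) pairs -- an illustrative
    stand-in for real per-session data, not Tokscale's internal model.
    """
    totals = defaultdict(int)
    for day, tokens in records:
        if period == "all":
            key = "all-time"
        elif period == "monthly":
            key = day.strftime("%Y-%m")
        else:  # weekly: ISO year and week number
            iso = day.isocalendar()
            key = f"{iso[0]}-W{iso[1]:02d}"
        totals[key] += tokens
    return dict(totals)

# Usage with three synthetic sessions:
records = [
    (date(2024, 6, 3), 5000),
    (date(2024, 6, 4), 7000),
    (date(2024, 6, 12), 2000),
]
monthly = bucket_by(records, "monthly")
weekly = bucket_by(records, "weekly")
```

ISO week keys keep weekly buckets unambiguous across year boundaries, which matters for an all-time contribution graph.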