OpenLLMetry extends the OpenTelemetry standard to LLM and AI agent applications, providing automatic instrumentation that captures detailed traces of every model call. Each trace includes latency measurements, token counts, cost calculations, model parameters, and error information. By building on the established OpenTelemetry framework, the library ensures compatibility with the entire observability ecosystem without vendor lock-in.
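To make the per-call data concrete, here is an illustrative, self-contained sketch (not OpenLLMetry's internal code) of the kind of attributes a single LLM span might carry. The `gen_ai.*` attribute names follow the OpenTelemetry GenAI semantic conventions; the `llm.latency_ms` and `llm.cost_usd` keys, the helper function, and the per-token prices are assumptions for illustration only.

```python
import time

# Assumed per-1K-token prices, for illustration only -- real cost
# calculation depends on the provider and model.
PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def record_llm_span(model, input_tokens, output_tokens, start, end, error=None):
    """Build a dict of span attributes for one model call (sketch)."""
    return {
        # Attribute names from the OTel GenAI semantic conventions:
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        # Illustrative keys (not part of the conventions):
        "llm.latency_ms": round((end - start) * 1000, 2),
        "llm.cost_usd": (
            input_tokens / 1000 * PRICE_PER_1K["input"]
            + output_tokens / 1000 * PRICE_PER_1K["output"]
        ),
        "error": error,
    }

start = time.time()
span = record_llm_span("gpt-4o", 120, 45, start, start + 0.8)
print(span["llm.latency_ms"])  # latency of the simulated call, in ms
```

In practice the instrumentation attaches these attributes automatically to OpenTelemetry spans; the sketch only shows the shape of the data that ends up in a trace.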
Getting started requires minimal code changes: a few lines of initialization in a Node.js or Python application automatically instrument calls to OpenAI, Anthropic, Cohere, and dozens of other LLM providers. The data exports to any OpenTelemetry-compatible collector, so teams can monitor AI workloads with existing observability tools such as Grafana, Datadog, New Relic, or Jaeger alongside traditional application metrics.
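The Python initialization looks roughly like the following sketch; check the OpenLLMetry documentation for the current API, and note that the `app_name` value here is an illustrative placeholder:

```python
# Sketch of typical OpenLLMetry setup in Python (requires the
# traceloop-sdk package). Calling init() configures OpenTelemetry and
# auto-instruments whichever supported LLM client libraries are installed.
from traceloop.sdk import Traceloop

Traceloop.init(app_name="my-llm-service")  # app_name is a placeholder

# From here on, calls made through instrumented clients (OpenAI,
# Anthropic, Cohere, ...) are traced without further code changes.
```

Where the traces go is typically controlled by standard OpenTelemetry exporter environment variables (for example, an OTLP endpoint pointing at your collector), so no application code needs to change to switch backends.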
With 7,000+ stars and 907 forks on GitHub, OpenLLMetry is the most widely adopted open-source LLM observability library. The Apache 2.0 licensed project maintains a rapid release cycle, shipping multiple updates per month that track changes to the OpenTelemetry GenAI semantic conventions. The npm package for OpenAI instrumentation sees approximately 94,000 weekly downloads. Traceloop also offers a cloud platform for teams that want managed observability without self-hosting infrastructure.