OpenLLMetry is the open-source standard for LLM observability built on top of OpenTelemetry. Created by Traceloop (co-founded by Nir Gazit, former Google ML engineer and Fiverr chief architect, and Gal Kleinman, Fiverr ML group leader), the project extends the CNCF's OpenTelemetry protocol with AI-specific instrumentations for LLM providers, vector databases, and agent frameworks. With 7,000+ GitHub stars and Apache 2.0 licensing, it has become the go-to choice for teams that want LLM observability without vendor lock-in. Traceloop raised a $6.1 million seed round backed by Y Combinator, Samsung NEXT, and Grand Ventures.
The core insight behind OpenLLMetry is elegant: agent execution is structurally similar to a distributed trace. Each step in an LLM pipeline — prompt construction, retrieval, API call, response processing — maps naturally to spans in a trace. By building on OpenTelemetry rather than inventing a proprietary protocol, OpenLLMetry lets you plug LLM observability into your existing monitoring stack. If you already use Datadog, New Relic, Sentry, Honeycomb, Grafana, or any OpenTelemetry-compatible backend, OpenLLMetry sends data there with no additional platform required.
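That mapping can be sketched with a minimal, stdlib-only span model. This is illustrative only — not the OpenTelemetry SDK or OpenLLMetry's API — and the step names are hypothetical stand-ins for the pipeline stages described above:

```python
import time
from contextlib import contextmanager

# A toy span recorder -- just a sketch of how pipeline steps
# nest into a trace, not the real OpenTelemetry SDK.
class Trace:
    def __init__(self):
        self.spans = []   # (name, depth, duration_seconds)
        self._depth = 0

    @contextmanager
    def span(self, name):
        self._depth += 1
        start = time.perf_counter()
        try:
            yield
        finally:
            self._depth -= 1
            self.spans.append((name, self._depth, time.perf_counter() - start))

trace = Trace()
with trace.span("chat_request"):           # root span: one user request
    with trace.span("prompt_construction"):
        pass                               # build the prompt from templates
    with trace.span("retrieval"):
        pass                               # query a vector store for context
    with trace.span("llm_api_call"):
        pass                               # call the model provider
    with trace.span("response_processing"):
        pass                               # parse / validate the output

# Child spans close before the root span, exactly as in a distributed trace.
print([name for name, depth, _ in trace.spans])
```

Each child span carries its own latency, so a slow request decomposes immediately into "retrieval was slow" versus "the model was slow" — the same diagnostic power a distributed trace gives a microservice call chain.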
Setup takes just two lines of code: import the SDK and call Traceloop.init() with your app name, and all LLM calls are automatically instrumented. The SDK also provides decorators for marking workflows, tasks, and agents, giving you structured traces that show exactly how each request flows through your LLM application. SDKs are available for Python, TypeScript, Go, and Ruby, covering the primary languages used in LLM application development. Because instrumentation is non-intrusive, you can add observability to an existing application without restructuring your code.
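A sketch of that setup in Python, assuming the `traceloop-sdk` package is installed and an export destination (Traceloop or any OTLP endpoint) is configured; the app name and the `answer_question` function are hypothetical:

```python
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

# One-time setup: auto-instruments supported LLM and vector-DB clients.
Traceloop.init(app_name="support-bot")

@workflow(name="answer_question")
def answer_question(question: str) -> str:
    # Any instrumented LLM call made here (OpenAI, Anthropic, ...) is
    # recorded as a child span of the "answer_question" workflow span.
    ...
```

The decorator adds a named parent span, so related LLM calls group under one logical unit instead of appearing as disconnected API calls.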
Provider coverage is comprehensive. OpenLLMetry instruments calls to OpenAI, Anthropic, Cohere, Google Gemini, AWS Bedrock, Ollama, and more than 20 other LLM providers. Vector database instrumentations cover Pinecone, Chroma, Weaviate, and others. Framework support includes LangChain, LlamaIndex, CrewAI, Haystack, and additional agent orchestration tools. Each instrumentation automatically captures prompts, responses, token usage, latency, model parameters, and error details — the complete set of signals needed to debug and optimize LLM applications.
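To make those signals concrete, here is an illustrative set of span attributes for a single chat-completion call. The keys are modeled on the OpenTelemetry GenAI semantic conventions; actual OpenLLMetry attribute names can vary by instrumentation and version, so treat these as examples rather than an exact schema:

```python
# Hypothetical span attributes captured for one chat-completion call.
# Keys follow the spirit of the OpenTelemetry GenAI semantic conventions.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.request.temperature": 0.2,
    "gen_ai.usage.input_tokens": 512,
    "gen_ai.usage.output_tokens": 128,
}

# Token counts recorded on the span let any backend compute per-request cost.
total_tokens = (span_attributes["gen_ai.usage.input_tokens"]
                + span_attributes["gen_ai.usage.output_tokens"])
print(total_tokens)  # 640
```

Because these land as ordinary span attributes, any OpenTelemetry backend can filter, aggregate, and alert on them without knowing anything LLM-specific.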
All three observability signals are supported: traces (enabled by default), metrics, and logs (which can be enabled with a single parameter). This goes beyond what most LLM-specific tools provide, which typically only capture traces. Having metrics and logs in the same OpenTelemetry pipeline means you can correlate LLM behavior with system-level performance, create dashboards that combine token costs with infrastructure metrics, and set alerts based on any combination of signals.
The vendor-neutral approach is OpenLLMetry's greatest strategic advantage. Proprietary LLM observability platforms like LangSmith or Galileo require using their SDK, their platform, and their data format. If you want to switch providers or consolidate monitoring, you face a migration. OpenLLMetry uses the OpenTelemetry protocol, which means your observability data flows through standard collectors and can be routed to any compatible backend. You can send the same traces to multiple destinations simultaneously or switch backends without changing instrumentation code.
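The fan-out pattern follows from standard OpenTelemetry Collector configuration: list several exporters in one pipeline and the same traces go to each. The endpoints below are placeholders, not real backends:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  # Placeholder endpoints -- substitute your actual backends.
  otlphttp/primary:
    endpoint: https://otel.backend-a.example.com
  otlphttp/secondary:
    endpoint: https://otel.backend-b.example.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/primary, otlphttp/secondary]
```

Switching backends is then a Collector config change — the application's instrumentation code never needs to know where its telemetry ends up.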