What Pydantic Logfire Does
Pydantic Logfire is an OpenTelemetry-native observability platform built by the team behind Pydantic and Pydantic AI. It collects structured logs, distributed traces, and LLM-aware spans from Python services and renders them in a single dashboard tuned for Python ergonomics — async stack frames, Pydantic model rendering, FastAPI request lifecycles, and agent tool-call chains. The pitch is that you should not need to assemble four separate vendors to know what your LLM application is doing in production, and that the assembly cost is exactly what stops most small teams from shipping observability at all.
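The basic shape of that instrumentation is small. A minimal sketch, assuming the `logfire` package is installed; `send_to_logfire=False` keeps it runnable without an account or write token:

```python
import logfire

# Configure the SDK once at startup. send_to_logfire=False disables the
# network export so the snippet works without a Logfire project token.
logfire.configure(send_to_logfire=False)

# Structured log: user_id is captured as a queryable attribute,
# not flattened into the message string.
logfire.info("user {user_id} signed in", user_id=42)

# Spans nest to form a trace; keyword arguments become span attributes
# that the dashboard can render and filter on.
with logfire.span("checkout", order_total=99.50):
    logfire.info("payment authorized")
```

Framework integrations (FastAPI request lifecycles, for example) follow the same pattern: one `logfire.instrument_*` call at startup rather than per-endpoint wiring.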
OpenTelemetry Foundation and Vendor Lock-in
Logfire is built on OpenTelemetry from the ground up, which means every trace and metric it ingests uses the OTEL data model and can be exported to other backends with a config change rather than a rewrite. In practice that lowers the migration cost from Logfire to Datadog, Honeycomb, or a self-hosted Jaeger setup if you outgrow the managed service or hit a pricing cliff. The instrumentation libraries Logfire ships are standard OTEL instrumentations — using them does not bind you to Pydantic's backend, and a team can adopt Logfire as the first observability stop knowing the data is portable.
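Because the SDK sits on the standard OTEL pipeline, pointing the same spans at a second backend is a configuration concern. A sketch of dual export, assuming the OTLP HTTP exporter package is installed and a collector (Jaeger here, as a placeholder) is listening on the default OTLP port:

```python
import logfire
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Any OTLP-compatible backend works here; localhost:4318 is the
# conventional OTLP/HTTP port for a local Jaeger or OTEL Collector.
second_backend = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")

logfire.configure(
    send_to_logfire=False,  # flip to True to keep the managed backend too
    additional_span_processors=[BatchSpanProcessor(second_backend)],
)

# This span now flows to every configured processor unchanged --
# the OTEL data model is the same regardless of destination.
with logfire.span("portable-span"):
    pass
```

Migrating off the managed service entirely means dropping the Logfire processor and keeping the exporter, not rewriting instrumentation.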
That said, OpenTelemetry-native does not mean OpenTelemetry-only. The most useful parts of Logfire are the renderings on top of OTEL data: collapsible Pydantic model views in spans, agent run timelines that group LLM calls and tool invocations, and the way it surfaces validation errors as first-class events rather than buried strings. Those features rely on Logfire-specific span attributes that other backends will see as opaque fields. Migration is realistic; one-for-one feature parity is not. Plan accordingly when choosing where to invest your dashboard time.
LLM and Agent Tracing in Practice
Where Logfire genuinely separates from generic APM tools is LLM span enrichment. Each model call captures the provider, model name, prompt and response payloads (with optional PII masking), token counts in and out, and cost. Multi-step agent runs collapse into a single trace with each tool call and sub-LLM call nested underneath, making it easy to see where latency or token spend concentrates. For Pydantic AI users the integration is automatic; for LangChain, OpenAI SDK, and Anthropic SDK users it is a single instrumentation call.
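That single call looks like this. A sketch assuming the `openai` and `anthropic` packages are installed and API keys are set in the environment; neither client is actually invoked here:

```python
import logfire
import openai
import anthropic

logfire.configure(send_to_logfire=False)

# Wrap the OpenAI client: every subsequent chat/completions call is
# recorded as a span carrying model, prompts, responses, and token counts.
openai_client = openai.OpenAI()
logfire.instrument_openai(openai_client)

# The Anthropic SDK gets the same one-call treatment.
anthropic_client = anthropic.Anthropic()
logfire.instrument_anthropic(anthropic_client)

# From here on, normal SDK usage is traced automatically -- no per-call
# logging code, e.g.:
# openai_client.chat.completions.create(model=..., messages=[...])
```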
The agent-trace view is the feature most likely to pay for itself within a week. Production failures in agentic systems usually come from a specific tool call returning an unexpected shape or a model picking the wrong branch in a multi-step chain. Logfire's trace timeline shows exactly which step diverged and what the upstream context looked like — the same investigation in raw logs would take an order of magnitude longer. PII masking is configurable per-attribute, which matters when traces will be shared with engineers who do not have data access by default.
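The masking side is driven by scrubbing configuration. A sketch using `ScrubbingOptions`, which extends the SDK's built-in redaction patterns (password, token, secret, and so on) with your own; the attribute names below are illustrative:

```python
import logfire

logfire.configure(
    send_to_logfire=False,
    # extra_patterns are regexes matched against attribute names and
    # values; matches are redacted before the span leaves the process.
    scrubbing=logfire.ScrubbingOptions(
        extra_patterns=["ssn", "date_of_birth"],
    ),
)

# The ssn value is replaced with a scrubbed marker in the exported span,
# so engineers reading shared traces never see the raw value.
logfire.info("created profile", ssn="123-45-6789")
```

Scrubbing happens in the SDK, not the backend, so redacted values never reach Logfire's servers at all.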