TruLens provides systematic evaluation for LLM applications through feedback functions — automated assessments that score model outputs on dimensions like relevance, groundedness, and harmlessness. The RAG Triad framework specifically targets retrieval-augmented generation: it measures whether retrieved context is relevant to the question, whether the answer is grounded in that context, and whether the answer actually addresses the question. These three metrics together catch the most common RAG failure modes.
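The three triad dimensions can be sketched as plain functions. This is a minimal, library-agnostic illustration of what each metric compares, using a toy word-overlap scorer in place of the LLM-based judges TruLens actually uses; the names `rag_triad` and `context_relevance` here are illustrative, not TruLens API.

```python
def _overlap(a: str, b: str) -> float:
    """Toy scorer: fraction of words in `a` also found in `b`.
    Stands in for an LLM judge; real feedback functions are model-based."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    return len(words_a & words_b) / len(words_a) if words_a else 0.0

def rag_triad(question: str, context: str, answer: str) -> dict:
    """Score the three RAG Triad dimensions on a 0..1 scale."""
    return {
        # Is the retrieved context relevant to the question?
        "context_relevance": _overlap(question, context),
        # Is the answer supported by the retrieved context?
        "groundedness": _overlap(answer, context),
        # Does the answer actually address the question?
        "answer_relevance": _overlap(question, answer),
    }

scores = rag_triad(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
    answer="The capital of France is Paris.",
)
```

A low score on any one dimension points to a distinct failure: poor retrieval, hallucination, or an off-topic answer, respectively.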
The tracking system records every LLM interaction with its inputs, outputs, feedback scores, and metadata, creating a searchable history of experiments. The dashboard visualizes score distributions, tracks metric trends over time, and helps identify which prompt versions or retrieval strategies perform best. The unified Metric API in v2.7 standardizes how evaluations are defined and composed across different application types.
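The shape of such a tracking store can be sketched with a simple record type and a queryable log. This is a hypothetical schema for illustration only, not TruLens's internal data model; the class and field names are assumptions.

```python
from dataclasses import dataclass, field
import time

@dataclass
class InteractionRecord:
    """One logged LLM call: inputs, outputs, feedback scores, metadata.
    Illustrative schema, not TruLens's actual record format."""
    app_version: str
    prompt: str
    response: str
    feedback: dict            # metric name -> score
    metadata: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class ExperimentLog:
    """Searchable history of records, e.g. to compare prompt versions."""
    def __init__(self) -> None:
        self.records: list = []

    def log(self, record: InteractionRecord) -> None:
        self.records.append(record)

    def mean_score(self, app_version: str, metric: str) -> float:
        scores = [r.feedback[metric] for r in self.records
                  if r.app_version == app_version and metric in r.feedback]
        return sum(scores) / len(scores) if scores else float("nan")

log = ExperimentLog()
log.log(InteractionRecord("prompt_v1", "Q1", "A1", {"groundedness": 0.6}))
log.log(InteractionRecord("prompt_v2", "Q1", "A1b", {"groundedness": 0.9}))
best = max(["prompt_v1", "prompt_v2"],
           key=lambda v: log.mean_score(v, "groundedness"))
```

Aggregating per-version means like this is the core of the "which prompt version performs best" comparison the dashboard surfaces.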
TruLens is MIT licensed with 3,200+ GitHub stars and 71+ contributors. The Snowflake partnership enables enterprise teams to run evaluations at scale within their existing data infrastructure. Compared with DeepEval (which focuses on pytest-style testing) or Ragas (which focuses on RAG-specific metrics), TruLens provides broader evaluation coverage with stronger experiment tracking and visualization.