Ell reframes prompt engineering from string manipulation to software engineering. Every prompt is a Python function decorated with @ell.simple or @ell.complex. The framework automatically versions each prompt by hashing its content and dependencies — when you change a prompt's wording, model, or any function it calls, Ell creates a new version and tracks the lineage. This gives you Git-like history for prompts without any manual versioning effort.
Ell Studio is a local web interface (similar to TensorBoard) that visualizes your prompt versions, their outputs, token usage, and performance over time. You can compare outputs across prompt versions side-by-side, trace which version produced which result, and understand how prompt changes affect quality. The studio reads from a local SQLite store, requiring no cloud service or external dependencies.
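Launching the studio against that same local store is a one-liner (a sketch assuming the studio extra is installed, e.g. via `pip install "ell-ai[all]"`, and that prompts were previously logged to `./logdir`):

```shell
# Serve the local SQLite store as a browser UI for browsing prompt
# versions, outputs, and token usage.
ell-studio --storage ./logdir
```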
The library supports structured outputs via Pydantic models, multi-modal prompts with image inputs, and tool calling. It works with OpenAI, Anthropic, and other providers through a unified interface. With 5,800+ GitHub stars and an MIT license, Ell fills a unique niche: while DSPy optimizes prompts algorithmically and BAML targets structured extraction, Ell centers on the human prompt engineering workflow — versioning, visualization, and iterative refinement.