DSPy is a declarative framework from Stanford University for programming language models rather than prompting them, enabling developers to build modular AI software using structured code instead of brittle prompt strings. It solves the fundamental challenge of prompt engineering by treating LLM interactions as programmable modules with defined input-output signatures, then using optimization algorithms to automatically compile these modules into effective prompts or fine-tuned weights. DSPy shifts the paradigm from manually crafting prompts to declaring what you want and letting the framework figure out how to achieve it through systematic optimization.
DSPy applications are built from three core components: language models, signatures that declare a program's inputs and outputs, and modules that encapsulate a prompting technique. The framework provides optimizers that automatically improve pipelines by tuning prompts, rewriting instructions, selecting few-shot examples, or fine-tuning model weights to maximize performance on a specified metric. DSPy supports building everything from simple classifiers to sophisticated RAG pipelines and agent loops, with composable modules that can be combined with different models, inference strategies, and learning algorithms for maximum flexibility.
DSPy is designed for AI researchers, machine learning engineers, and developers building LLM-powered applications who want to move beyond manual prompt engineering to systematic, reproducible optimization of their AI pipelines. It integrates with major model providers and can be used alongside other frameworks for retrieval, evaluation, and deployment. The framework is particularly valuable for teams working on production systems where prompt reliability and performance consistency are critical, as DSPy optimizers can automatically discover prompt configurations that often outperform hand-tuned alternatives.