Rig is an open-source Rust library for building scalable, modular, and ergonomic LLM-powered applications. Developed by 0xPlaygrounds, it fills the niche that LangChain and LlamaIndex occupy in Python, giving developers a unified interface over more than 20 model providers (OpenAI, Anthropic, Google, Mistral, DeepSeek, Cohere, Groq, local Ollama, and more) plus 10+ vector-store integrations such as MongoDB Atlas Vector Search, LanceDB, Qdrant, SQLite, and in-memory stores. Rig treats LLM completion and embedding workflows as first-class concepts, and its core library is fully WASM-compatible.
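To make the shape of the API concrete, here is a minimal completion sketch modeled on Rig's published quickstart. It assumes rig-core and tokio as dependencies and an OPENAI_API_KEY in the environment; the model name and preamble are illustrative.

```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The client reads OPENAI_API_KEY from the environment.
    let client = openai::Client::from_env();

    // Build an agent around a completion model (model name illustrative).
    let agent = client
        .agent("gpt-4o")
        .preamble("You are a concise assistant.")
        .build();

    // One-shot completion via the Prompt trait.
    let answer = agent.prompt("What is Rig, in one sentence?").await?;
    println!("{answer}");
    Ok(())
}
```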
The library is designed for production agentic workflows rather than hello-world demos. It exposes building blocks for multi-turn streaming, tool use, prompt pipelines, and retrieval-augmented generation, and its tracing follows the OpenTelemetry GenAI semantic conventions, so traces interoperate cleanly with OpenTelemetry-based observability stacks. Beyond text, Rig supports transcription, audio generation, and image generation, which lets teams stand up multimodal agents without bolting together three separate SDKs. Because everything runs over a common trait-based abstraction, swapping a model or vector store is a one-line change in most applications.
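To illustrate the trait-based abstraction behind that one-line-swap claim, here is a hedged sketch of a helper written against Rig's Prompt trait (the helper name summarize and the model id are my own, for illustration). The provider choice collapses into a single construction line; the call site never changes.

```rust
use rig::{
    completion::{Prompt, PromptError},
    providers::openai,
};

// Generic over Rig's `Prompt` trait, so any provider's agent can be
// passed in without changing this call site.
async fn summarize(model: impl Prompt, text: &str) -> Result<String, PromptError> {
    model.prompt(format!("Summarize in one sentence: {text}")).await
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The provider choice lives on this one line. Swapping to, e.g.,
    // `rig::providers::anthropic` changes only this construction
    // (some providers require extra builder parameters such as max_tokens).
    let agent = openai::Client::from_env().agent("gpt-4o").build();

    let summary = summarize(
        agent,
        "Rig gives Rust developers one interface over many LLM providers.",
    )
    .await?;
    println!("{summary}");
    Ok(())
}
```

The vector-store side follows the same pattern: retrieval runs through a store-agnostic trait, so moving from an in-memory index to a hosted store is likewise confined to construction code.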
Rig is MIT-licensed and already used in real products: St. Jude's genomics chatbot, Coral Protocol's Rust SDK, the VT Code terminal coding agent, the Con terminal emulator's agent harness, and the Dria decentralized AI network all cite Rig as their LLM abstraction layer. The project is evolving quickly, and the maintainers explicitly warn that breaking changes arrive alongside new features, so teams choosing Rig should track releases closely. For Rust-first engineering orgs building performance-sensitive agents, RAG services, or on-device AI where Python is not an option, Rig is the most complete and most actively developed choice in 2026.
