Smart LLM router that cuts inference costs by up to 70%
Manifest is an open-source model router that sends each LLM request to the cheapest model capable of handling it, reducing inference costs by up to 70% without sacrificing output quality. It uses a 23-dimension scoring algorithm to evaluate 300+ models across providers including OpenAI, Anthropic, Google, and DeepSeek, with automatic fallbacks and budget controls. Manifest can be deployed as a cloud service, local plugin, or self-hosted Docker container with transparent routing logic.