Manifest is an open-source intelligent routing layer for LLM API calls that automatically selects the most cost-effective model capable of handling each request. Rather than hardcoding model choices or building custom routing logic, developers route their API calls through Manifest and let its 23-dimension scoring algorithm match each request to the optimal model from a pool of over 300 options across OpenAI, Anthropic, Google, DeepSeek, and other providers. The system analyzes request complexity, required capabilities, and cost constraints to make routing decisions in real time.
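To illustrate the idea, here is a minimal TypeScript sketch of cost-aware model selection: filter a model pool down to candidates that meet the request's capability requirements, then pick the cheapest. This is a simplified two-dimension stand-in for Manifest's 23-dimension scoring algorithm; all names, numbers, and the `selectModel` function are illustrative assumptions, not Manifest's actual API.

```typescript
// Hypothetical sketch: pick the cheapest model that can handle the request.
// Only two capability dimensions are shown; Manifest scores across 23.

interface ModelProfile {
  name: string;
  costPerMTok: number;   // USD per million tokens (illustrative figures)
  reasoning: number;     // capability scores in [0, 1]
  codeGen: number;
}

interface RequestNeeds {
  reasoning: number;     // minimum capability the request demands
  codeGen: number;
}

// Filter to capable models, then sort ascending by cost and take the first.
function selectModel(
  pool: ModelProfile[],
  needs: RequestNeeds
): ModelProfile | undefined {
  return pool
    .filter(m => m.reasoning >= needs.reasoning && m.codeGen >= needs.codeGen)
    .sort((a, b) => a.costPerMTok - b.costPerMTok)[0];
}

const pool: ModelProfile[] = [
  { name: "big-frontier-model", costPerMTok: 15.0, reasoning: 0.95, codeGen: 0.95 },
  { name: "mid-tier-model",     costPerMTok: 3.0,  reasoning: 0.80, codeGen: 0.85 },
  { name: "small-cheap-model",  costPerMTok: 0.25, reasoning: 0.50, codeGen: 0.60 },
];

// A moderately demanding coding request: the mid-tier model clears the bar,
// so the pricier frontier model is skipped.
const chosen = selectModel(pool, { reasoning: 0.7, codeGen: 0.8 });
console.log(chosen?.name); // → "mid-tier-model"
```

The key property is that the capability filter runs before the cost sort, so the cheapest model is only chosen from among those that can actually satisfy the request.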
The routing engine goes beyond simple load balancing by evaluating models across dimensions including reasoning depth, code generation ability, multilingual support, context window requirements, and latency sensitivity. Automatic fallback chains ensure reliability when a primary model is unavailable or rate-limited, while budget controls let teams set spending limits per project, per user, or per time period. All routing decisions are fully transparent, with detailed logs showing why each model was selected and how much was saved compared to default routing.
Built in TypeScript with over 4,200 commits, Manifest offers flexible deployment options including a hosted cloud dashboard, a local OpenClaw plugin for IDE integration, and self-hosted Docker containers for teams that need to keep API keys and traffic within their infrastructure. The MIT-licensed project has attracted 4,300 GitHub stars and serves teams looking to reduce LLM costs without the engineering overhead of building and maintaining their own model selection infrastructure across multiple providers.