New API builds on the One API project to create a comprehensive multi-tenant gateway for routing LLM requests across providers. Organizations running multiple AI models from different vendors must otherwise manage separate API keys, rate limits, billing, and request formats for each provider. New API centralizes all of this behind a single endpoint that automatically translates requests into the target provider's format, whether that is OpenAI, Anthropic Claude, Google Gemini, or dozens of other services.
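A short sketch of what "single endpoint" means from the client's side: the request uses the OpenAI-compatible chat format regardless of the upstream provider, and only the model name changes. The base URL, token value, and model names here are hypothetical placeholders, not values from the project.

```python
# Sketch of a client request to a New API gateway. The gateway URL, token,
# and model names are hypothetical; the gateway translates this
# OpenAI-compatible shape into whatever the upstream provider expects.

def build_chat_request(base_url: str, token: str, model: str, prompt: str) -> dict:
    """Assemble the HTTP request a client would send to the gateway."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {token}",  # New API token, not a provider key
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,  # routing is driven by the model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The same call shape works for any upstream; only the model string differs.
openai_req = build_chat_request("https://gateway.example.com", "sk-example", "gpt-4o", "Hello")
claude_req = build_chat_request("https://gateway.example.com", "sk-example", "claude-3-5-sonnet", "Hello")
assert openai_req["url"] == claude_req["url"]  # one endpoint for both providers
```

Because the client never holds provider keys, rotating or swapping upstream vendors is invisible to callers.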
The gateway includes a channel management system where administrators define upstream API connections with priority, weight, and failover rules. Token-based authentication controls user access and spending limits, while the built-in billing engine tracks per-request costs across all providers in a unified dashboard. Request routing can be configured for load balancing, automatic failover when a provider returns errors, or model-specific channel pinning to ensure consistent behavior for production workloads.
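The routing rules above (priority tiers, weighted distribution, failover on error) can be sketched as a small selection function. This is an illustrative model of the behavior, not New API's internal data structures; the field and function names are invented for the example.

```python
import random

# Hypothetical sketch of priority/weight/failover routing: pick among the
# highest-priority channels that are still healthy, weighted-random within
# that tier; when a whole tier fails, traffic drops to the next one.

def pick_channel(channels: list[dict], healthy: set[str]) -> dict:
    """Choose a channel: highest priority tier first, weighted-random within it."""
    candidates = [c for c in channels if c["name"] in healthy]
    if not candidates:
        raise RuntimeError("all channels failed")
    top = max(c["priority"] for c in candidates)
    tier = [c for c in candidates if c["priority"] == top]
    weights = [c["weight"] for c in tier]
    return random.choices(tier, weights=weights, k=1)[0]

channels = [
    {"name": "openai-main",    "priority": 10, "weight": 3},
    {"name": "openai-backup",  "priority": 10, "weight": 1},
    {"name": "azure-fallback", "priority": 5,  "weight": 1},
]

# Normal operation: traffic splits ~3:1 across the two priority-10 channels.
assert pick_channel(channels, {"openai-main", "openai-backup", "azure-fallback"})["priority"] == 10
# Failover: when both priority-10 channels are down, the lower tier takes over.
assert pick_channel(channels, {"azure-fallback"})["name"] == "azure-fallback"
```

Model-specific channel pinning is the degenerate case of this scheme: a rule that restricts the candidate list to a single channel for a given model.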
Deployment is straightforward with Docker Compose, using either SQLite for single-node setups or MySQL for multi-instance clusters. The web-based admin panel provides real-time visibility into request volumes, error rates, token consumption, and per-user spending. For teams and platforms that need to offer LLM access to internal users or external customers while maintaining cost control and provider flexibility, New API provides the infrastructure layer that eliminates per-provider integration work.
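A single-node deployment along the lines described might look like the following compose file. This is a minimal sketch: the image tag, port, and volume path are assumptions that should be checked against the project's own compose example before use.

```yaml
# Minimal single-node sketch using SQLite; verify image name, port, and
# environment variables against New API's documentation.
services:
  new-api:
    image: calciumion/new-api:latest   # assumed image name
    ports:
      - "3000:3000"                    # admin panel and API endpoint
    volumes:
      - ./data:/data                   # SQLite database persists here
    # For a multi-instance cluster, point the service at a shared MySQL
    # instance via the database-DSN environment variable instead of SQLite.
```

With SQLite on a bind-mounted volume, upgrading is a matter of pulling a new image and restarting; the MySQL path is what allows several gateway instances to share users, tokens, and billing state.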