LLM security requires both proactive vulnerability discovery and runtime protection. Lakera and garak address these two sides of the same coin — garak finds vulnerabilities before deployment, Lakera blocks attacks in production.
Lakera Guard is a real-time API that screens prompts and responses for security threats with under 2 milliseconds latency. It detects prompt injection attempts, jailbreak patterns, PII leakage, toxic content, and data extraction attacks. Trained on the world's largest prompt injection dataset from Gandalf, a public red-teaming game with millions of attack attempts. Deploys as an API proxy between users and LLM providers. Lakera is for teams running customer-facing AI applications that need always-on protection.
garak is NVIDIA's open-source LLM vulnerability scanner for offensive security testing. It runs automated attack sequences against any LLM endpoint, probing for prompt injection, data leakage, hallucination, toxicity, encoding-based bypasses, and dozens of other vulnerability categories. The modular probe/detector architecture makes it extensible with custom attack patterns. garak is for security teams doing pre-deployment red-teaming and compliance validation.
These tools are complementary rather than competitive. A mature LLM security posture uses garak during development and testing to discover vulnerabilities, then Lakera Guard in production to block real-time attacks. garak is free and open-source. Lakera offers a free tier (10K requests/month) with paid plans from $100/month.
Use garak to answer 'what vulnerabilities does our model have?' and Lakera to answer 'how do we protect our production application from attacks?' Together they cover the full LLM security lifecycle.