2 tools tagged
Showing 2 of 2 tools
LLM vulnerability scanner and red teaming kit
Agentic Security is an open-source vulnerability scanner for LLM agent workflows that tests AI systems against jailbreaks, fuzzing, and multimodal attacks. It probes weaknesses across text, image, and audio inputs through multi-step jailbreak simulations, randomized stress testing, and reinforcement learning-powered adaptive attacks. The toolkit connects directly to LLM APIs for high-volume real-world attack scenarios, helping developers identify and patch safety gaps before deployment.
AI agent safety SDK with guard, redact, and scan modules
Superagent is an open-source AI agent safety SDK that provides runtime protection through four modules: Guard for detecting prompt injections and unsafe tool calls, Redact for removing PII and secrets, Scan for analyzing repos against AI-targeted attacks, and Test for red-team evaluations. It works with any LLM provider and includes open-weight guard models from 0.6B to 4B parameters with 50-100ms latency for real-time protection.