AI-powered security tools for developers: SAST, DAST, secret detection, vulnerability scanning, model security, and automated remediation integrated into CI/CD pipelines.
Google's application kernel for container sandboxing and security
gVisor is Google's open-source container runtime sandbox that provides an additional layer of isolation between containerized applications and the host kernel. It implements a user-space application kernel that intercepts system calls, preventing container escapes and limiting the attack surface. Used in Google Cloud Run, GKE Sandbox, and other Google Cloud services. Over 18,000 GitHub stars.
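In practice, Docker can be pointed at gVisor by registering the `runsc` runtime in `/etc/docker/daemon.json` (the path below assumes a default install location):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After restarting the Docker daemon, individual containers opt into the sandbox with `docker run --runtime=runsc ...`, and syscalls from that container are handled by gVisor's user-space kernel instead of the host kernel.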
Enterprise software composition analysis for supply chain security
Sonatype Lifecycle is an enterprise software composition analysis platform that identifies vulnerabilities, license risks, and quality issues in open-source dependencies throughout the development lifecycle. It integrates with IDEs, CI/CD pipelines, and artifact repositories to block risky components before they enter the codebase. Backed by the largest vulnerability database with proprietary research beyond public CVE data.
Linux Foundation fork of HashiCorp Vault for secrets management
OpenBao is the Linux Foundation's community-driven fork of HashiCorp Vault created after Vault's license change from open-source to BSL. It provides secrets management, encryption as a service, dynamic credentials, and PKI certificate management. Maintains API compatibility with Vault while developing under truly open-source governance with over 5,700 GitHub stars.
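Because OpenBao keeps Vault's configuration format, a server config is plain HCL; a minimal single-node sketch (paths and addresses are illustrative, not defaults):

```hcl
# Minimal OpenBao server config (HCL): local file storage, plaintext TCP listener.
storage "file" {
  path = "/opt/openbao/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true   # dev only; enable TLS in production
}

ui = true
```

The server is then started with the `bao` CLI (OpenBao's rename of `vault`), e.g. `bao server -config=config.hcl`.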
Shift-left DAST platform built for CI/CD pipeline integration
StackHawk is a dynamic application security testing platform designed for CI/CD pipeline integration. It tests running web applications and APIs for OWASP Top 10 vulnerabilities including SQL injection, XSS, and authentication flaws during the development process. Built on ZAP with a developer-friendly CLI and YAML configuration, it provides actionable findings with requests that reproduce each issue and fix guidance.
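The YAML configuration lives in a `stackhawk.yml` at the repo root; a minimal sketch (the `applicationId` is a placeholder issued by the StackHawk platform, and `host` points at the running app under test):

```yaml
# Minimal stackhawk.yml sketch for scanning a locally running app.
app:
  applicationId: 00000000-0000-0000-0000-000000000000  # placeholder from the platform
  env: Development
  host: http://localhost:3000
```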
AI-powered DAST platform specializing in API and GraphQL security
Escape is an AI-powered dynamic application security testing platform focused on API security including REST, GraphQL, and gRPC endpoints. It automatically discovers and tests API endpoints for vulnerabilities without requiring source code access. Features business logic testing that goes beyond OWASP patterns, CI/CD integration for shift-left security, and detailed remediation guidance for developers.
Enterprise middleware for securing AI applications against prompt attacks
Prompt Security provides enterprise security middleware that protects AI applications from prompt injection, data leakage, jailbreaks, and toxic content generation. It sits between users and LLM APIs to inspect, filter, and sanitize inputs and outputs in real-time. Supports deployment as a proxy, SDK integration, or browser extension with customizable security policies and compliance reporting.
CyberArk's open-source LLM fuzzing framework for AI security testing
FuzzyAI is CyberArk's open-source framework for fuzzing large language models to discover vulnerabilities like jailbreaks, prompt injection, guardrail bypasses, and harmful content generation. It systematically tests LLM deployments with over 20 attack techniques and generates detailed reports. Supports testing any model accessible via API including OpenAI, Anthropic, and self-hosted models.
Python toolkit for assessing and mitigating ML model fairness issues
Fairlearn is a Microsoft-backed open-source Python toolkit that helps developers assess and improve the fairness of machine learning models. It provides metrics for measuring disparity across groups defined by sensitive features, mitigation algorithms that reduce unfairness while maintaining model performance, and an interactive visualization dashboard for exploring fairness-accuracy trade-offs. Integrated with scikit-learn and Azure ML's Responsible AI dashboard.
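The core disparity idea is simple to state: compare a metric (such as the positive-prediction rate) across groups defined by a sensitive feature. A minimal plain-Python sketch of one such metric, demographic parity difference — in practice you would use `fairlearn.metrics` rather than this hand-rolled version:

```python
# Sketch of demographic parity difference: the largest gap in
# positive-prediction rate between any two sensitive-feature groups.
# Pure stdlib; Fairlearn's MetricFrame generalizes this to arbitrary metrics.
from collections import defaultdict

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in selection rate between any two groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(y_pred, sensitive):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Group "a" is selected 2/3 of the time, group "b" 1/3 of the time.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(round(demographic_parity_difference(preds, groups), 3))  # 0.333
```

A value of 0 means equal selection rates across groups; Fairlearn's mitigation algorithms search for models that shrink gaps like this while holding accuracy as high as possible.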
Rust-based agent OS with built-in security, WASM sandboxing, and multi-agent runtime
OpenFang is an open-source agent operating system built in Rust that provides a secure multi-agent runtime with WASM sandboxing, auditability layers, and multi-channel communication. It goes beyond typical orchestration SDKs by treating agent security and operational isolation as first-class concerns, making it suitable for teams deploying agents in environments where trust boundaries and audit trails matter.
Hunt down social media accounts by username across 400+ platforms
Sherlock is a Python CLI tool that searches for a given username across 400+ social networks and websites simultaneously. It is widely used in OSINT investigations, security audits, red teaming exercises, and digital footprint analysis. Sherlock is included in Kali Linux and Parrot Security distributions and has over 76,000 GitHub stars, making it one of the most popular open-source security tools.
OpenTelemetry-native observability for LLM applications with evals and GPU monitoring
OpenLIT is an open-source AI engineering platform that provides OpenTelemetry-native observability for LLM applications. It combines distributed tracing, evaluation, prompt management, a secrets vault, and GPU telemetry in a single self-hostable stack. With 50+ integrations across LLM providers and frameworks, it lets teams monitor AI applications using their existing observability backends like Grafana, Datadog, or Jaeger.
Open-source microVMs for secure serverless and AI agent sandboxing
Firecracker is an open-source virtual machine monitor that creates lightweight microVMs with sub-150ms cold starts, originally built by AWS for Lambda and Fargate. With 28,000+ GitHub stars, it provides kernel-level isolation for running untrusted code safely and powers the sandboxing infrastructure behind AI coding agents like Devin and E2B.
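A microVM is described declaratively; Firecracker accepts a JSON config file via `--config-file` (the kernel and rootfs paths below are illustrative):

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```

Running `firecracker --no-api --config-file vm.json` boots the microVM directly from this spec; the same settings can instead be applied at runtime through Firecracker's REST API over a Unix socket.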
Trusted runtime environments for AI agents in production infrastructure
Teleport Beams provides cryptographically verified, policy-gated access for AI agents to interact with production infrastructure including servers, Kubernetes clusters, and databases. Launched at KubeCon EU 2026, Beams extends Teleport's zero-trust access platform with agent-specific runtime controls, audit trails, and policy enforcement to ensure AI agents operate within defined boundaries when deployed in production environments.
Sandbox any command with file, network, and credential controls
Zerobox is a security-focused command sandboxing tool that isolates command execution with fine-grained controls over file system access, network connectivity, and credential exposure. It wraps any shell command in a secure container that enforces policy restrictions, preventing unauthorized file reads, network calls, or environment variable leaks during execution.
Static linter that catches production bugs in AI-generated code
prodlint is a zero-config static analysis tool with 52 rules targeting production bugs that AI coding tools consistently produce. It catches hallucinated npm imports, missing authentication checks, Prisma writes outside transactions, exposed secrets via NEXT_PUBLIC prefixes, and other patterns specific to code generated by Cursor, Claude Code, Bolt, and v0. Runs in one second via npx with no configuration needed.
Google's vulnerability scanner using the OSV database
OSV-Scanner is Google's official open-source vulnerability scanner that checks your project's dependencies against the OSV.dev database — the largest open vulnerability database covering all major ecosystems. Written in Go, it supports lockfiles from npm, pip, Maven, Cargo, Go modules, and more, providing actionable remediation guidance and CI/CD integration for automated security scanning.
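A typical invocation is `osv-scanner --recursive ./project`, and known-acceptable findings can be suppressed with an `osv-scanner.toml` next to the manifest; a sketch of the ignore format (the vulnerability ID below is a placeholder):

```toml
# osv-scanner.toml -- suppress a triaged finding with a documented reason.
[[IgnoredVulns]]
id = "GO-2022-0000"  # placeholder OSV ID
reason = "Vulnerable code path is not reachable from this service"
```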
Open-source SOAR platform with AI-powered playbooks
Tracecat is a YC S24-backed open-source SOAR (Security Orchestration, Automation and Response) platform that lets security teams build AI-powered playbooks for automated incident response. It provides visual workflow builders for creating response procedures, integrates with common security tools, and handles alert triage, enrichment, and remediation — positioned as an open-source alternative to Tines and Splunk SOAR.
Secure sandboxed runtime for AI agent execution
NVIDIA OpenShell provides kernel-level isolation for AI agent workloads with Landlock, seccomp, and network namespace sandboxing. Announced at GTC 2026 with 17 enterprise partners including Adobe, Atlassian, SAP, and Salesforce, it offers declarative YAML policy enforcement, L7 HTTP inspection, and GPU passthrough — purpose-built to contain the blast radius when autonomous coding agents interact with filesystems and networks.
Autonomous AI pentester for web apps and APIs
Shannon is an autonomous AI-powered penetration testing tool that achieves a 96.15% success rate on the XBOW benchmark — significantly above the industry average. Using a multi-agent pipeline built on Anthropic's Agent SDK and Playwright, it performs reconnaissance, vulnerability analysis, exploitation, and reporting on web applications and APIs, having discovered 7 zero-day vulnerabilities in production software.
Open-source LLM red-teaming framework with 40+ attack types
DeepTeam is an open-source red-teaming framework for systematically testing LLM applications against 40+ adversarial attack types. It covers OWASP Top 10 for LLMs including jailbreaks, prompt injection, PII leakage, and hallucination attacks. Built as the sister project of DeepEval for security testing alongside evaluation. Apache-2.0 licensed.
Security scanner for MCP servers against tool poisoning attacks
MCP-Scan is a security tool that scans MCP servers for vulnerabilities including tool poisoning, prompt injection, cross-origin escalation, and rug pull attacks. Acquired by Snyk in 2026, it is the first dedicated security scanner for the MCP ecosystem. It analyzes tool descriptions, permissions, and behavior patterns to detect malicious or compromised MCP servers before they can exploit AI agents.
Meta's open-source LLM security suite with Llama Guard and CodeShield
PurpleLlama is Meta's open-source suite of tools for evaluating and improving LLM safety. It includes Llama Guard models for input/output content safety classification, LlamaFirewall for multi-layer defense, CodeShield for insecure code detection, and CyberSecEval benchmarks for measuring LLM security. Llama Guard 4 supports multimodal safety across text and images. 4,100+ GitHub stars, backed by Meta AI with 44+ contributors.
Security scanner for AI agentic workflows and MCP servers
Agentic Radar is an open-source CLI security scanner that maps attack surfaces in agentic AI workflows. It detects MCP servers, visualizes agent tool chains, and validates against OWASP LLM Top 10 vulnerabilities including prompt injection and excessive agency. Supports scanning CrewAI, LangGraph, AutoGen, and Semantic Kernel pipelines. Built by SPLX AI with active development and MCP-specific detection capabilities added for the growing MCP ecosystem.
Open-source Kubernetes security platform for risk analysis and compliance
Kubescape is a CNCF-backed open-source Kubernetes security platform that scans clusters, manifests, and container images for vulnerabilities, misconfigurations, and compliance violations. It checks against NSA-CISA, MITRE ATT&CK, and CIS benchmarks, integrates into CI/CD pipelines, and provides runtime threat detection via eBPF. Supports SBOM generation and vulnerability scanning. Used by ARMO with growing enterprise adoption in cloud-native security.