Protect AI focuses on securing the machine learning supply chain: the pipeline of data, code, models, and dependencies that flows from training environments to production inference. The company's premise is that ML models are executable artifacts that can carry hidden payloads, and that the common practice of downloading pre-trained models from public repositories like Hugging Face introduces supply chain risks much like those in traditional software. Its open-source ModelScan tool detects unsafe code patterns, such as pickle payloads that execute arbitrary code when a model is loaded, in model files across formats including pickle, H5, SavedModel, and ONNX.
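To see why pickle-format models are dangerous, note that unpickling can invoke arbitrary callables, while a scanner can flag risky imports by walking the opcode stream without ever deserializing the file. The sketch below illustrates that idea only; it is not ModelScan's implementation, and the `SUSPICIOUS_MODULES` denylist is invented for the example.

```python
import os
import pickle
import pickletools

# Illustrative denylist, not ModelScan's actual rule set.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

def find_suspicious_imports(data: bytes) -> list[str]:
    """Walk the pickle opcode stream and collect imports of risky modules."""
    hits = []
    strings = []  # string constants pushed so far, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            strings.append(arg)
        ref = None
        if opcode.name == "GLOBAL":  # protocols <= 3: arg is "module name"
            ref = arg.replace(" ", ".")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            ref = f"{strings[-2]}.{strings[-1]}"  # protocols >= 4
        if ref and ref.split(".")[0] in SUSPICIOUS_MODULES:
            hits.append(ref)
    return hits

class Payload:
    """A malicious object: unpickling it would run a shell command."""
    def __reduce__(self):
        return (os.system, ("echo pwned",))

malicious = pickle.dumps(Payload())
benign = pickle.dumps({"weights": [0.1, 0.2, 0.3]})

print(find_suspicious_imports(malicious))  # flags the system call, e.g. ['posix.system']
print(find_suspicious_imports(benign))     # prints []
```

Because the scan uses `pickletools.genops`, the payload is inspected but never executed, which is what makes static model scanning safe to run on untrusted artifacts.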
Beyond scanning, Protect AI's Guardian product provides policy-based governance for model repositories, enforcing security rules before models can be promoted to production. NB Defense scans Jupyter notebooks for security issues including credential leaks, PII exposure, and unsafe package installations. The platform integrates with ML pipelines built on MLflow, Kubeflow, and SageMaker, providing security gates at each stage of the model lifecycle from experimentation through deployment.
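The notebook checks described above can be pictured as pattern rules applied to code cells in the `.ipynb` JSON. The following is a minimal sketch of one such check, a credential-leak scan; the `SECRET_PATTERNS` rules are illustrative assumptions, and NB Defense's real rule sets are far broader.

```python
import json
import re

# Two illustrative secret signatures; real scanners ship many more rules
# covering credentials, tokens, and PII.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_notebook(source: str) -> list[dict]:
    """Scan notebook JSON for secret-like strings in code cells."""
    nb = json.loads(source)
    findings = []
    for idx, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        text = "".join(cell.get("source", []))
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append({"cell": idx, "rule": rule})
    return findings

# A tiny notebook with a hard-coded (fake) AWS access key in cell 1.
notebook = json.dumps({
    "cells": [
        {"cell_type": "markdown", "source": ["# Training notes\n"]},
        {"cell_type": "code",
         "source": ['key = "AKIAABCDEFGHIJKLMNOP"\n']},
    ]
})
print(scan_notebook(notebook))  # [{'cell': 1, 'rule': 'aws_access_key'}]
```

Wired into a pipeline stage, a non-empty findings list is what turns a scan like this into a security gate: the promotion step fails until the notebook is cleaned.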
Protect AI advocates for applying software supply chain security principles like SLSA (Supply-chain Levels for Software Artifacts) to machine learning workflows. The company contributes to open-source ML security tools and publishes research on emerging AI threat vectors. For organizations building ML pipelines that ingest external models, datasets, or packages, Protect AI provides the security infrastructure needed to verify integrity and provenance before untrusted artifacts enter production environments.
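One concrete form of the integrity check described above is digest pinning: record a cryptographic hash when an artifact is vetted, and refuse to load anything that no longer matches. The sketch below is a generic illustration of that pattern, not Protect AI's implementation; the in-memory "registry" stands in for a real lockfile or model registry.

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of an artifact's bytes, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Accept the artifact only if its digest matches the pinned value."""
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(sha256_hex(data), pinned_digest)

# The digest is pinned when the model is vetted and promoted.
vetted_model = b"model-weights-v1"
pinned = sha256_hex(vetted_model)

# A tampered artifact no longer matches the pinned digest.
tampered_model = b"model-weights-v1-with-backdoor"

print(verify_artifact(vetted_model, pinned))    # True
print(verify_artifact(tampered_model, pinned))  # False
```

Hash pinning verifies integrity but not provenance; frameworks like SLSA layer signed build attestations on top, so a consumer can also check who produced an artifact and how.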