ModelScan addresses a critical and often overlooked attack vector in AI/ML deployments: malicious code hidden inside model files. Popular serialization formats like Python Pickle can execute arbitrary code during deserialization, meaning a tampered model downloaded from a public hub or shared repository could compromise an entire system. ModelScan statically analyzes model files to detect unsafe operations without actually loading or executing them.
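The pickle risk described above can be demonstrated in a few lines. This is a minimal sketch: the class name is illustrative, and the benign `os.getcwd` stands in for an attacker's payload such as `os.system`.

```python
import os
import pickle

# Why pickle is unsafe: __reduce__ lets an object name an arbitrary
# callable that pickle.loads() will invoke during deserialization.
# The payload here is harmless (os.getcwd), but an attacker could
# substitute any callable, e.g. os.system with a shell command.
class MaliciousModel:
    def __reduce__(self):
        return (os.getcwd, ())  # (callable, args) executed at load time

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # os.getcwd() runs here, before any "model" exists
```

The callable fires the moment the bytes are loaded, which is why a static scanner inspects the serialized byte stream for these dangerous constructs instead of ever calling `pickle.loads()`.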
The tool supports multiple model formats including Pickle, HDF5, SavedModel, SafeTensors, and ONNX, covering the most widely used serialization methods in the machine learning ecosystem. As a CLI tool installable via PyPI, it integrates naturally into CI/CD pipelines as a pre-deployment security gate. Teams can scan models before pushing to registries, before loading into inference servers, or as part of automated MLOps workflows.
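As a sketch of the CI/CD gate idea, a pipeline step might shell out to the CLI and block deployment when the scan reports findings. The `modelscan -p <path>` invocation follows the project's documented usage, but flag names and exit-code behavior should be verified against the installed version.

```python
import shutil
import subprocess

# Hypothetical pre-deployment gate around the ModelScan CLI.
# Assumes `pip install modelscan` has already run in the pipeline image;
# -p points the scanner at a model file or directory.
def passes_scan(model_path: str) -> bool:
    """Return True if ModelScan exits cleanly (no unsafe operations found)."""
    if shutil.which("modelscan") is None:
        raise RuntimeError("modelscan not installed: pip install modelscan")
    proc = subprocess.run(["modelscan", "-p", model_path])
    # A non-zero exit code is treated as a finding and fails the gate.
    return proc.returncode == 0

# In CI: if not passes_scan("artifacts/model.pkl"), fail the job.
```

Wiring the gate this way keeps the security check in one place, so the same function can run before a registry push, before loading into an inference server, or inside an automated MLOps workflow.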
Maintained by Protect AI, with approximately 670 GitHub stars and active releases through March 2026, ModelScan fills a gap that traditional application security scanners miss entirely. As organizations rapidly deploy AI capabilities, the model supply chain becomes an increasingly attractive target. The tool is free and open source; the latest release adds CVE correlation improvements and expanded format support.