Endor Labs Drives Safe Open Source AI Model Adoption with Expansive Hugging Face Scoring
Endor Labs, the leader in open source software security, is debuting a new capability—Endor Scores for AI Models—that enables organizations to more easily identify the most secure open source AI models on Hugging Face. To help teams uncover the hidden risks associated with open source AI models, Endor Scores for AI Models evaluates and scores models across four attributes—security, popularity, quality, and activity.
While AI presents a wealth of opportunity for enterprises in every vertical, open source AI models—though beneficial in democratizing AI's usage—may harbor exploitable attack vectors or risky interdependencies with other models, according to Endor Labs. Whether the risk stems from malicious code, indirect "transitive" dependencies, or licensing complexity, open source AI models require robust governance strategies to operate safely.
Endor Scores for AI Models aims to empower developers with the most secure, appropriate AI models for their unique needs. Developers can ask an array of questions—such as, “What models can I use to classify sentiments? What are the most popular models from Meta? What is a popular model for voice in Hugging Face?”—to begin finding a model that’s right for them.
“It’s always been our mission to secure everything your code depends on, and AI models are the next great frontier in that critical task,” said Varun Badhwar, co-founder and CEO of Endor Labs. “Every organization is experimenting with AI models, whether to power particular applications or build entire AI-based businesses. Security has to keep pace, and there’s a rare opportunity here to start clean, and avoid risks and high maintenance costs down the road.”
With 50 out-of-the-box metrics, Endor Scores for AI Models evaluates models and surfaces a range of information, including the following attributes:
- Privacy of model weights
- The presence of dataset information and performance data
- Whether information about a model's training steps, provenance, lineage, and prompt format is incomplete
- Whether a model has linked repositories, which may be malicious
- Whether a model file contains binary content, which can hide malware
- Number of likes and downloads, as well as level of engagement
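Endor Labs has not published its scoring formula, but the popularity signals above (likes, downloads, engagement) could in principle be combined into a single composite score along the following lines. This is a minimal illustrative sketch only: the weights, log-scaling, and saturation caps are assumptions for demonstration, not Endor Labs' actual methodology.

```python
import math

# Hypothetical popularity score combining Hugging Face engagement signals.
# The weights and log-scaling below are illustrative assumptions only;
# Endor Labs' actual scoring methodology is not public.
def popularity_score(likes: int, downloads: int, contributors: int) -> float:
    """Return a 0-10 popularity score from raw engagement counts."""

    # Log-scale each signal so a model with millions of downloads
    # does not completely drown out one with tens of thousands.
    def scaled(count: int, cap: float) -> float:
        return min(math.log10(1 + count) / cap, 1.0)

    score = (
        0.4 * scaled(likes, cap=4)           # ~10k likes saturates
        + 0.4 * scaled(downloads, cap=7)     # ~10M downloads saturates
        + 0.2 * scaled(contributors, cap=2)  # ~100 contributors saturates
    )
    return round(10 * score, 2)

# Example: a moderately popular model.
print(popularity_score(likes=2500, downloads=1_200_000, contributors=12))
```

Log-scaling and capping each signal keeps any single metric from dominating the composite, which is one plausible way a tool could turn raw engagement counts into a bounded, comparable score.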
Endor Scores for AI Models is now available for existing Endor customers. A 30-day trial can be accessed here.
To learn more about Endor Scores for AI Models, visit https://www.endorlabs.com/.