Endor Labs Empowers Organizations to Discover and Govern Open Source Artificial Intelligence Models Used in Applications
Endor Labs AI Model Discovery enables application security professionals to discover pre-trained AI models being used in their applications, then evaluate risks and enforce policies for use.
Endor Labs has announced a new capability in the company's platform that enables organizations to discover the AI models already in use across their applications, and to set and enforce security policies governing which models are permitted. Endor Labs AI Model Discovery addresses three critical use cases: it enables application security professionals to discover local open source AI models used in their application code, evaluate risks from those models, and enforce organization-wide policies on AI model curation and usage. It goes a step further with automated detection, warning developers about policy violations and blocking high-risk models from entering production. This latest effort to help organizations govern AI code further establishes Endor Labs as a leader in addressing emerging AI risks for application security programs.
The new capabilities complement Endor Scores for AI Models, a recent release that uses 50 out-of-the-box metrics to score every AI model available on Hugging Face (the popular platform for sharing open source AI models and datasets) across four dimensions: security, popularity, quality, and activity.
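As a rough illustration of how per-metric checks could roll up into dimension scores, here is a minimal sketch. The metric names, 0–10 scale, and equal-weight averaging are hypothetical assumptions for the example, not Endor Labs' actual methodology:

```python
# Hypothetical rollup: each dimension's metrics (scored 0-10) are averaged
# into a single dimension score. Metric names here are invented examples.
metrics = {
    "security": {"safetensors_weights": 10, "no_malicious_code_found": 10, "signed_releases": 0},
    "quality": {"has_model_card": 10, "license_documented": 10},
    "activity": {"commits_last_90_days": 6},
    "popularity": {"downloads_percentile": 8, "likes_percentile": 7},
}

def dimension_scores(metrics: dict) -> dict:
    """Average each dimension's metric results into a 0-10 score."""
    return {dim: round(sum(vals.values()) / len(vals), 1)
            for dim, vals in metrics.items()}

print(dimension_scores(metrics))
# {'security': 6.7, 'quality': 10.0, 'activity': 6.0, 'popularity': 7.5}
```

Averaging equally weighted checks keeps the example simple; a real scoring system would likely weight high-severity security checks more heavily than popularity signals.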
Training new AI models is costly and time-consuming, so most developers use open source AI models from Hugging Face and adapt them for their specific purposes. These AI models function as critical application dependencies, yet standard vulnerability scanners can't accurately analyze them, which presents risk. More than 1 million open source AI models and datasets are available today through Hugging Face. Endor Labs spots these AI models, runs them through 50 risk checks, and allows security teams to set critical guardrails, all within existing developer workflows. This gives security teams the same level of visibility and control over AI models that they already expect for other open source dependencies.
Most users enjoying the benefits of the latest AI advances in the applications they use every day will be unaware of the dangers that may exist in the software development lifecycle. With these advances from Endor Labs, developers can safely adopt the latest open source AI models when developing the next generation of applications.
Endor Labs AI Model Discovery provides the following capabilities:
1. Discover – scan for local AI models already used within your Python applications, build a complete inventory of these AI models, and track which teams and applications use them. Today, Endor Labs can identify all AI models from Hugging Face.
2. Evaluate – analyze AI models based on known risk factors using Endor Scores for security, quality, activity, and popularity, and identify models with questionable sources, practices, or licenses.
3. Enforce – set guardrails for the use of local, open source AI models across the organization based on your risk tolerance. Warn developers about policy violations, and block high-risk models from being used within your applications.
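To make the Discover and Enforce steps concrete, here is a minimal sketch of how a scanner might detect Hugging Face models referenced in Python source and flag policy violations. The AST-based `from_pretrained` heuristic and the blocklist policy are illustrative assumptions, not Endor Labs' actual detection logic:

```python
import ast

def find_hf_models(source: str) -> list[str]:
    """Collect model IDs passed as string literals to *.from_pretrained(...) calls."""
    models = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "from_pretrained"
            and node.args
            and isinstance(node.args[0], ast.Constant)
            and isinstance(node.args[0].value, str)
        ):
            models.append(node.args[0].value)
    return models

# Hypothetical policy: model IDs an organization has decided to block.
BLOCKLIST = {"someorg/unvetted-model"}

def policy_violations(models: list[str]) -> list[str]:
    """Return the discovered models that violate the blocklist policy."""
    return [m for m in models if m in BLOCKLIST]

code = (
    "from transformers import AutoModel\n"
    'model = AutoModel.from_pretrained("bert-base-uncased")\n'
)
print(find_hf_models(code))  # ['bert-base-uncased']
```

A real implementation would also resolve dynamic model names, scan configuration files and lockfiles, and check each discovered model against risk scores rather than a static blocklist.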
Endor Labs AI Model Discovery is available now for existing customers.