-->

NEW EVENT: KM & AI Summit 2025, March 17 - 19 in beautiful Scottsdale, Arizona. Register Now! 

Zenity Launches Security Framework Efforts for GenAI Systems, Copilots, and Agents 

Zenity, a leader in securing enterprise copilots and low-code development, is debuting a new security framework, the GenAI Attacks Matrix, focusing on attacks that target the users of various GenAI systems to examine how AI systems interact with and on behalf of their users, and vice versa. 

The open-source project is inspired by MITRE ATLAS and spearheaded by Zenity with help from many of the world’s leading security researchers, according to the vendor.

This project’s scope includes any system that uses GenAI, allows for GenAI to make decisions, and interfaces with or is operated by users (or on their behalf, in the case of agentic AI) and is built toward helping security practitioners understand and contextualize their risk.

This explicitly includes licensable AI systems such as ChatGPT Enterprise, GitHub Copilot or Microsoft 365 Copilot, extensions and agents anyone can build with low-code/no-code tools, and custom AI applications built for specific use cases. 

“What we’re hoping to do here is bring the leading AI security researchers together in order to take a focused approach to GenAI systems. Our aim is to collectively document discovered attack techniques in order to clarify the threats to help enterprises devise corresponding mitigation and risk management strategies. AI changes every day, and it is critical that we share information about potential attacks as soon as they are discovered, before they are observed in the wild. I am proud to announce this project and look forward to collaborating with the security community,” said Zenity co-founder and CTO Michael Bargury.

By letting GenAI act on behalf of business users, enterprises have unwillingly opened up new attack pathways for adversaries to target powerful systems that inherently contain access to loads of corporate and sensitive data and are curious by nature. Attackers are exploiting these systems with promptware, which is content with hidden malicious instructions that gets picked up and acted on by AI apps, according to the company.

 This project aspires to lay the foundation for security teams that need to adopt a defense-in-depth approach focused on malicious behavior rather than malicious static content. The primary goal of this project is to document and share knowledge of those behaviors and to look beyond prompt injection at the entire lifecycle of a promptware attack.

For more information about joining and contributing to this project, check out the GitHub repository or www.zenity.io.

EAIWorld Cover
Free
for qualified subscribers
Subscribe Now Current Issue Past Issues