Microsoft Debuts a Series of New Security Capabilities Inspired by DeepSeek’s Growing Popularity
Microsoft is announcing several threat protection, posture management, data security, compliance, and governance capabilities designed to help enterprises secure their AI applications. These new features arrive as the DeepSeek R1 model, now available in the model catalog on Azure AI Foundry and GitHub, grows in popularity and enterprises look to secure their deployments of it.
Microsoft Security's latest capabilities help enterprises secure and govern their AI applications, including those built with the DeepSeek R1 model. With DeepSeek R1 available on Azure AI Foundry and GitHub, Microsoft has already taken steps to ensure the model's safety, putting it through rigorous red teaming and safety evaluations.
Now, when developers build AI workloads with DeepSeek R1 or other AI models, they benefit from Microsoft Defender for Cloud's AI security posture management. By mapping out AI workloads and synthesizing security insights, Defender for Cloud gives teams greater visibility into their AI cyberattack surfaces, vulnerabilities, and cyberattack paths, and offers recommendations for strengthening their security posture against cyberthreats.
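As a minimal sketch of how a team might turn this protection on programmatically, the snippet below enables a Defender for Cloud plan on a subscription through the Azure Resource Manager REST API. The plan name ("AI") and api-version are assumptions and may differ in your tenant; the subscription ID is a placeholder.

```python
# Minimal sketch: enabling a Microsoft Defender for Cloud plan for AI workloads on a
# subscription via the Azure Resource Manager pricings REST API.
# ASSUMPTIONS: the plan name "AI" and the api-version below; verify against your tenant.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
PLAN_NAME = "AI"                             # assumed name of the AI workloads plan
API_VERSION = "2023-01-01"                   # assumed api-version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/{PLAN_NAME}?api-version={API_VERSION}"
)

# Set the plan to the Standard (paid) tier, which enables AI security posture management.
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard"}},
)
resp.raise_for_status()
print(resp.json())
```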
Defender for Cloud also builds a comprehensive view of unusual and harmful activity in DeepSeek-based AI applications. By continuously monitoring for threat behaviors and attaching supporting evidence to security alerts, it helps security operations center (SOC) analysts understand and respond to suspicious user behavior.
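For SOC triage, those alerts can also be pulled programmatically. The sketch below lists Defender for Cloud security alerts for a subscription over the Azure Resource Manager REST API; the api-version is an assumption, and filtering down to AI-workload alerts is left to the caller.

```python
# Minimal sketch: listing Defender for Cloud security alerts for SOC triage.
# ASSUMPTION: the api-version below; the subscription ID is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
API_VERSION = "2022-01-01"                   # assumed api-version

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/alerts?api-version={API_VERSION}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Print a one-line summary per alert: display name, severity, and status.
for alert in resp.json().get("value", []):
    props = alert.get("properties", {})
    print(props.get("alertDisplayName"), props.get("severity"), props.get("status"))
```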
Another key component of this release is Microsoft Security's new set of controls for the DeepSeek consumer app, which is hosted on DeepSeek's own servers and whose data collection and cybersecurity practices may not align with an enterprise's requirements. Microsoft Security can discover third-party AI apps in use across an enterprise, now including the DeepSeek app, and pairs that discovery with new controls for protecting and governing their usage. Enterprises can assess the security, compliance, and legal risks of a wide range of generative AI (GenAI) apps and tag them as unsanctioned or block user access.
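The snippet below is a purely hypothetical sketch of what tagging a discovered GenAI app as unsanctioned could look like through a Defender for Cloud Apps-style REST call. The endpoint path, payload shape, and app identifier are illustrative placeholders rather than the documented API; in practice this is commonly done from the Cloud Discovery catalog in the Defender for Cloud Apps portal.

```python
# HYPOTHETICAL sketch only: marking a discovered GenAI app as unsanctioned.
# The endpoint, payload, and IDs below are illustrative placeholders, not the documented API.
import requests

TENANT_URL = "https://<your-tenant>.portal.cloudappsecurity.com"  # placeholder
API_TOKEN = "<api-token>"                                          # placeholder
APP_ID = "<discovered-app-id>"                                     # placeholder

resp = requests.post(
    f"{TENANT_URL}/api/v1/discovered_apps/{APP_ID}/tags/",  # hypothetical endpoint
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"tag": "Unsanctioned"},                            # hypothetical payload
)
resp.raise_for_status()
```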
Additionally, updates to Microsoft Purview Data Security Posture Management (DSPM) for AI deliver insights into these third-party apps based on data security and compliance risks. Organizations can now better understand what sensitive data users are entering into the DeepSeek app and create and fine-tune their data security policies accordingly. This update also allows organizations to prevent users from pasting or uploading sensitive data and content into GenAI apps, and to create adaptive, priority-based policies for dynamic restrictions.
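To make the paste- and upload-blocking behavior concrete, the following is a conceptual illustration, not Microsoft Purview's implementation, of the kind of check a DLP control performs before content reaches a GenAI app. Real Purview DSPM policies are configured in the Purview portal; the patterns and block decision here are toy placeholders.

```python
# Conceptual illustration only: a simple client-side check of the kind a DLP control
# performs before content is pasted or uploaded into a GenAI app.
# The patterns below are toy placeholders, not Purview's sensitive information types.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # credit card-like numbers
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like strings
}

def should_block_paste(text: str) -> bool:
    """Return True if the text matches any sensitive-information pattern."""
    return any(pattern.search(text) for pattern in SENSITIVE_PATTERNS.values())

# Example: this paste would be blocked before it reaches an unsanctioned GenAI app.
print(should_block_paste("Customer SSN is 123-45-6789"))  # True
```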
To learn more about Microsoft Security’s latest capabilities, please visit https://www.microsoft.com/en-us/security.