Microsoft on Tuesday launched Security Copilot in limited preview, marking its continued quest to embed AI-oriented features in a bid to offer "end-to-end defense at machine speed and scale."
Powered by OpenAI's GPT-4 generative AI and its own security-specific model, Security Copilot is billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals, and assess risk exposure.
To that end, it collates insights and data from products such as Microsoft Sentinel, Defender, and Intune to help security teams better understand their environments; determine whether they are susceptible to known vulnerabilities and exploits; identify ongoing attacks and their scale; receive remediation instructions; and summarize incidents.
Users, for example, could ask Security Copilot about suspicious user logins over a period of time, or even use it to create a PowerPoint presentation outlining the incident and its chain of attack. It can also accept files, URLs, and code snippets for analysis.
Redmond said its security-specific model is informed by more than 65 trillion daily signals, stressing that the tool is privacy-compliant and that customer data "is not used to train the foundation AI models."
"Today the odds remain stacked against cybersecurity professionals," said Vasu Jakkal, Microsoft's corporate vice president of security, compliance, identity, and management.
"Too often, they fight an asymmetric battle against prolific, relentless, and sophisticated attackers. To protect their organizations, defenders must respond to threats that are often hidden among noise."
Security Copilot is the latest AI push from Microsoft, which has steadily integrated generative AI features across its software offerings in recent months, including Bing, the Edge browser, GitHub, LinkedIn, and Skype.