Microsoft has announced Security Copilot, a GPT-4-based assistant that brings generative AI capabilities to its in-house security suite, along with a host of new visualisation and analysis functions.
Security Copilot’s basic interface resembles the chatbot functionality familiar to generative AI users, and it can be used the same way, answering security questions in natural language. Its more impressive features, however, stem from tight integration with Microsoft’s existing security products, including Defender, Sentinel, Entra, Purview, Priva, and Intune.
Copilot can interpret data from all of those security products and provide automated, in-depth explanations (including visualisations), as well as suggested remedies.
Furthermore, the system will be able to take action against some kinds of threats – deleting email messages that contain malicious content identified in a previous analysis, for example.
Microsoft said it plans to expand Security Copilot’s connectivity options beyond the company’s own products, but offered no further details in the livestream and official blog post detailing the product.
Microsoft noted that, as a generative AI product, Security Copilot isn’t going to give correct answers 100% of the time, and that it will need additional training and input from early users to reach its full potential.
Automation is one benefit of Security Copilot, but challenges remain
According to AI experts, it’s a powerful system, though not quite as novel as Microsoft presented it. Avivah Litan, distinguished vice president and analyst at Gartner, said that IBM has had similar capabilities via its Watson AI for years.
“The AI here is faster and better, but the functionality is the same,” she said. “It’s a nice offering, but it doesn’t solve the problems that users have with generative AI.”
Regardless of those problems – the largest of which is Security Copilot’s admitted inability to provide accurate information in all cases – the potential upsides of the system are still impressive, according to IDC research vice president Chris Kissel.
“The big payoff here is that so much more stuff could be automated,” he said. “The idea that you have a ChatGPT writing something dynamically and the analytics to judge it in context, in the same layer, is compelling.”
Both analysts, however, were slightly skeptical about Microsoft’s professed policy on data sharing – essentially, that private data will not be used to train the foundational AI models and that all user information will stay under the user’s control.
The issue, they said, was that incident data is critical for training AI models like the one used for Security Copilot, and that the company hadn’t offered a lot of insight into how, precisely, such data would be handled.
“It is a concern,” said Kissel. “If you’re trying to do something involving, say, a specific piece of intellectual property, can there be safeguards that keep the data in place?”
“How do we know the data’s really protected if they don’t give the tools to look at it?” said Litan.
Microsoft did not announce an availability date for Security Copilot, saying only that “we look forward to sharing more soon.”