Security analysts may balk at Microsoft’s latest ‘copilot.’ Here’s why.


By Mark Sullivan


Microsoft first brought generative AI to bear in search, then in its productivity apps, and now it’s bringing the new technology to its security practice with Security Copilot.


The new offering follows Microsoft’s general strategy of bringing an AI natural language assistant to its main user interfaces. But security may be a dangerous place to deploy AI technology that “hallucinates.”

Security Copilot is powered by OpenAI’s GPT-4 large language model and Microsoft’s own security-focused model, which contains the company's proprietary knowledge about security threats. Microsoft says its security model ingests 65 trillion signals from the threat environment daily. The Security Copilot service runs within Microsoft’s Azure cloud.

A security pro might encounter a suspicious-looking signal within the company’s systems, then call on the assistant for help in analyzing it and communicating a potential threat. They can quickly call up support materials, URLs, or code snippets about past exploits and ongoing vulnerabilities and feed them to the assistant, or request information about incidents and alerts from other security tools. Any new information or analysis generated is stored for future investigations.


Microsoft says the security assistant can learn as it encounters more threat information, developing new skills. This, the company says, might help a security analyst detect and respond to threats faster.

Microsoft says high up in its blog post that Security Copilot “doesn’t always get everything right” and that it can generate mistakes. As you might expect, dropping an unpredictable generative AI technology into the exacting environment of a security team could be problematic. Generative AI models are notorious for “hallucinating,” generating fiction in the guise of fact. When a security analyst is responding to a perceived threat such as a DDoS or ransomware attack, every second counts, and they might not have time to sift through an AI-generated threat summary to check whether it contains fictional information, says Gartner distinguished VP analyst Avivah Litan.

“I was just on the phone with a major security operator and they said they’re going to push back on using these products until they can be assured that the models are generating accurate information,” she says. 


Litan adds that security pros may now need a new class of tools to police the accuracy of the content generated by tools like Security Copilot.

Microsoft says it built into the Copilot’s user interface a way for users to give feedback on the assistant’s responses, so that the company can continue working to make the tool more coherent and useful. But security environments may make bad sandboxes, and security people may not have time to help Microsoft conduct R&D on its products. “Microsoft is just using the security domain to advance its plan to put generative AI into all its products,” Litan says.

Microsoft adds that the customer’s proprietary knowledge base of security threats and responses remains with the customer and is not used to train the Microsoft AI models. The company says Copilot is also able to integrate with other Microsoft security products, and that in the future it will connect with third-party security products, too.


As AI chatbots evolve, they will be given more access to the “ground truth” information contained in proprietary company databases and AI models. Large language models will likely be used to package this kind of data in an easily digestible natural language wrapper, while deferring to the proprietary knowledge bases for factual information. As long as they’re allowed to hallucinate within serious business applications, their reliability and usefulness may be limited.


Fast Company