Typically, security is late to the game with technology innovation: before we get to see innovative technology, we have to wait for it to matter to security. This time, however, is different.

In January, we predicted how ChatGPT could change cybersecurity, and today, those predictions were validated again with the announcement of Microsoft Security Copilot.

Microsoft Is The First To Deliver Legitimate AI For Security Operations

For all of the millions of dollars that security vendors spend talking about AI for cybersecurity, Microsoft is the first to make AI for security operations real with Security Copilot. The newly announced, and not yet generally available, Security Copilot is a natural continuation of Microsoft’s strategy to embed its investment in OpenAI into every product and service that it can. In this case, Microsoft pairs ChatGPT with a security-specific model. It uses generative AI to aid human analysts (but not to replace them) in investigation and response.

Security Copilot is a separate offering from the products that currently exist in the Microsoft Security portfolio. There is no word yet, however, on whether or not Security Copilot will help someone navigate the complexities of Microsoft enterprise licensing. In its early phases, the offering will:

    1. Give human-readable explanations of vulnerabilities, threats, and alerts from first- and third-party tools.
    2. Answer human-readable questions about the enterprise environment.
    3. Surface recommendations on next steps in incident analysis.
    4. Allow security pros to automate those steps. For example, it can be used to gather information on the environment, generate visualizations of an attack, execute a response, or, in certain cases, even reverse-engineer malware.
    5. Generate PowerPoint presentations based on an incident investigation.

Security Copilot is poised to become the connective tissue for all Microsoft security products. Importantly, it will integrate with third-party products as well. That is a hard-and-fast requirement for any assistant that aims to provide comprehensive and consistent value.

Nobody, However, Likes A Back-Seat Driver

As with all useful things, there are aspects to be wary of, and these are made more acute by the breadth of information and ease of use that Security Copilot promises.

  • Adversaries abuse useful things. Much like adversaries use PowerShell to live off the land or gain access to an Exchange email server to monitor the status of an investigation, Security Copilot could provide an enormous amount of information to adversaries when — not if — it’s compromised. As Security Copilot’s capabilities grow, it will be able to answer most questions about the environment — including ones from attackers. Expect to see after-action reports describing how Security Copilot helped adversaries enumerate vulnerable systems, write scripts to live off the land, create WMI event filters and consumers (see the sketch after this list), and persist in interesting ways. Of course, sophisticated adversaries will likely just go right to the source and attempt to poison the models that Security Copilot is trained on.
  • Trust is earned. One of the first things Microsoft demoed with Security Copilot was how it can be wrong, using an example in which it misstated that Windows 9 existed … given adoption rates, it’s not surprising that Microsoft chose not to ask it about Windows 11. Aside from being a funny Easter egg, it’s also indicative of a broader problem: Trusting AI is hard. Trusting AI is harder when the first launch of the product shows it saying something wrong about the company that developed it. While the broader message of being able to correct the AI is productive, ensuring accurate results becomes incredibly important when leveraging the technology to train new analysts. If security analysts believe that the answers they receive are wrong, they will stop asking.
  • Antitrust is also earned. If you mention a word like antitrust within 90 miles of Redmond, a Microsoft corporate attorney will appear behind you, akin to Agent Smith in “The Matrix,” and start a monologue about how this definitely isn’t consumer harm. Security vendors, as competitors, likely do not see it that way, and many enterprise security leaders have valid concerns about the concentration risk of going all in with Microsoft Security, concerns that this AI won’t ease. While we doubt anything like the antitrust trials of the late 1990s will take place — at least in the US — the more Microsoft bundles, the more antitrust regulators will keep the company top of mind.
  • Copilot isn’t there yet, and your security program is a constraint. As exciting as this advancement is, it remains in development. It is not generally available, and the timeline for it to become GA remains unclear. The more valuable data that Security Copilot can take in, the faster it will learn and the more useful it will become. But its utility is also constrained by the same limitations that exist for security teams today: Poor visibility and bad situational awareness will limit its impact. If your program struggles to deploy patches, lacks comprehensive logging, and barely uses MFA, then Security Copilot might help accelerate investigations and provide some recommendations that you probably already knew about and couldn’t get to.
  • Novelty isn’t enough without results. The hype about AI hasn’t even crested yet — we are still on the upswing. While we agree with Satya Nadella that this is an iPhone-level moment in technology, it’s worth remembering that the first iPhone didn’t have an App Store, couldn’t multitask, and was locked to one carrier. Security leaders can’t let our naturally ingrained — and constantly reinforced — technology skepticism cause us to sleep on technology like this. But if Copilot doesn’t make security pros faster or better, engagement rates will plummet once the novelty wears off.
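
For readers less familiar with the WMI persistence technique referenced in the first bullet above, here is a minimal sketch of how a defender can enumerate permanent WMI event subscriptions (the filters, consumers, and bindings that adversaries plant), assuming a Windows host, administrator rights, and the third-party Python wmi package. The script is purely illustrative and is not part of Security Copilot.

    # Minimal sketch: enumerate permanent WMI event subscriptions, a common
    # persistence mechanism. Assumes a Windows host, administrator rights, and
    # the third-party "wmi" package (pip install WMI). Illustrative only; this
    # is not Security Copilot's interface.
    import wmi

    conn = wmi.WMI(namespace="root/subscription")

    # Event filters define the trigger (e.g., a WQL query that fires on a timer).
    for f in conn.query("SELECT * FROM __EventFilter"):
        print("FILTER  ", f.Name, "->", f.Query)

    # Command-line consumers define the payload that runs when a filter fires.
    for c in conn.query("SELECT * FROM CommandLineEventConsumer"):
        print("CONSUMER", c.Name, "->", c.CommandLineTemplate)

    # Bindings tie a filter to a consumer; an unexpected binding is a red flag.
    for b in conn.query("SELECT * FROM __FilterToConsumerBinding"):
        print("BINDING ", b.Filter, "->", b.Consumer)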

AI Finally Does More Than Enhance Detection

AI and ML have helped improve one of the most important and challenging tasks we have: detection. They never made it much further than that, however.

Now, the security industry is filled with marketing messages making false promises such as the “autonomous SOC,” “AI helpers,” “AI analysts,” and more. Yet most of those tools make Clippy seem sophisticated. While other security vendors were marketing, Microsoft poured billions into OpenAI, locked the company in by offering it Azure compute credits, kept innovating itself, and will likely lean into its route to market via enterprise bundling. This is the first time that a product is poised to deliver true improvement to cybersecurity investigation and response with AI. With this announcement, we leave behind an era in which AI was relegated to detection and enter one in which it has the potential to improve one of the most important issues in security operations: analyst experience.

In 2021, Microsoft announced plans to spend $20 billion on cybersecurity over the following five years, a total that few competitors can match. Of course, things have changed since the free-capital days of 2021. But whether Microsoft reduces — or never reaches — that number, this announcement is a painful reminder to the rest of the security industry that Microsoft is continuing to eat its competitors’ lunch — not just in terms of the enormous success of its security business but now with its innovation potential.

Forrester clients who want to learn more about the future of the SOC, AI for SecOps, and Microsoft Security Copilot can schedule an inquiry or guidance session with me.

Thanks to Rowan Curran for reviewing this blog post.