In Top Cybersecurity Threats In 2023 (client access only), we called out that security leaders needed to defend AI models because real threats to AI deployments existed already. We hope you didn’t think you had much time to prepare, given the announcements with generative AI.

On one side, the rise of SaaS LLMs (ChatGPT, GPT-4, Bing with AI, Bard) makes this a third-party risk management problem for security teams. And that’s great news, because it’s rare that third parties lead to breaches … ahem. Hope you caught the sarcasm there.

Security pros should expect their company to buy — or your existing vendors to integrate with — generalized models from big players such as Microsoft, Anthropic, Google, and more.

Short blog, problem solved, right? Well … no. While the hype certainly makes it seem like this is where all the action is, there’s another major problem for security leaders and their teams.

Fine-tuned models are where your sensitive and confidential data is most at risk. Your internal teams will build and customize fine-tuned models using corporate data that security teams are responsible and accountable for protecting. Unfortunately, the time horizon for this is not so much “soon” as it is “yesterday.” Forrester expects fine tuned-models to proliferate across enterprises, devices, and individuals, which will need protection.

You can’t read a blog about generative AI models and large language models (LLMs) without a mention of the leaked Google document, so here’s an obligatory link to “We have no moat, and neither does OpenAI.” It’s a fascinating read that captures the current state of advancement in this field and lays out a clear vision of where things are going. It’s also a phenomenal blueprint for cybersecurity practitioners who want to understand generative AI models and LLMs.

Most security teams will not welcome the news that they need to protect more of something (IoT says hello!), but there is a silver lining here. Many of these problems are conventional cybersecurity problems in a new wrapper. It will require new skills and new controls, but cybersecurity practitioners fundamentally understand the cycle of identify, protect, detect, respond, and recover. Today, practitioners can access excellent resources to enhance their skills in this domain, such as the Offensive AI Compilation. Here’s a high-level overview of potential attacks against the vulnerabilities present in AI and ML models and their implications:

  • Model theft. AI and generative AI models will become the basis of your business model and will generate new and preserve existing revenue or help cut costs by optimizing existing processes. For some businesses, this is already true (Anthropic considers the underlying model[s] that make up Claude a trade secret, I’m guessing), and for others, it will soon be a reality. Cybersecurity teams will need to help data scientists, MLOps, and developers to prevent extraction attacks. If I can train a model to produce the same output as yours, then I’ve effectively stolen yours — but I’ve also reduced or eliminated any competitive advantage granted by your model.
  • Inference attacks. Inference attacks are designed to gain information about a model that was not otherwise intended to be shared. Adversaries can identify the data used in training or the statistical characteristics of your model. These attacks can inadvertently cause your firm to leak sensitive data used in training, equivalent to many other data leakage scenarios your firm wants to prevent.
  • Data poisoning. Forrester started writing and presenting on issues related to data integrity all the way back in 2018, preparing for this eventuality. In this scenario, an attacker will introduce back doors or tamper with data such that your model produces inaccurate or unwanted results. If your models produce outputs that include automated activity, this kind of attack can cascade, leading to other failures as a result. While the attack did not involve ML or AI, Stuxnet is an excellent example of an attack that greatly utilized data poisoning by providing false feedback to the control layer of systems. This could also result in an evasion attack — a scenario that all security practitioners should worry about. Cybersecurity vendors rely on AI and ML extensively for detecting and attributing adversary activity. If an adversary poisons a security vendor’s detection models, causing it to misclassify an attack as a false negative, the adversary can now use that technique to bypass that security control in any customer of that vendor. This is a nightmare scenario for cybersecurity vendors … and the customers who rely on them.
  • Prompt injection. There’s an enormous amount of information related to prompt injection already available. The issue for security pros to consider here is that, historically, to attack an application or computer, you needed to talk to the computer in the language the computer understood: a programming language. Prompt injection changes this paradigm because now an attacker only needs to think about clever ways to structure and order queries to make an application using generative AI based on a large language model behave in unexpected, unintended, and undesired ways by its administrators. This lowers the barrier to entry, and generative AI producing code that can exploit a computer does not help matters.

These attacks tie together in a lifecycle, as well: 1) An adversary might start with an inference attack to harvest information about training data or statistical techniques used in the model; 2) harvested information is used as the basis of a copycat model in model theft; and 3) all the while, data poisoning happens to produce incorrect results in an existing model to further refine the copycat and sabotage your processes that rely on your existing model.

How To Defend Your Models

Note that there are specific techniques that the people building these models can use to increase their security, privacy, and resilience. We do not focus on those here, because those techniques require the practitioners building and implementing models to make those choices early — and often — in the process. It is also no small feat to add homomorphic encryption and differential privacy to an existing deployment. Given the nature of the problem and how rapidly the space will accelerate, this blog will focus on what security pros can control now. Here are some ways that we expect products to surface to help security practitioners solve these problems:

  • Bot management. These offerings already possess capabilities to send deceptive responses on repeated queries of applications, so we expect features like this to become part of protecting against inference attacks or prompt injection, given that both use repeated queries to exploit systems.
  • API security. Since many integrations and training scenarios will feature API-to-API connectivity, API security solutions will be one aspect of securing AI/ML models, especially as your models interact with external partners, providers, and applications.
  • AI/ML security tools. This new category has vendors offering solutions to directly secure your AI and ML models. HiddenLayer won RSA’s 2023 Innovation Sandbox and is joined in the space by CalypsoAI and Robust Intelligence. We expect several other model assurance, model stress testing, and model performance management vendors to add security capabilities to their offerings as the space evolves.
  • Prompt engineering. Your team will need to train up on this skill set or look to partners to acquire it. Understanding how generative AI prompts function will be a requirement, along with creativity. We expect penetration testers and red teams to add this to engagements to assess solutions incorporating large language models and generative AI.

And we’d be remiss not to mention that these technologies will also fundamentally change how we perform our jobs within the cybersecurity domain. Stay tuned for more on that soon.

In the meantime, Forrester clients can request guidance sessions or inquiries with me to discuss securing the business adoption of AI, securing AI/ML models, or threats using AI. My colleague Allie Mellen covers AI topics such as using AI in cybersecurity, especially for SecOps and automation.