Artificial intelligence is rapidly transforming software applications, but integrating AI models also introduces new risks. As highlighted in a recent analysis by NCC Group, AI-powered apps need an augmented threat model to fully understand and mitigate emerging attack vectors.

AI Models as Assets, Controls, and Threat Actors

Machine learning models occupy multiple roles within an application architecture:

  • Assets that contain valuable data and logic
  • Controls intended to make security decisions
  • Potential threat actors that could execute malicious actions

This unique positioning creates opportunities for novel attacks exploiting the AI components.

New Attack Vectors Targeting AI Systems

The analysis identifies numerous new attack vectors that threat actors could leverage:

  • Prompt injection - Attackers modify model behavior by injecting malicious instructions into prompts. For example, overriding safety clauses.
  • Oracle attacks - Blackbox models are queried to extract secrets one bit at a time, similar to padding oracle attacks on crypto.
  • Adversarial inputs - Slight perturbations to images, audio or textcause misclassifications. Used to evade spam filters or defeat facial recognition.
  • Format corruption - By manipulating model outputs, attackers can disrupt downstream data consumers that expect structured responses.
  • Water table attacks - Injecting malicious training data skews model behavior. Poisoning attacks have corrupted AI chatbots before.
  • Persistent world corruption - If models maintain state across users, attackers can manipulate that state to impact other users.
  • Glitch tokens - Models mishandle rare or adversarial tokens in unpredictable ways.
  • Many traditional vulnerabilities like CSRF also translate easily to AI contexts.

Architecting More Secure AI Applications

To mitigate risks, architects should design security controls around AI models as if they were untrusted components. Key recommendations:

  • Isolate models from confidential data and functionality
  • Validate and restrict model inputs/outputs strictly
  • Consider trust boundaries and segment architecture accordingly
  • Employ compensating controls alongside AI components


Even though it can be challenging, using a careful threat modeling approach helps organizations take advantage of AI capabilities while managing risk. As AI becomes more widespread, it's important to understand these unique threats and controls. Organizations that embrace threat modeling will be able to create safer AI-powered systems.
Pillar Security provide expert threat modeling services for AI solutions used in mission-critical business applications. Contact us to learn more.

Subscribe and get the latest security updates

Back to blog