AI systems have very different behaviors, use cases, and scope than much of the computing infrastructure we’ve seen in the past. The biggest difference is how much more flexible and contextual AI is compared with classic computing systems. This means many of the tool sets that have been developed and perfected for traditional security are not nearly as effective when applied to AI systems—and even less so when applied to emerging frameworks like agentic systems.
In classic computing, data and compute were siloed, so you could use traditional cybersecurity techniques to separate what’s attacking data from what’s attacking infrastructure. But in AI systems, the data and the computer are combined, so attacking one often means attacking both.
The emerging AI use cases and the complexity of the threat surface require rethinking what it means to secure computing systems. AI systems have become so good at merging in with standard traffic and looking like human information that many traditional detection strategies fail.
There’s a lot of excitement about going beyond language to vision, audio, and other multimodal systems. People engage with audio or video much more than [they do with] text, so they believe things more readily. The risk to stakeholders and to computing system infrastructure grows because of the broader modality set and this ability to engage with different modes.