Organizations that deploy artificial intelligence at scale are discovering its paradox: The same AI capabilities that help them be more competitive can also introduce new security risks. And to add to the paradox, they’re also recognizing that AI offers powerful new capabilities to counter the very vulnerabilities it creates.
Enterprises face multiple threats related to artificial intelligence, including shadow AI deployments, AI-accelerated attacks, and the intrinsic risks of AI systems.1 Yet even as AI drives new threat vectors, traditional cybersecurity principles remain constant and should be applied to autonomous systems that learn, adapt, and operate at machine speed. Many of these techniques will require significant adaptation because most cyber organizations were not designed to lean on digital intelligence.
The window for reactive security approaches is closing. Last year, many organizations focused on mobilizing AI and exploring its possibilities. Now, as they realize the risks of unchecked adoption, they’re cataloging emerging threats and implementing targeted governance frameworks that help balance innovation speed with security.
External threats persist. Deepfakes, synthetic personas, and AI-powered social engineering continue to evolve, as we discussed in Tech Trends 2024. But many of today’s most pressing AI-related risks originate inside the organization. Two such risks are shadow AI and inadequate controls on agentic AI governance.
Shadow AI, the unsanctioned AI deployment implemented by individual teams across enterprises, creates governance blind spots and introduces autonomous decision-making systems that can access sensitive data, make consequential choices, and interact with other systems.2 Each deployment represents a potential source of data leakage, model manipulation, model drift, or unauthorized access. Many of the approaches enterprises have developed over the last several years to respond to shadow IT deployments include monitoring the network to discover all applications and developing policies to ensure new deployments comply with privacy and security standards.3
Over the past year, enterprises have focused on effectively integrating AI into business process workflows. Now, as they scale AI use cases across operations, they’re discovering that AI adoption creates a new set of risks that have corresponding mitigation strategies.
AI security risks manifest across four domains: data, AI models, applications, and infrastructure. While organizations continue to discover the full scope of threats, the window for reactive approaches is closing. Many existing security practices can be adapted to address these AI-specific risks.
Data security risks: Large language models (LLMs) and other AI systems concentrate vast amounts of information in single locations and therefore require additional protection. Data security concerns encompass information handled by AI models during training, testing, validation, and inference after deployment (figure 1).
AI model security risks: Model security encompasses the model’s architecture and unique training parameters and training, testing, and validation processes (figure 2). Transparency requirements often differ by model type, creating important regulatory considerations.
Application security risks: These risks are related to the external layer hosting the model and sitting on infrastructure, acting as the user interface through which users and systems interact with AI capabilities (figure 3).
Infrastructure security risks: Infrastructure security encompasses the hardware and networking components used for developing and hosting AI systems, representing the foundational layer on which AI capabilities operate (figure 4).
Sanmi Koyejo is an assistant professor at Stanford University and the co-founder of Virtue AI, which develops enterprise solutions for AI safety and security. His research on AI evaluation, adversarial robustness, and safety assessment has been implemented in production systems at major technology companies.
Q: Compared with traditional computing systems, why are AI systems difficult to protect?
A: AI systems have very different behaviors, use cases, and scope than much of the computing infrastructure we’ve seen in the past.
The biggest difference is how much more flexible and contextual AI is compared with classic computing systems. This means many of the tool sets that had been developed and perfected for traditional security are not nearly as effective when applied to AI systems—and even less so when applied to emerging frameworks like agentic systems.
Also, in classic computing, data and compute were siloed, so you could use traditional cybersecurity techniques to separate what’s attacking data from what’s attacking infrastructure. But in AI systems, the data and the compute are combined, so attacking one often means attacking both.
The emerging AI use cases and the complexity of the threat surface require rethinking what it means to secure computing systems. AI systems have become so good at merging in with standard traffic and looking like human information that many traditional detection strategies fail.
There’s a lot of excitement about going beyond language to vision, audio, and other multimodal systems. People engage with audio or video much more than [they do with] text, so they believe things more readily. The risk to stakeholders and to computing system infrastructure grows because of the broader modality set and this ability to engage with different modes.
Q: In terms of security for AI, what approaches are emerging?
A: On the ecosystem front, something very interesting is happening. We’ve seen two main buckets of companies. They’re [taking] highly complementary but different approaches.
First, there are classic cybersecurity companies adding on AI—both in the security-for-AI space and the AI-for-security space. They’ve been investing in AI to help solve classic security issues, but most interestingly, they’re adding security-for-AI capabilities like data sanitation, guardrails, and AI firewalls against prompt injections and other agentic deployment issues.
The other bucket is native AI folks tackling security for AI. It’s a fascinating contrast. The approaches look different when you’re starting from security and asking how to scaffold to handle AI threats versus starting from native AI infrastructure, where you’ve built these systems, understand them deeply, understand their vulnerabilities, and exploit that knowledge to think about security infrastructure.
I believe AI-native approaches are likely much more effective for security-for-AI applications. Native AI companies have a special sauce because they understand the systems much better and can be much more targeted than traditional security companies.
Q: Looking two to four years out, what kinds of other attacks or attack vectors can we expect? Or is it impossible to predict because of how fast this is moving?
A: My experience and broad frame of reference is that risks and capabilities tend to go along with each other quite closely. The more the system can do, the more we give it access and agency, and the more we have new kinds of security surfaces to cover. So, if you want to see where risks are going, look at where capability is going—what people are trying to do with it, what use cases people are excited about, and where future investment is going.
With AI, the excitement about [the] technology leads us to ignore security and safety issues, focus on a capability, and then realize we left a gap. We should take another look and figure out what security risks might arise, and treat security and safety as model capabilities as well.4
Most of the practices needed to secure AI deployments aren’t new; they’re just being updated to address AI risks. Rich Baich, senior vice president and chief information security officer at AT&T, says his approach to mitigating AI risks leans heavily on existing cybersecurity leading practices. In particular, he focuses on enforcing strong software development life cycle approaches. Regardless of whether a tool is homegrown or vendor-supported, each should be tested, red-teamed (see the section “Advanced AI-native defense strategies”), meet architectural requirements, and have access controls in place. Baich says this approach allows his team to bring in the AI tools they need to innovate and push operations forward while ensuring they aren’t creating new problems.
“What we’re experiencing today is not much different than what we’ve experienced in the past,” Baich says. “The only difference with AI is speed and impact.”5
Accelerated attack timelines, shadow AI, and the complexity of managing autonomous agents mean that basic security table stakes—from data cataloging to agent monitoring—have become urgent requirements. But as we’ll demonstrate in the following section, AI is also playing a bigger role within cyber, risk, and compliance teams, helping them tackle some of these emerging challenges.
AI introduces new vulnerabilities, but it also provides powerful defensive capabilities. Leading organizations are exploring how AI can help them operate at machine speed and adapt to evolving threats in real time. AI-powered cybersecurity solutions help identify patterns humans miss, monitor the entire landscape, speed up threat response, anticipate attacker moves, and automate repetitive tasks. These capabilities are changing how organizations approach cyber risk management.
One area where cyber teams are taking advantage of AI is red teaming. This involves rigorous stress testing and challenging of AI systems by simulating adversarial attacks to identify vulnerabilities and weaknesses before adversaries can exploit them. This proactive approach helps organizations understand their AI systems’ failure modes and security boundaries.
Brazilian financial services firm Itau Unibanco has recruited agents for its red-teaming exercises. It employs a sophisticated approach in which human experts and AI test agents are deployed across the company. These “red agents” use an iterative process to identify and mitigate risks such as ethics, bias, and inappropriate content.
“Being a regulated industry, trust is our No. 1 concern,” says Roberto Frossard, head of emerging technologies at Itau Unibanco. “So that’s one of the things we spent a lot of time on—testing, retesting, and trying to simulate different ways to break the models.”6
AI is also playing a role in adversarial training. This machine learning technique trains models on adversarial examples—inputs designed to fool or attack the model—helping them recognize and resist manipulation attempts and making the systems more robust against attacks.
Enterprises using AI face new compliance requirements, particularly in health care and financial services, where they often need to explain the decision-making process.7 While this process is typically difficult to decipher, certain strategies can help ensure that AI deployments are compliant.
Some organizations are reassessing who oversees AI deployment. While boards of directors traditionally manage this area, there’s a growing trend to assign responsibility to the audit committee, which is well-positioned to continually review and assess AI-related activities.8
Governing cross-border AI implementations will remain important. The situation may call for data sovereignty efforts to ensure that data is handled locally in accordance with appropriate rules, as discussed in “The AI infrastructure reckoning.”
Agents operate with a high degree of autonomy by design. With agents proliferating across the organization, businesses will need sophisticated agent monitoring to analyze, in real time, agents’ decision-making patterns and communication between agents, and to automatically detect unusual agent behavior beyond basic activity logging. This monitoring enables security teams to identify compromised or misbehaving agents before they cause significant damage.
Dynamic privilege management is one aspect of agent governance. This approach allows teams to manage hundreds or even thousands of agents per user while maintaining security boundaries. Privilege management policies should balance agent autonomy with security requirements, adjusting privileges based on context and behavior.
Governance policies should incorporate life cycle management that controls agent creation, modification, deactivation, and succession planning—analogous to HR management for human employees but adapted for digital workers, as covered in “The agentic reality check.” This can help limit the problem of orphaned agents, bots that retain access to key systems even after they’ve been offboarded.
As AI agents become empowered to spin up their own agents, governance will grow more pressing for enterprises. This capability raises significant questions about managing privacy and security, as agents could become major targets for attackers, particularly if enterprises lack visibility into what these agents are doing and which systems they can access.
Many cyber organizations are using AI as a force multiplier to overcome complex threats. AI models can be layered on top of current security efforts as enhanced defense mechanisms.
AI can assist with risk scoring and prioritization, third-party risk management, automated policy review and orchestration, cybersecurity maturity assessments, and regulatory compliance support. When deployed in these areas, AI capabilities enable security teams to make faster, more informed decisions about resource allocation.
AI is also playing a role in controls testing and automation, secure code generation, vulnerability scanning capabilities, systems design optimization, and model code review processes. This accelerates the identification and remediation of security weaknesses.
Cybersecurity team operations weren’t designed for AI, but business efforts to implement AI throughout the organization create an opportunity to rethink current cyber practices. As businesses roll out AI (and agents in particular) across their operations, many are choosing to completely reshape the workforce, operating model, governance model, and technology architecture. While rearchitecting operations to take advantage of AI agents, organizations should build security considerations into foundational design rather than treating them as an afterthought. This proactive approach to heading off emerging cyber risks can prepare enterprises for today’s threats and position them well against dangers that are likely to hit two to five years down the road, which is the subject of the following section.
As we look ahead, emerging trends may challenge fundamental assumptions about cybersecurity, physical security, and even geopolitical stability. While some scenarios remain speculative, understanding potential futures enables organizations to prepare architectures and governance frameworks that can adapt as threats evolve.
As AI proliferates across every physical system—power grids, water treatment facilities, transportation networks, supply chains, health care delivery systems—and as AI capabilities improve, physical risks increase exponentially. The convergence of AI and physical infrastructure creates attack surfaces that could lead to unprecedented disruption.
Future threats may involve single attacks corrupting AI systems simultaneously across multiple sectors, including transportation, health care, and utilities. An adversary gaining access to interconnected AI systems could orchestrate cascading failures that compound across critical infrastructure sectors.
Sophisticated attacks could employ “boiling frog” tactics, where AI systems subtly degrade system performance over months, making detection difficult until significant damage has accumulated.
Organizations can prepare for AI–physical convergence risks through several approaches.
The evolution toward autonomous cyber warfare—AI-versus-AI combat with fully automated attack and defense systems operating at machine speed without human intervention—represents a paradigm shift in cybersecurity.
Future attack capabilities may include:
As AI security evolves, two emerging frontiers warrant urgent attention despite their nascent development: space-based infrastructure and quantum computing.
Space infrastructure vulnerability: The commercial space industry has opened new attack surfaces; every satellite is essentially a computer vulnerable to exploitation. As adversaries develop capabilities to infiltrate satellites, the potential for disruption extends to GPS, communications, weather monitoring, and a nation’s security systems.
Quantum communication channels: Quantum communication promises theoretically unbreakable encryption but also threatens to render current encryption methods obsolete. As discussed in Tech Trends 2025, organizations should prepare for this transition while securing quantum communication infrastructure against adversaries seeking to compromise or control these capabilities.
Organizations should simultaneously pursue both innovation and security through strategic frameworks that embed security into AI initiatives from inception.
Businesses can start by implementing fundamental security controls: data security, access management, model protection, and infrastructure hardening. Skipping these fundamentals in pursuit of rapid AI deployment can create vulnerabilities that may eventually compromise their competitive position.
From there, enterprises may consider investing in advanced AI-powered defense capabilities. Fighting AI threats requires AI-powered security systems that can operate at machine speed, identify subtle attack patterns, and adapt to evolving adversary tactics. The organizations that treat AI security as a force multiplier rather than a cost center are likely to build lasting defensive advantages.
Finally, preparing architectures and governance frameworks for emerging threats will become more important beyond the next few years. While autonomous cyber warfare and AI-physical convergence may seem distant, building adaptable security architectures today helps build organizational resilience tomorrow as the threat landscape evolves.
The AI dilemma is ultimately not a dilemma at all; it’s a call to action. Organizations that approach AI security strategically, implementing multiple defense layers while innovating rapidly, can better protect their assets and may be able to establish competitive differentiation through leading risk management capabilities. The future belongs to enterprises that master this balance, treating security as an enabler of AI adoption, not a constraint.