The financial services industry (FSI) certainly isn’t short on AI ambition. Contact centres are already using it to clear backlogs, interact with customers, and increase productivity. Payment teams are applying it to digitize and speed up processing. Capital markets have relied on machine learning for years to optimize trading strategies.
Yet this rapid adoption comes with a risky reality: Institutions are deploying AI faster than security and governance can keep pace, resulting in fragmented ownership, inconsistent controls, and cyber teams playing catch-up to AI-led initiatives already in motion.
A chief information security officer’s (CISO) mandate has always been to protect value across systems, data, customers, and capital. But as agentic systems reason and act on their own, trust guardrails designed for human users struggle to keep up, leaving CISOs accountable for risks in environments they didn’t design. The result? A widening gap between how AI is being used and how it’s being governed.
The role of cyber is changing. Can your cyber function deliver the security and trust infrastructure needed to enable AI transformations at the pace FSI demands?
Across financial services, the business case for AI is clear. Early pilots and productivity programs are giving way to real deployment as generative AI enters frontline workflows and established machine learning models evolve with new generative and agentic capabilities.
However, security and governance frameworks are not always built in from the start. As AI use cases multiply, oversight often remains tied to legacy risk structures that were never designed to govern autonomous systems operating at machine speed, creating gaps in ownership and accountability. When cyber is engaged only after the fact, teams are left securing systems they had little role in shaping.
In an industry where decisions must be defensible and regulatory-ready, accountability and oversight can no longer be siloed. Cybersecurity, governance, architecture, and operating models must evolve together, starting with the foundations that support them.
Cyber teams are under pressure to keep pace with AI while managing cost, regulation, and complexity. These four no-regret steps will help FSI leaders strengthen oversight without sacrificing speed or ambition.
Automate the cyber function first: As AI reshapes financial services, traditional guardrails are reaching their limits. Cyber teams across FSI are already operating at full capacity, and adding more manual oversight or labour-heavy workflows won’t close the gap.
Before asking for more budget, cyber leaders should embed AI directly into their own operations: automating repetitive controls, accelerating detection, and streamlining alert triage. Applying AI to your own function first unlocks the capacity and structural resilience required to secure emerging, more complex AI risks at enterprise scale.
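To make this concrete, here is a minimal sketch of AI-assisted alert triage, assuming a simple risk score stands in for a trained model. The alert fields, weights, and thresholds below are hypothetical placeholders, not a production SOC design: low-risk noise is auto-closed, high-risk alerts go straight to analysts, and the middle band is enriched with context before review.

```python
# Illustrative sketch only: hypothetical fields, weights, and thresholds.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    source: str
    features: dict  # e.g. {"failed_logins": 0.9, "new_geo": 0.4}

def score(alert: Alert, weights: dict) -> float:
    """Weighted sum standing in for a trained model's risk score."""
    return sum(weights.get(k, 0.0) * v for k, v in alert.features.items())

def triage(alert: Alert, weights: dict,
           auto_close: float = 0.2, escalate: float = 0.7) -> str:
    """Route an alert based on its score."""
    s = score(alert, weights)
    if s < auto_close:
        return "auto-closed"            # routine noise handled by automation
    if s >= escalate:
        return "escalated-to-analyst"   # humans keep the high-risk queue
    return "enriched-for-review"        # middle band gets added context first

WEIGHTS = {"failed_logins": 0.6, "new_geo": 0.3, "privileged_account": 0.8}

if __name__ == "__main__":
    a = Alert("A-1042", "siem", {"failed_logins": 0.9, "privileged_account": 1.0})
    print(a.id, triage(a, WEIGHTS))  # A-1042 escalated-to-analyst
```

The middle band is where the capacity gain shows up: automation assembles the context analysts would otherwise gather by hand, so human attention starts from evidence rather than raw telemetry.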
Build AI literacy across the cyber team: Cyber teams can’t secure what they don’t fully understand. AI literacy must evolve beyond legacy security frameworks into a deep, practical understanding of how machine learning models are built, how generative AI systems behave, and how agentic AI can initiate actions independently. Elevating AI literacy through upskilling, targeted hiring, and establishing AI-focused security engineering roles ensures your team can evaluate AI systems with confidence and design controls that keep pace with rapid adoption.
Redesign cyber with AI at its core: Before securing AI at scale, cyber must evolve into an AI-driven trust operating system itself, moving beyond a collection of manual review points. Legacy security models were designed for human-led systems, not machine learning pipelines, generative models, or agentic workflows. Retrofitting those controls is rarely enough when AI is embedded directly into core platforms and decision-making processes.
Managing AI effectively requires a different operating model. Governance must be clarified through shared accountability across cyber, model risk, and the business. At the same time, security must be engineered into every stage of the AI lifecycle as a feature, not a bolt-on. That means moving from point-in-time approvals to continuous assurance across AI-driven delivery pipelines. Without redesigning the cyber function and its operating model, teams will struggle to deliver on the CISO’s mandate as AI’s role keeps expanding.
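One way to picture continuous assurance is as a gate that runs release controls as code on every model deployment rather than at a periodic review. The sketch below is illustrative; the control names, thresholds, and release fields are hypothetical stand-ins for an institution’s actual model risk and security policies.

```python
# Illustrative sketch only: hypothetical controls and thresholds.
from datetime import datetime, timezone

REQUIRED_CONTROLS = {
    "model_card_present": lambda r: bool(r.get("model_card")),
    "eval_passed":        lambda r: r.get("eval_score", 0.0) >= 0.90,
    "owner_assigned":     lambda r: bool(r.get("accountable_owner")),
    "pii_scan_clean":     lambda r: r.get("pii_findings", 1) == 0,
}

def assurance_gate(release: dict) -> dict:
    """Evaluate every control and emit a timestamped, auditable verdict."""
    failures = [name for name, check in REQUIRED_CONTROLS.items()
                if not check(release)]
    return {
        "release": release.get("name", "unknown"),
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "approved": not failures,
        "failed_controls": failures,
    }

if __name__ == "__main__":
    candidate = {"name": "fraud-model-v7", "model_card": "...",
                 "eval_score": 0.93, "accountable_owner": "model-risk",
                 "pii_findings": 0}
    print(assurance_gate(candidate))  # approved: True, no failed controls
```

Because the verdict is produced on every release and carries its own timestamp and evidence, it doubles as the audit trail that point-in-time approvals never generate.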
Prepare for systems that act, not just advise: As AI moves from generating insights and content to initiating actions, the exposure profile changes fundamentally. Agentic AI introduces new considerations and requires a higher standard of governance, accountability, and auditability, especially where customer impact and regulatory defensibility matter most.
To maintain regulatory confidence and client trust, cyber leaders must understand how actions are triggered, what safeguards must exist, when humans intervene, and how decisions are recorded for auditability. Preparing for this shift demands stronger oversight frameworks designed with regulators in mind.
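A simple way to picture those safeguards is an action gateway that sits between the agent and any system it can act on. The sketch below is illustrative, with hypothetical action names and a deliberately crude risk tier: high-impact actions are held for human approval, and every decision is recorded before anything executes.

```python
# Illustrative sketch only: hypothetical actions and a crude risk tier.
import json
from datetime import datetime, timezone

HIGH_IMPACT = {"move_funds", "close_account", "change_limits"}
AUDIT_LOG = []  # stand-in for an append-only, tamper-evident store

def record(event: dict) -> None:
    """Write a timestamped entry before any action is allowed to run."""
    event["ts"] = datetime.now(timezone.utc).isoformat()
    AUDIT_LOG.append(json.dumps(event, sort_keys=True))

def gate_action(agent_id: str, action: str, params: dict,
                human_approved: bool = False) -> bool:
    """Decide whether an agent-initiated action may proceed."""
    decision = "allowed"
    if action in HIGH_IMPACT and not human_approved:
        decision = "held_for_human"  # humans intervene before high-impact steps
    record({"agent": agent_id, "action": action,
            "decision": decision, "params": params})
    return decision == "allowed"

if __name__ == "__main__":
    gate_action("agent-7", "move_funds", {"amount": 25000})      # held
    gate_action("agent-7", "send_status_update", {"to": "ops"})  # allowed
    print(*AUDIT_LOG, sep="\n")
```

The design choice that matters is ordering: the audit record is written before execution, so even a blocked or failed action leaves the evidence regulators and auditors will ask for.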
AI is embedded in the way FSI moves capital, manages risks, and engages customers. Whether organizations react or lead will depend on the strength of their foundations.
Deloitte’s Cyber for AI blueprint provides a clear, actionable framework to integrate security into AI end-to-end, identifying specific risks and defining the controls required to protect critical systems as adoption accelerates.