The year ahead will test how well financial services (FS) firms can balance ambition with robust guardrails in their use of Artificial Intelligence (AI). Our latest survey shows that appetite for AI remains strong: 94% of firms plan to increase investment in the next 12 months, with 39% expecting a significant rise.1
Boards rightly see AI as a potent force for transformation. However, the shift from experimentation to scaling AI use cases into full production, particularly in an outcome-based regulatory environment, remains a challenge. Establishing effective AI governance and staying within risk appetite, especially for more complex systems such as Generative AI, is a particular hurdle. Nearly a third of respondents cite managing AI risks (29%) and meeting regulatory obligations (28%) as the main obstacles to realising returns.
These pressures will intensify in 2026, as AI moves into more critical processes and complex applications, including Agentic AI. In response, FS supervisors are looking to boards and senior managers to understand the risks and ensure that they are comfortable with the trade-offs between risks and rewards inherent in AI adoption.
The international regulatory environment for AI remains a mix of well-established and still-evolving frameworks. International industry standards, distinct from regulation, also play a key role in guiding good practices in governance and risk management.
On AI-specific rules, the UK and the EU are taking different paths. The UK has no dedicated AI legislation for FS and none is expected. In the EU, implementation of the AI Act remains in flux, with proposals under negotiation to delay compliance deadlines for high-risk AI systems. [See the Spotlight for further details]
Yet AI-specific regulation is only a small part of the story. In both jurisdictions, supervisors will continue to rely mainly on the existing full suite of technology-neutral FS frameworks and, where personal data is used, data protection rules. This means that a number of AI use cases in FS – including credit risk models for capital calculations, transaction monitoring, trading algorithms and financial advice – will be assessed primarily through prudential and model risk management standards, conduct requirements, operational resilience rules and, if relevant, the EU and UK General Data Protection Regulation (GDPR). Once in force, the AI Act will sit alongside these regimes in the EU, adding further requirements only for those AI systems that fall within its high-risk scope.
With this context set, the key question becomes: what do supervisors expect of firms now?
Effective AI governance and accountability will determine the pace and scale of AI adoption in FS. Supervisors in both the EU and the UK are consistent on one point: AI is a technological tool, and firms remain responsible for using it safely and in compliance with their regulatory obligations.
“As firms increasingly consider use of AI in higher impact areas of their businesses such as credit risk assessment, capital management and algorithmic trading, we should expect a stronger, more rigorous degree of oversight and challenge by their management and boards – in particular given AI’s autonomy, dynamism and lack of explainability.”
Sarah Breeden, Deputy Governor, Financial Stability, Bank of England2
Supervisors will not conduct a line-by-line review of the source code of AI models. Instead, they will assess whether firms can demonstrate that their AI governance and controls ensure decision-makers understand the risks of their models, can explain and manage uncertainty in their outputs, and can evidence reliable, fair and consistent outcomes. While regulators actively support responsible innovation, as evidenced by the UK Financial Conduct Authority (FCA)’s ‘Supercharged Sandbox’ and ‘Live Testing’ programmes, and the EU’s regulatory sandboxes, a tech-positive stance does not mean lighter scrutiny.
As AI becomes embedded in core activities and infrastructure, supervisory attention to accountability and effective oversight will intensify. In the UK, the Senior Managers & Certification Regime will be used to assess accountability. In the EU, the Capital Requirements Directive 6 moves banking closer to the UK model, with stronger fit-and-proper standards, clearer individual responsibilities, and wider supervisory powers over board members and senior managers. Across sectors, the European Supervisory Authorities (ESAs) have reinforced the need for clear, transparent accountability arrangements.
This raises expectations for boards and senior executives. They will need a clear, actionable risk appetite for AI, setting boundaries on where it can be used, acceptable levels of autonomy, and how outcomes are monitored and tested. Effective oversight depends on knowing where AI is deployed, the materiality of each use case, and how performance, incidents and limitations are reported. Boards will need reliable management information on AI performance and risks, and the ability to challenge assumptions, test management confidence, and ensure that AI remains within risk tolerances. They must also be confident that executives can act decisively when issues arise, with clear escalation routes, defined responsibilities, robust controls, and credible plans for pausing AI systems if necessary.
[Figure source: Deloitte 2025 EMEA Model Risk Management Survey3]
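To make these oversight inputs more concrete, the sketch below shows one hypothetical way a firm might structure an AI use-case register to generate board-level management information. It is a minimal illustration in Python: the field names, autonomy categories and escalation rule are assumptions made for the example, not a prescribed schema or a regulatory requirement.

```python
# Illustrative sketch only: a hypothetical, minimal AI use-case register supporting
# board-level oversight. Field names and the escalation rule are assumptions for
# illustration, not a prescribed regulatory schema.
from dataclasses import dataclass
from enum import Enum


class Materiality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = "human_in_the_loop"   # a person approves each output
    HUMAN_ON_THE_LOOP = "human_on_the_loop"   # a person monitors and can intervene
    FULLY_AUTOMATED = "fully_automated"       # no routine human review


@dataclass
class AIUseCase:
    name: str
    business_service: str          # the business service the AI system supports
    owner: str                     # accountable senior manager
    materiality: Materiality
    autonomy: Autonomy
    within_risk_appetite: bool     # assessed against the board-approved appetite
    open_incidents: int = 0
    escalation_route: str = "model risk committee"


def board_report(register: list[AIUseCase]) -> list[str]:
    """Flag use cases warranting board attention under a simple, assumed rule:
    high materiality, outside risk appetite, or carrying open incidents."""
    flags = []
    for uc in register:
        if (uc.materiality is Materiality.HIGH
                or not uc.within_risk_appetite
                or uc.open_incidents > 0):
            flags.append(f"{uc.name} ({uc.business_service}): owner={uc.owner}, "
                         f"materiality={uc.materiality.value}, "
                         f"autonomy={uc.autonomy.value}, "
                         f"open incidents={uc.open_incidents}")
    return flags


if __name__ == "__main__":
    register = [
        AIUseCase("credit-scoring-v2", "retail lending decisions", "CRO",
                  Materiality.HIGH, Autonomy.HUMAN_ON_THE_LOOP, True),
        AIUseCase("chat-summariser", "internal knowledge search", "CIO",
                  Materiality.LOW, Autonomy.FULLY_AUTOMATED, True),
    ]
    for line in board_report(register):
        print("ESCALATE:", line)
```

Even in this toy form, the register captures the elements highlighted above: where AI is deployed, how material each use case is, who owns it, and a defined route for escalation when tolerances are breached.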
These considerations are most acute in more complex systems such as Generative AI. Validation remains difficult, explainability is limited and outputs can vary by prompt, context or model release. This is attracting increased supervisory scrutiny.4 As a result, many firms now recognise that for high-stakes use cases - such as credit decisions, pricing or fraud detection - targeted statistical models or specialised AI tools may be more effective and better aligned with their risk appetite.
Responsibility for AI governance across FS remains unevenly distributed, with some exceptions. Oversight is often driven by first-line functions or led by Chief Data Officers or Chief Information Officers, whose focus naturally leans towards technical performance. This can mean regulatory and risk considerations receive less attention than they should. More effective models distribute governance across technology, risk and compliance, ensuring a balanced view of performance, safety and regulatory obligations. While accountability may sit in different parts of the organisation depending on each firm’s structure, those accountable should ensure that AI risks are managed in line with regulatory expectations and agreed risk tolerances.
As AI scales, many firms will also need to move beyond pilot-stage, often division-level, oversight to more standardised governance, centralised for some material AI use cases. This transition requires visible senior leadership support, with boards, risk committees and executives setting the right tone and practices for how AI risk is understood and managed.
In 2026, two supervisory priorities will become even more prominent: data governance and operational resilience.
Data governance is fundamental to effective AI deployment. High-quality, well-managed data underpins transparency, model validation and explainability, fairness and accountable oversight. It also supports cybersecurity, operational resilience and privacy protection.
Regulators across the EU and UK converge on this view. The ESAs have positioned data governance as a central pillar of AI risk management.5 6 In the UK, both the Prudential Regulation Authority and FCA have similarly elevated data governance as a priority, with the FCA linking ethical concerns over personal data use and algorithmic bias to the delivery of good consumer outcomes under the Consumer Duty.7 8
Yet for many firms, data governance remains a persistent challenge. Legacy systems, past acquisitions and fragmented architectures have left data inconsistent, low-quality, and siloed. This makes it harder to train and test AI models effectively, monitor AI-amplified risks or explain behaviour to supervisors, customers or boards.
The EU AI Act will add further expectations for high-risk systems. Even if compliance deadlines were to slip to 2027, firms should use the time to strengthen data foundations. This includes documenting data provenance, demonstrating that training, validation and testing data are relevant, representative and as free of error or distortion as possible, explaining how bias is identified and mitigated, and ensuring personal data use is compliant with EU GDPR. However, EU GDPR enforcement varies across Member States, with some jurisdictions applying markedly stricter interpretations than others. Firms must therefore choose between building to the highest bar and tailoring their approach by market.
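As a purely illustrative companion to these points, the sketch below shows one hypothetical way to record dataset provenance and run a basic representativeness check. The record structure, field names and the simple gap measure are assumptions for illustration; they are not drawn from the AI Act text or any supervisory template.

```python
# Illustrative sketch only: a hypothetical structure for recording dataset provenance
# and a basic representativeness check for training data. All field names and the
# comparison rule are assumptions, not requirements taken from the AI Act.
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class DatasetProvenanceRecord:
    dataset_name: str
    source_systems: list[str]          # where the data originated
    extraction_date: date
    lawful_basis: str                  # GDPR basis relied on for personal data
    known_limitations: list[str]       # gaps, exclusions, known quality issues
    bias_checks_performed: list[str]   # e.g. representation comparisons by group


def representation_gap(training_share: float, population_share: float) -> float:
    """Absolute gap between a group's share of the training data and its share of
    the relevant population; a large gap may signal unrepresentative data."""
    return abs(training_share - population_share)


if __name__ == "__main__":
    record = DatasetProvenanceRecord(
        dataset_name="retail_credit_applications_2020_2024",
        source_systems=["core_banking", "bureau_feed"],
        extraction_date=date(2025, 6, 30),
        lawful_basis="legitimate interests (credit risk assessment)",
        known_limitations=["excludes paper applications received before 2021"],
        bias_checks_performed=["age-band representation vs. applicant population"],
    )
    print(json.dumps(asdict(record), default=str, indent=2))
    print("age 18-25 representation gap:", representation_gap(0.08, 0.14))
```

The point of such a record is less the format than the discipline: provenance, lawful basis, known limitations and bias checks are written down once, kept with the dataset, and can be produced when supervisors, customers or boards ask how the data was assembled.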
Although the AI Act data requirements apply only to high-risk systems, they are likely to become a wider benchmark. EU FS supervisors, who will oversee AI Act high-risk AI use cases in FS, are expected to use them to test data governance across all material use cases. In the UK, they can serve as a guide for meeting both UK GDPR requirements and the outcome-based expectations of FS supervisors.
Operational resilience is now central to AI supervision, driven by the FS industry’s reliance on a narrow set of technology providers for the AI stack. The Bank of England estimates the top three vendors supply about 75% of cloud, 45% of models, and 30% of data services to UK financial firms.9 This concentration creates significant systemic risk, as a single supplier failure or cyber-attack could cascade across the financial system.
In this setting, resilience frameworks are the first - and for now the most robust - line of defence against risks arising from a concentrated AI supply chain.
From 2026, supervision will tighten on two fronts. The first involves increasing firm-level scrutiny. With the EU’s Digital Operational Resilience Act (DORA) and the UK’s operational resilience regime now embedded, supervisors will test resilience under these new rules as AI scales. Expect close attention to resilience testing, auditability, transparency, and credible business continuity and exit plans, treating concentration across cloud, compute and foundation models as a core risk. Supervisors will demand clear mapping of where AI systems are used to deliver important business services, with proof that failovers work in practice.
The second track is direct oversight of critical vendors. Under DORA, the EU has already designated the first batch of critical Information and Communication Technology providers – including some of the biggest cloud and AI service providers – over which supervisory teams will have broad inspection powers.
In the UK, once His Majesty’s Treasury designates a third party as critical (with initial designations expected in 2026), regulators can impose rules directly on the vendor. These include obligations to provide information, undergo regulatory investigations, and undertake scenario exercises and incident-management drills that will involve FS clients.
The large cloud providers are likely to be captured in the UK as well, given the scale of their support to FS firms. Vendors whose standalone AI offerings might not, by themselves, meet the criteria for designation could still be captured if they deliver tightly bundled services that combine cloud infrastructure, models, and data in ways that are difficult to separate.
This does not dilute firms’ accountability. Boards will still own end-to-end resilience and remain responsible for regulatory compliance. However, vendor supervision can support greater transparency and stronger contractual standards, by requiring both vendors and FS firms to ensure contracts are aligned with regulatory requirements. This is likely to generate broader ripple effects for firms. These include increased supervisory interaction, often channelled through their vendors, involving requests for test records, joint drills, and evidence of remediation. Findings from vendor examinations may inform supervisors' views on individual firms and their AI resilience capabilities. For example, where supervisors prompt a model vendor to tighten controls or modify a foundation model, FS firms may need to re-validate outputs or rerun resilience tests.
Scaling AI safely, in line with risk appetite and regulatory expectations, requires some core capabilities to be put in place. Strong AI governance, with clear accountability at board and senior manager level, is essential. So too is a secure foundation of effective model risk management, data governance and operational resilience. Together they underpin any effective AI strategy. These capabilities are not just compliance necessities. Done well, AI governance is a strategic enabler, helping firms identify and prioritise use cases, direct investment accordingly, and scale with confidence to realise value.