
Artificial Intelligence and data

Reality bites


The year ahead will test how well financial services (FS) firms can balance ambition with robust guardrails in their use of Artificial Intelligence (AI). Our latest survey shows that appetite for AI remains strong: 94% of firms plan to increase investment in the next 12 months, with 39% expecting a significant rise.1

Boards rightly see AI as a potent force for transformation. However, the shift from experimentation to scaling AI use cases into full production, particularly in an outcome-based regulatory environment, remains a challenge. Establishing effective AI governance and staying within risk appetite, especially for more complex systems such as Generative AI, is a particular hurdle. Nearly a third of respondents cite managing AI risks (29%) and meeting regulatory obligations (28%) as the main obstacles to realising returns. 

These pressures will intensify in 2026, as AI moves into more critical processes and complex applications, including Agentic AI. In response, FS supervisors are looking to boards and senior managers to understand the risks and ensure that they are comfortable with the trade-offs between risks and rewards inherent in AI adoption. 

Regulating AI – where are we? 

The international regulatory environment for AI remains a mix of well-established and still-evolving frameworks. International industry standards, distinct from regulation, also play a key role in guiding good practices in governance and risk management. 

On AI-specific rules, the UK and the EU are taking different paths. The UK has no dedicated AI legislation for FS and none is expected. In the EU, implementation of the AI Act remains in flux, with proposals under negotiation to delay compliance deadlines for high-risk AI systems. [See the Spotlight for further details]  

The EU AI Act became law in 2024 and regulates AI through a risk-tiered approach, with its requirements applying in stages. Prohibitions on AI systems posing unacceptable risks have applied since February 2025. However, the timeline for applying the requirements for 'high-risk' systems remains uncertain, reflecting the ongoing debate about their impact on innovation and delays in finalising the technical standards required for high-risk AI implementation. 

In late 2025, the Commission acknowledged that the high-risk framework would not be ready by its original August 2026 start date. This led the Commission to propose an AI Digital Omnibus, under which EU institutions are currently negotiating legislative changes to the AI Act. These changes could extend ‘high-risk’ implementation deadlines by up to 16 months, to a backstop compliance date of 2 December 2027. However, negotiations between the Commission, Parliament, and Council are likely to be protracted. 

If the AI Digital Omnibus proposals are ultimately adopted, high-risk obligations will apply six months after the Commission confirms (expected by H2 2027 at the latest) that the necessary standards and guidance are in place. If no such confirmation is issued, the aforementioned backstop compliance deadline of 2 December 2027 would apply. If the proposals are not adopted, the original AI Act timeline of 2 August 2026 will stand (see Figure 1). 

Figure 1: AI Act implementation timeline - original vs. AI Digital Omnibus proposal


Source: Deloitte ECRS analysis
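For readers who want to trace the conditional logic, the sketch below encodes the three scenarios described above in a short Python function. The dates are taken from the text; the six-month offset and the treatment of the backstop as a cap are simplifying assumptions for illustration, not a legal reading of the AI Act or the Omnibus proposals.

```python
from datetime import date, timedelta
from typing import Optional

# Dates from the article; the decision logic below is an illustrative simplification.
ORIGINAL_DEADLINE = date(2026, 8, 2)   # original AI Act high-risk application date
BACKSTOP_DEADLINE = date(2027, 12, 2)  # proposed backstop under the AI Digital Omnibus

def high_risk_deadline(omnibus_adopted: bool, confirmation_date: Optional[date]) -> date:
    """Indicative compliance date for high-risk AI systems under the scenarios above."""
    if not omnibus_adopted:
        return ORIGINAL_DEADLINE                  # original timeline stands
    if confirmation_date is None:
        return BACKSTOP_DEADLINE                  # no Commission confirmation issued
    # Obligations apply six months (approximated here as 183 days) after the
    # Commission confirms standards are ready, assumed capped at the backstop date.
    return min(confirmation_date + timedelta(days=183), BACKSTOP_DEADLINE)

print(high_risk_deadline(True, date(2027, 6, 1)))   # 2027-12-01
print(high_risk_deadline(True, None))               # 2027-12-02
print(high_risk_deadline(False, None))              # 2026-08-02
```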

Assuming the Omnibus proposals are adopted, high-risk AI systems used in FS – including credit scoring, health and life insurance risk assessment and pricing, and employment-related systems – will likely need to comply with the AI Act at some point between Q1 2027 and the end of 2027. This extension is no reason to down tools. Over the coming year, we expect a multitude of technical standards, guidance and supervisory clarifications to be issued, leaving limited time for implementation. Firms that wait for complete clarity may find themselves short of time. 

Yet AI-specific regulation is only a small part of the story. In both jurisdictions, supervisors will continue to rely mainly on the existing suite of technology-neutral FS frameworks and, where personal data is used, data protection rules. This means that a number of AI use cases in FS – including credit risk models for capital calculations, transaction monitoring, trading algorithms and financial advice – will be assessed primarily through prudential and model risk management standards, conduct requirements, operational resilience rules and, where relevant, the EU and UK General Data Protection Regulation (GDPR). Once in force, the AI Act will sit alongside these regimes in the EU, adding further requirements only for those AI systems that fall within its high-risk scope. 

With this context set, the key question becomes: what do supervisors expect of firms now? 
 

AI governance, accountability and outcomes 

Effective AI governance and accountability will determine the pace and scale of AI adoption in FS. Supervisors in both the EU and the UK are consistent on one point: AI is a technological tool, and firms remain responsible for using it safely and in compliance with their regulatory obligations.  

“As firms increasingly consider use of AI in higher impact areas of their businesses such as credit risk assessment, capital management and algorithmic trading, we should expect a stronger, more rigorous degree of oversight and challenge by their management and boards – in particular given AI’s autonomy, dynamism and lack of explainability.” 

Sarah Breeden, Deputy Governor for Financial Stability, Bank of England2 

Supervisors will not conduct a line-by-line review of the source code of AI models. Instead, they will assess whether firms can demonstrate that their AI governance and controls ensure decision-makers understand the risks of their models, can explain and manage uncertainty in their outputs, and can evidence reliable, fair and consistent outcomes. While regulators actively support responsible innovation, as evidenced by the UK Financial Conduct Authority (FCA)’s ‘Supercharged Sandbox’ and ‘Live Testing’ programmes, and the EU’s regulatory sandboxes, a tech-positive stance does not mean lighter scrutiny. 

As AI becomes embedded in core activities and infrastructure, supervisory attention to accountability and effective oversight will intensify. In the UK, the Senior Managers & Certification Regime will be used to review accountability. In the EU, the Capital Requirements Directive 6 moves banking closer to the UK model, including through stronger fit-and-proper standards, clearer individual responsibilities, and wider supervisory powers over board members and senior managers. Across sectors, the European Supervisory Authorities (ESAs) have reinforced the need for clear, transparent accountability arrangements. 

This raises expectations for boards and senior executives. They will need a clear, actionable risk appetite for AI, setting boundaries on where it can be used, acceptable levels of autonomy, and how outcomes are monitored and tested. Effective oversight depends on knowing where AI is deployed, the materiality of each use case, and how performance, incidents and limitations are reported. Boards will need reliable management information on AI performance and risks, and the ability to challenge assumptions, test management confidence, and ensure that AI remains within risk tolerances. They must also be confident that executives can act decisively when issues arise, with clear escalation routes, defined responsibilities, robust controls, and credible plans for pausing AI systems if necessary. 

Figure 2: Significant challenges of using AI/Machine Learning models

Source: Deloitte 2025 EMEA Model Risk Management Survey3

These considerations are most acute in more complex systems such as Generative AI. Validation remains difficult, explainability is limited and outputs can vary by prompt, context or model release. This is attracting increased supervisory scrutiny.4 As a result, many firms now recognise that for high-stakes use cases - such as credit decisions, pricing or fraud detection - targeted statistical models or specialised AI tools may be more effective and better aligned with their risk appetite.  

With some exceptions, responsibility for AI governance across FS remains unevenly allocated. Oversight is often driven by first-line functions or led by Chief Data Officers or Chief Information Officers, whose focus naturally leans towards technical performance. This can mean regulatory and risk considerations receive less attention than they should. More effective models distribute governance across technology, risk and compliance, ensuring a balanced view of performance, safety and regulatory obligations. While accountability may sit in different parts of the organisation depending on each firm's structure, those accountable should ensure that AI risks are managed in line with regulatory expectations and agreed risk tolerances.  

As AI scales, many firms will also need to move beyond pilot-stage, often division-level, oversight towards more standardised governance, centralised for some material AI use cases. This transition requires visible senior leadership support, with boards, risk committees and executives setting the right tone and practices for how AI risk is understood and managed.

In 2026, two supervisory priorities will become even more prominent: data governance and operational resilience. 

Data quality and governance: the foundation that matters 

Data governance is fundamental to effective AI deployment. High-quality, well-managed data underpins transparency, model validation and explainability, fairness and accountable oversight. It also supports cybersecurity, operational resilience and privacy protection.  

Regulators across the EU and UK converge on this view. The ESAs have positioned data governance as a central pillar of AI risk management.5 6 In the UK, both the Prudential Regulation Authority and the FCA have similarly elevated data governance as a priority, with the FCA linking ethical concerns over personal data use and algorithmic bias to the delivery of good consumer outcomes under the Consumer Duty.7 8 

Yet for many firms, data governance remains a persistent challenge. Legacy systems, past acquisitions and fragmented architectures have left data inconsistent, low-quality, and siloed. This makes it harder to train and test AI models effectively, monitor AI-amplified risks or explain behaviour to supervisors, customers or boards. 

The EU AI Act will add further expectations for high-risk systems. Even if compliance deadlines were to slip to 2027, firms should use the time to strengthen data foundations. This includes documenting data provenance, demonstrating that training, validation and testing data are relevant, representative and as free of error or distortion as possible, explaining how bias is identified and mitigated, and ensuring personal data use complies with the EU GDPR. However, EU GDPR enforcement varies across Member States, with some jurisdictions applying markedly stricter interpretations than others. Firms must therefore choose between building to the highest bar and tailoring by market.  
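To make these expectations concrete, the sketch below shows one minimal, hypothetical form such evidence could take: basic provenance metadata recorded alongside a training dataset, and a simple comparison of outcome rates across a protected attribute. The dataset, column names and tolerance threshold are illustrative assumptions only, not a prescribed AI Act or GDPR methodology.

```python
import pandas as pd

# Hypothetical training extract for a credit model; names and values are illustrative.
training_data = pd.DataFrame({
    "applicant_age_band": ["18-30", "31-50", "51+", "18-30", "31-50", "51+"],
    "approved":           [1,       1,       0,     0,       1,       1],
})

# 1. Record basic provenance metadata alongside the dataset.
provenance = {
    "source_system": "core_banking_extract_2025Q4",   # assumed identifier
    "extraction_date": "2026-01-15",
    "preprocessing_steps": ["deduplication", "missing-value imputation"],
}

# 2. Compare approval rates across a protected attribute as a simple bias indicator
#    (the gap between the highest and lowest group rates).
rates = training_data.groupby("applicant_age_band")["approved"].mean()
parity_gap = rates.max() - rates.min()

THRESHOLD = 0.2  # illustrative tolerance; real thresholds are a firm-level policy decision
print(rates)
print(f"Parity gap: {parity_gap:.2f}",
      "-> review and document mitigation" if parity_gap > THRESHOLD else "-> within tolerance")
```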

Although the AI Act's data requirements apply only to high-risk systems, they are likely to become a wider benchmark. EU FS supervisors, who will oversee high-risk AI use cases in FS under the AI Act, are expected to use them to test data governance across all material use cases. In the UK, they can serve as a guide for meeting both UK GDPR requirements and the outcome-based expectations of FS supervisors.  

Operational resilience and third-party risk: the concentration challenge 

Operational resilience is now central to AI supervision, driven by the FS industry’s reliance on a narrow set of technology providers for the AI stack. The Bank of England estimates the top three vendors supply about 75% of cloud, 45% of models, and 30% of data services to UK financial firms.9 This concentration creates significant systemic risk, as a single supplier failure or cyber-attack could cascade across the financial system. 

In this setting, resilience frameworks are the first - and for now the most robust - line of defence against risks arising from a concentrated AI supply chain. 

From 2026, supervision will tighten on two fronts. The first is increased firm-level scrutiny. With the EU's Digital Operational Resilience Act (DORA) and the UK's operational resilience regime now embedded, supervisors will test resilience under these rules as AI scales. Expect close attention to resilience testing, auditability, transparency, and credible business continuity and exit plans, with concentration across cloud, compute, and foundation models treated as a core risk. Supervisors will demand clear mapping of where AI systems are used to deliver important business services, with proof that failovers work in practice.  
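One way to evidence that mapping is a simple register linking each AI use case to the important business service it supports, the vendors it depends on, and whether a failover has actually been exercised. The sketch below is a hypothetical illustration under those assumptions; the names and fields are not drawn from DORA or the UK regime.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class AIUseCase:
    name: str
    important_business_service: str
    vendors: list[str]
    failover_tested: bool  # evidence that a failover has been exercised in practice

# Hypothetical register entries; names are illustrative only.
register = [
    AIUseCase("fraud_scoring", "payments", ["CloudCo", "ModelVendorA"], True),
    AIUseCase("chat_assistant", "customer servicing", ["CloudCo", "ModelVendorB"], False),
    AIUseCase("credit_pre_screen", "lending", ["CloudCo"], False),
]

# Flag use cases supporting important business services without a tested failover.
for uc in register:
    if not uc.failover_tested:
        print(f"Review: {uc.name} ({uc.important_business_service}) has no tested failover")

# Simple concentration view: how many use cases depend on each vendor.
print(Counter(v for uc in register for v in uc.vendors).most_common())
```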

The second track is direct oversight of critical vendors. Under DORA, the EU has already designated the first batch of critical Information and Communication Technology providers – including some of the biggest cloud and AI service providers – over which supervisory teams will have broad inspection powers.  

In the UK, once His Majesty’s Treasury designates a third party as critical (with initial designations expected in 2026), regulators can impose rules directly on the vendor. These include obligations to provide information, undergo regulatory investigations, and undertake scenario exercises and incident-management drills that will involve FS clients.  

The large cloud providers are likely to be captured in the UK as well, given the scale of their support to FS firms. Vendors whose standalone AI offerings might not, by themselves, meet the criteria for designation could still be captured if they deliver tightly bundled services that combine cloud infrastructure, models, and data in ways that are difficult to separate. 

This does not dilute firms' accountability. Boards will still own end-to-end resilience and remain responsible for regulatory compliance. However, vendor supervision can support greater transparency and stronger contractual standards by requiring both vendors and FS firms to ensure contracts are aligned with regulatory requirements. This is likely to generate broader ripple effects for firms. These include increased supervisory interaction, often channelled through their vendors, involving requests for test records, joint drills, and evidence of remediation. Findings from vendor examinations may inform supervisors' views on individual firms and their AI resilience capabilities. For example, where supervisors prompt a model vendor to tighten controls or modify a foundation model, FS firms may need to re-validate outputs or rerun resilience tests. 

Final considerations 

Scaling AI safely, in line with risk appetite and regulatory expectations, requires core capabilities to be in place. Strong AI governance, with clear accountability at board and senior manager level, is essential. So too is a secure foundation of effective model risk management, data governance and operational resilience. Together, these underpin any effective AI strategy. Nor are they merely compliance necessities: done well, AI governance is a strategic enabler, helping firms identify and prioritise use cases, direct investment accordingly, and scale with confidence to realise value.  

While AI governance requires immediate attention, smart data and open finance frameworks are developing more slowly. EU negotiations on the Financial Data Access regulation are making slow progress, due largely to a lack of consensus around scope and timelines, with agreement now unlikely before late 2026. In the UK, progress is marginally faster following the passage of the Data (Use and Access) Act, though momentum remains measured. The FCA's Open Finance roadmap, due in early 2026, will outline priorities, but the rulebook will not be finalised before 2027. In the interim, the FCA is running Open Finance TechSprints, with an initial focus on mortgages and Small and Medium-sized Enterprise (SME) lending use cases, suggesting a phased approach that builds incrementally on the existing open banking framework.

  1. Deloitte, AI ROI: The paradox of rising investment and elusive returns, October 2025, available at: https://www.deloitte.com/uk/en/issues/generative-ai/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html.
  2. BoE, Engaging with the machine: AI and financial stability – speech by Sarah Breeden, October 2024, available at: https://www.bankofengland.co.uk/speech/2024/october/sarah-breeden-keynote-speech-at-the-hong-kong-monetary-authority.
  3. Deloitte, 2025 EMEA Model Risk Management Survey, December 2025, available at: https://www.deloitte.com/dk/en/services/consulting/research/2025-emea-model-risk-management-survey.html.
  4. ECB, Supervisory Priorities 2026-28, November 2025, available at: https://www.bankingsupervision.europa.eu/framework/priorities/html/ssm.supervisory_priorities202511.en.html.
  5. EIOPA, Opinion on AI governance and risk management, August 2025, available at: https://www.eiopa.europa.eu/document/download/88342342-a17f-4f88-842f-bf62c93012d6_en?filename=Opinion%20on%20Artificial%20Intelligence%20governance%20and%20risk%20management.pdf.
  6. ESMA, Public Statement on AI and investment services, May 2024, available at: https://www.esma.europa.eu/document/public-statement-ai-and-investment-services; EBA, Rising application of AI in EU banking and payments sector, September 2025, available at: https://www.eba.europa.eu/sites/default/files/2025-09/146b3558-d026-47bf-a872-f05e93ed30d2/Rising%20application%20of%20AI%20in%20EU%20banking%20and%20payments%20sector.pdf.
  7. FCA, AI Update, April 2024, available at: https://www.fca.org.uk/publication/corporate/ai-update.pdf.
  8. BoE, UK Deposit Takers Supervision: 2025 priorities, January 2025, available at: https://www.bankofengland.co.uk/-/media/boe/files/prudential-regulation/letter/2025/uk-deposit-takers-2025-priorities.pdf.
  9. House of Commons Treasury Committee, Oral evidence: AI in financial services, HC 684, October 2025, available at: https://committees.parliament.uk/oralevidence/16748/html/.
