Artificial intelligence and data

Walking the tightrope: openness and control in the age of data and AI


In 2025, financial services (FS) firms will confront a regulatory landscape defined by the tension between increasingly open data ecosystems and intensified scrutiny of data-powered Artificial Intelligence (AI).

Open finance initiatives, alongside the EU Data Act and other UK Smart Data schemes, promise access to vast data troves to foster innovation and growth. Yet, they also bring new compliance demands for data-sharing governance and infrastructure. Simultaneously, looming EU AI Act deadlines and heightened oversight by financial and data protection regulators underscore the increasing regulatory focus on AI and data use. The evolving nature of these policies – and potential divergence within the EU and between the EU and the UK – adds further complexity. 

Firms face the challenge of scaling GenAI and data capabilities to drive efficiency and growth, all while managing novel risks and evolving regulatory expectations. Legacy systems, data quality concerns, and competition from digital disruptors – within and beyond FS – exacerbate these challenges. 

Success requires aligning data and AI strategies with a cohesive digital vision that incorporates evolving regulatory dynamics. This means integrating risk and compliance into all transformation efforts. Risk appetite, governance frameworks, operations, and investment decisions must reflect a regulatory-aware approach to expanding AI and data use. This integration strengthens strategic decision-making, ensures compliance, and fosters trust with consumers, markets, and regulators. By repositioning risk and compliance as enablers of value creation, rather than cost centres, firms can unlock competitive advantages, driving both profitability and sustainable growth.
 

Ticking clock, mounting stakes: navigating new AI regulation
 

Although AI is not new to FS, its growing scale of adoption, complexity, and strategic importance have placed it squarely in regulators’ sights. The EU AI Act, with multiple implementation deadlines extending to 2026, exemplifies this focus. 

The Act’s primary legislation is already in force. However, 2025 will bring a surge of guidance and secondary legislation from the EU AI Office, new AI Act National Competent Authorities (NCAs), and European Standardisation Organisations (ESOs) tasked with defining the technical standards needed to operationalise the Act. Additional measures tailored to financial services may also emerge from the European Supervisory Authorities and national sector regulators. These developments necessitate agile compliance strategies and close monitoring of regulatory changes.
 

Figure 1: EU AI Act implementation timeline: key milestones (non-exhaustive)

A "wait-and-see" approach is risky given the Act's complexity and tight timelines. Firms should proactively interpret its requirements, grounding their efforts in industry standards, leading practices, and ethical frameworks.

Establishing clear roles and responsibilities for AI systems, underpinned by a comprehensive inventory, is an immediate "no-regret" action. These foundational tools, beneficial across all jurisdictions, enable organisations to understand and navigate the impact of evolving regulatory regimes. A robust inventory should detail all AI systems, including those sourced from or used by third parties – a significant area of risk exposure. The inventory must reflect the Act’s broad AI definition and include systems in non-financial areas, such as HR or security. This is critical for identifying systems subject to the Act’s prohibitions from February 2025. Though these bans might seem peripheral to financial services, they still warrant careful consideration. For instance, the ban on AI emotion recognition in the workplace could affect employee compliance monitoring systems.
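
To make this concrete, the sketch below shows how a minimal inventory record might capture the attributes needed to triage systems against the Act's risk tiers, including third-party provenance and use in non-financial functions such as HR. The schema, field names, and tier labels are illustrative assumptions, not a prescribed or standard taxonomy:

```python
from dataclasses import dataclass, field
from enum import Enum

class AIActRiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. workplace emotion recognition (banned from February 2025)
    HIGH_RISK = "high_risk"        # e.g. creditworthiness assessment
    TRANSPARENCY = "transparency"  # e.g. customer-facing chatbots
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a firm-wide AI inventory (illustrative fields only)."""
    system_id: str
    name: str
    business_function: str          # include non-financial areas such as HR or security
    accountable_owner: str          # clear roles and responsibilities
    provider: str                   # "in-house" or a third-party vendor name
    third_party: bool               # flags a key area of risk exposure
    risk_tier: AIActRiskTier
    jurisdictions: list[str] = field(default_factory=list)

def prohibited_systems(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Surface systems caught by the Act's prohibitions, applicable from February 2025."""
    return [r for r in inventory if r.risk_tier is AIActRiskTier.PROHIBITED]
```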

Ahead of the August 2026 compliance deadline for high-risk and transparency-risk AI systems, businesses must enhance their AI governance and risk management frameworks. Many firms recognise that current frameworks require bolstering to scale securely and sustainably – this is especially true, though not exclusively, for GenAI. Key challenges include ensuring transparency, explainability, bias mitigation, fairness, and data quality – all central to AI Act compliance.1 Effective AI governance will also demand more advanced technological controls, such as embedded code-level guardrails for bias mitigation, explainability, or audit trails.
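
By way of illustration, one simple form of code-level guardrail is an audit-trail wrapper that records every inference call to a central log. The decorator below is a minimal, hypothetical sketch – the function names and log fields are assumptions, and a production version would need redaction of personal data, tamper-evident storage, and integration with the firm's logging stack:

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(system_id: str):
    """Wrap a model-inference function so every call leaves an audit-trail entry."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_log.info(json.dumps({
                "system_id": system_id,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "function": fn.__name__,
                "inputs": repr((args, kwargs)),  # redact personal data before logging in practice
                "output": repr(result),
            }))
            return result
        return wrapper
    return decorator

@audited(system_id="credit-scoring-v2")  # hypothetical system identifier
def score_applicant(features: dict) -> float:
    return 0.42  # placeholder for the real model call

score_applicant({"income": 52_000, "tenure_years": 3})
```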

However, a further change in perspective is needed. The AI Act, and regulators more generally, are concerned not only with the inherent risks of AI models but also with those arising from the wider AI systems in which they operate. This encompasses elements such as user interface design, incentives for effective human oversight, and the quality of user training. As AI systems, particularly those powered by GenAI, become increasingly integrated into various business functions and accessible to a wider range of personnel, ensuring that risk management approaches address the interplay between models and systems is paramount. They should also give greater prominence to specific domains such as data protection, privacy, and impacts on individuals' fundamental rights, which may not be focal points in traditional model risk management frameworks.

Achieving this demands operational and cultural shifts to create effective collaboration across risk, legal, compliance, technology, and business teams. This should be underpinned by robust AI training programmes, not least to meet the AI Act's general requirement for AI literacy from February 2025.

The Act's implications extend beyond mere compliance. Its extraterritorial reach presents internationally active firms operating within the bloc with three strategic choices: adopt the Act as their global standard, implement tailored EU-specific solutions, or limit the use of high-risk AI systems. While the Act is currently regarded as a global benchmark, regulatory frameworks in the US and the UK may still evolve, influenced by factors such as geopolitical considerations and the race to attract capital and AI investment. Governance models that can anticipate and accommodate these cross-jurisdictional changes and differences will be essential.

The Act also imposes direct compliance obligations on EU-active AI vendors, creating both opportunities and challenges. Vendors will have to shoulder greater liability for their models and systems and boost transparency and access for downstream users. This will support financial services firms' compliance efforts and third-party risk management. However, these obligations may also prompt vendors to modify products, restrict access, adjust terms, or even exit certain markets. Compliance deadlines for providers of General Purpose AI (GPAI) models, which underpin many GenAI systems, are as early as August 2025. Proactive engagement with vendors to anticipate potential disruptions is critical. Exploring alternatives may include assessing different providers, considering more specialised AI solutions, or pursuing in-house development. Selecting the optimal approach requires a thorough evaluation of costs, functionality, internal capabilities, and the regulatory implications of becoming an AI developer under the Act.

Under the microscope: financial services regulators intensify scrutiny of AI
 

While the AI Act is significant, it is not the only regulatory show in town. Existing technology-neutral financial services frameworks in both the EU and UK – encompassing conduct, prudential and model risk management, operational resilience, and financial stability – remain critical for regulators assessing AI solutions. 

In the EU, the Act complements rather than replaces existing regulations. For example, the European Securities and Markets Authority (ESMA) has issued guidance on AI in investment services, an area not classified as high-risk under the Act.2 This guidance emphasises alignment with the second Markets in Financial Instruments Directive (MiFID II), setting expectations for governance, conduct, and prioritisation of clients’ best interests. For 2025, ESMA has prioritised ensuring investor protection and market integrity when firms use AI. Similarly, the European Insurance and Occupational Pensions Authority (EIOPA) is developing an AI framework for insurance to support national supervisory efforts.3

In the UK, where AI adoption is surging – 75% of firms now utilise it, up from 58% in 2022 – policymakers opted against formally defining AI.4 This offers flexibility but places the onus on firms to establish their own definitions to ensure robust governance and risk management. Many are aligning with the EU AI Act's definition as a pragmatic starting point. As under the Act, maintaining inventories of AI systems and risk classifications based on materiality is critical, even though in the UK these are currently mandatory only for banks, under the Prudential Regulation Authority's Model Risk Management principles. These principles – expected to be extended to large insurers soon – along with the Consumer Duty, operational resilience frameworks, and the Senior Managers and Certification Regime, will form the bedrock of the UK's outcome-based approach to AI supervision.

Yet, in both the UK and the EU, industry understanding of what "good" looks like in an AI context under these technology-neutral financial services frameworks remains immature. In the absence of imminent regulatory guidance, firms should tailor their interpretation of these requirements to individual AI use cases, risk appetite, in-house expertise, and overall AI maturity.

This is particularly pressing in relation to AI third-party risk, a policy area likely to dominate the regulatory and supervisory agenda of the EU, the UK, and international bodies such as the Financial Stability Board. Reliance on a limited number of AI, cloud, and data vendors significantly amplifies the impact of operational disruptions, raising concerns about potential financial stability risks.5 Moreover, firms’ use of third-party “black boxes” is a leading cause of their limited understanding of AI systems, compared to those developed in-house.6
 

Figure 2: percentage of all third-party providers for cloud, model, and data

Source: Bank of England / Financial Conduct Authority7

To mitigate escalating AI third-party risk, financial services firms must diligently vet suppliers and implement robust controls for testing, change management, and ongoing monitoring. Critically, contracts and risk management frameworks must evolve in lockstep with regulatory demands and supervisory expectations. This may necessitate a comprehensive review – and renegotiation – of existing agreements. 
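
As one illustration of ongoing monitoring, firms can track whether a vendor model's output distribution has drifted from an agreed baseline – a possible signal of an unannounced model change that should trigger the change-management process. The sketch below uses the Population Stability Index (PSI), a common drift measure; the choice of metric, the bin count, and the 0.25 alert threshold are illustrative assumptions rather than regulatory requirements:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a vendor model's baseline and current output distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log(0).
    p = np.clip(expected / expected.sum(), 1e-6, None)
    q = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)  # scores observed during vendor due diligence
current_scores = rng.beta(3, 4, size=10_000)   # scores observed in production this month
if population_stability_index(baseline_scores, current_scores) > 0.25:  # assumed threshold
    print("Material drift detected - escalate to change management")
```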
 

Open finance: from a defensive stance to data-driven growth
 

The UK and EU are laying the groundwork for "open finance" regimes to foster competition, innovation, and inclusion. Central to this effort are the EU Financial Data Access regulation and the UK Data Use and Access (DUA) Bill, both expected to become law in 2025. These measures, alongside the other UK Smart Data schemes and the EU Data Act, will expand open datasets across numerous economic sectors.

This interconnected data landscape offers significant promise for growth. The UK open banking ecosystem is currently valued at over £4 billion.8 Open finance could magnify this impact. For instance, firms could integrate financial data with energy consumption to design personalised green finance products or combine property and financial data to improve mortgage assessments.

However, implementing open finance will be complex and take time. Establishing fair compensation for third-party data access – a principle enshrined in the EU and UK frameworks but absent in open banking – will be particularly contentious. While regulators have set high-level principles, specifics remain unclear, potentially necessitating further intervention. Ensuring interoperability across diverse data ecosystems and building the necessary data-sharing infrastructure will demand significant investment. 

The risk of fragmentation is significant, particularly as the EU is not mandating common Application Programming Interface standards. Businesses must also navigate differing implementation timelines. The EU currently favours commencing with consumer data for credit agreements, accounts, savings, and motor insurance, as early as 2027/28. The UK is expected to prioritise consumer propositions and use cases based on cost-benefit analyses, with lending to small and medium-sized enterprises and consumer savings as potential early priorities.9

A proactive approach, aligning open finance initiatives with broader digital transformation goals and leveraging synergies with cloud adoption and AI investments, unlocks greater long-term advantages and cost efficiencies. Developing a robust data strategy and prioritising high-impact use cases will inform technology infrastructure design, analytical capabilities, and the target operating model necessary to maximise value.
 

The trust imperative: balancing innovation with data protection and ethics
 

The convergence of AI and personal data brings data governance, protection, and ethics into sharp focus. The EU AI Act exemplifies this, mandating General Data Protection Regulation compliance as a prerequisite for conformity. In the UK, firms perceive four of their top five AI risks as data-centric: privacy and protection, quality, security, and bias.10 This aligns with consumer sentiment. Deloitte research reveals that 66% of European consumers prioritise data privacy and security as paramount to trusting GenAI.
 

Figure 3: factors influencing trust in Generative AI – Deloitte survey

Source: Deloitte11

Looking ahead, both EU and UK data protection authorities will release further guidance in 2025, clarifying expectations for AI development and deployment under data protection laws. The DUA Bill will also make targeted amendments to the UK data protection regime, seeking to balance robust regulatory oversight with an environment conducive to innovation. Some FS authorities are also increasingly attentive to data ethics, as evidenced by recent publications from the Danish Financial Supervisory Authority and EIOPA.12,13 These highlight risks such as financial exclusion, and the need for ethical frameworks to ensure data use aligns both with regulations and societal – and firms’ own – values.

To navigate this evolving landscape successfully, firms should capitalise on existing data protection leading practices and tools, while engaging in effective horizon scanning. Robust data governance should be woven into the entire AI lifecycle. Prioritising data protection tenets – including data quality, privacy safeguards, transparency, clear communication, and user-friendly systems and dashboards that support individual data rights – is not only a compliance checkbox but a strategic imperative, particularly as firms embrace open finance. For banks, this aligns with BCBS 239 data management principles, enhancing the strategic benefit and operational focus of remediation efforts. Finally, investing in Privacy Enhancing Technologies (PETs) such as homomorphic encryption and federated learning can often bolster privacy and security while facilitating innovation.
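
To illustrate the federated learning idea, the sketch below shows the core aggregation step of the widely used FedAvg algorithm: each institution trains on its own data and shares only model parameters, which a coordinator combines weighted by local dataset size. This is a simplified illustration of the principle, not a production protocol, which would add safeguards such as secure aggregation or differential privacy:

```python
import numpy as np

def federated_average(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """FedAvg aggregation: combine locally trained weights without pooling raw data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three institutions train locally and share only their parameter vectors.
local_weights = [np.array([0.20, 1.10]), np.array([0.40, 0.90]), np.array([0.30, 1.00])]
local_sizes = [1_000, 5_000, 2_000]
global_weights = federated_average(local_weights, local_sizes)
print(global_weights)  # the coordinator never sees any customer records
```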

Ultimately, trust will be a key commercial differentiator. Deloitte's research shows that high-trust firms significantly outperform peers, achieving up to four times the market value.14 Compliance thus evolves from a constraint into a catalyst for innovation, benefiting both society and the bottom line.

  1. Deloitte, 2023 EMEA Model Risk Management Survey, October 2023, available at: https://www2.deloitte.com/dk/da/pages/risk/articles/EMEA-model-risk-management-survey1.html
  2. ESMA, Public statement on the use of Artificial Intelligence (AI) in the provision of retail investment services, May 2024, available at: https://www.esma.europa.eu/sites/default/files/2024-05/ESMA35-335435667-5924__Public_Statement_on_AI_and_investment_services.pdf
  3. EIOPA, Revised Single Programming Document 2025-2027, September 2024, available at: https://www.eiopa.europa.eu/document/download/d12a299e-4892-4908-a452-b1523c3a4255_en?filename=EIOPA-revised-single-programming-document-2025-2027.pdf
  4. BoE/FCA, Artificial Intelligence in UK financial services – 2024, November 2024, available at:  https://www.bankofengland.co.uk/report/2024/artificial-intelligence-in-uk-financial-services-2024
  5. FSB, The Financial Stability Implications of Artificial Intelligence, November 2024, available at: https://www.fsb.org/2024/11/the-financial-stability-implications-of-artificial-intelligence/ 
  6. Ibid 4.
  7. Ibid 4.
  8. Open Banking, CMA confirms full completion of Open Banking Roadmap, unlocking a new era of financial innovation, September 2024, available at: https://www.openbanking.org.uk/news/cma-confirms-full-completion-of-open-banking-roadmap-unlocking-a-new-era-of-financial-innovation/#:~:text=With%20an%20expanding%20user%20base,open%20banking%20on%20the%20economy
  9. FCA, FS21/7 Open Finance, March 2021, available at: https://www.fca.org.uk/publication/feedback/fs21-7.pdf 
  10. Ibid 4.
  11. Deloitte, Digital Transformation: Trust in Generative AI, available at: https://www2.deloitte.com/us/en/insights/topics/digital-transformation/trust-in-generative-ai-in-europe.html
  12. Danish FSA, Report on data ethics when using AI in the financial sector, September 2024, available at: https://www.dfsa.dk/Media/638621484289878095/Report%20on%20data%20ethics%20using%20AI%20in%20the%20financial%20sector.pdf
  13. EIOPA, Data as a driver of inclusion in insurance, May 2024, available at: https://www.eiopa.europa.eu/document/download/1b64ff3b-d200-4beb-bcb8-a6da156bf123_en?filename=Data%20as%20a%20driver%20of%20inclusion%20in%20insurance.pdf 
  14. Deloitte Digital, “TrustID™ Create competitive advantage for loyalty through trust”, accessed in December 2024, available at: https://www.deloittedigital.com/us/en/accelerators/trustid.html 
