
EU AI Act: forging a strategic response

At a glance:
 

  • The EU AI Act is now in force. While compliance will undoubtedly be a priority, it is equally critical for organisations involved in high-risk or general-purpose AI (GPAI) to reflect on the strategic implications of the AI Act. 
  • Using an illustrative case study, this article examines four key areas for boards and senior leaders to consider: i) product strategy and governance, ii) compliance approaches, iii) procurement and target markets, and iv) horizon scanning and supervisory engagement.
  • Prioritising risk and compliance in AI design can reduce regulatory-driven delays and time to market. The Act’s extraterritorial reach could affect multinationals’ EU market strategies; consequently, some firms have adopted it as an internal global standard to simplify compliance across their operational footprint.
  • The Act’s interplay with existing regulations creates a complex compliance challenge. Adopting a holistic strategy that leverages synergies and addresses dependencies can help streamline organisations’ responses to intersecting requirements and optimise compliance efforts.
  • AI procurement requires careful evaluation. Using third-party vendors can simplify compliance (though not reduce accountability) but may limit flexibility. The optimal approach will balance regulatory factors, business needs, risk tolerance, and internal capabilities.
  • AI vendors also face an important decision in defining their target market: whether to permit use of their systems for downstream high-risk applications – accepting the associated regulatory responsibilities – and, if so, under what terms, and at what price.
  • Engaging early with the new, complex matrix of EU and national authorities overseeing the Act will be key to building positive supervisory relationships. Robust horizon-scanning will be essential to stay ahead of emerging secondary legislation and guidance.
  • The EU AI Act marks a new era for AI regulation, demanding a considered and comprehensive response to unlock the long-term value of AI. 


Introduction
 

The EU AI Act, the first comprehensive legislation specifically addressing AI, entered into force on 1st August 2024, after more than three years of intense debate. This landmark legislation, applicable to organisations using AI – across all sectors – in the EU, marks a significant shift in how AI is regulated. In the absence of comparable regulation elsewhere, many large global firms are already using it as a benchmark for compliance.

With a two-year implementation period, culminating on 2nd August 20261, businesses face a tight deadline to adapt. Some key provisions, including prohibitions for select AI applications and requirements for GPAI, will come into play even earlier, at six2 and twelve3 months, respectively.

This pressing timeline highlights the urgent need for businesses to understand and address the Act's potential impact on their organisations. Beyond the substantial compliance efforts required, companies developing or using high-risk applications or GPAI models face critical decisions that will shape their AI strategies and, in some cases, business models. These decisions will reverberate across areas such as product governance and portfolio management, procurement strategies, target market selection, compliance frameworks, and approaches to supervisory engagement.
As implementation of the AI Act gets into full swing, this article uses a case study to explore some of the key issues facing boards and senior leaders.

Note: The article provides some background context to aid the reader's understanding of its key points, but overall, it assumes a foundational understanding of the key elements of the AI Act. For a comprehensive overview of these elements, please consult our June 2024 blog, which examined the final text approved by the EU institutions.


The first step: identifying your role in the AI supply chain


AI's development, distribution, and deployment involve intricate supply chains. Rather than placing the entire regulatory responsibility on the final deployer, the AI Act defines specific roles and responsibilities for each of the entities involved. These are collectively known as "AI operators" (see Figure 1).

A critical first step for organisations is therefore to determine their role within the supply chain of GPAI and high-risk AI systems. This understanding is a prerequisite to identifying and evaluating the potential implications of the applicable requirements in the AI Act.

Figure 1 – Key operators in the AI supply chain as defined in the AI Act

For clarity and to emphasise the core implications, our case study below focuses solely on organisations acting as AI providers and deployers.
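
Some organisations find it useful to encode a first-pass version of this role mapping in their governance tooling. The Python sketch below is illustrative only: it compresses the Act’s operator definitions into a few simplified yes/no questions, all names are our own, and it is no substitute for legal analysis of the Act’s actual definitions.

```python
from enum import Enum

class OperatorRole(Enum):
    """Simplified AI Act operator roles (illustrative labels, not legal tests)."""
    PROVIDER = "provider"        # develops an AI system/model and places it on the EU market under its own name
    DEPLOYER = "deployer"        # uses an AI system under its own authority
    IMPORTER = "importer"        # EU entity placing a third-country provider's system on the EU market
    DISTRIBUTOR = "distributor"  # makes a system available on the EU market without modifying it

def classify_operator(develops_and_markets_under_own_name: bool,
                      uses_in_own_operations: bool,
                      imports_third_country_system: bool,
                      distributes_unmodified_system: bool) -> set[OperatorRole]:
    """Very rough first-pass mapping; an organisation can hold several roles at once."""
    roles: set[OperatorRole] = set()
    if develops_and_markets_under_own_name:
        roles.add(OperatorRole.PROVIDER)
    if uses_in_own_operations:
        roles.add(OperatorRole.DEPLOYER)
    if imports_third_country_system:
        roles.add(OperatorRole.IMPORTER)
    if distributes_unmodified_system:
        roles.add(OperatorRole.DISTRIBUTOR)
    return roles

# A firm like CleverBank in the case study below builds on a third-party GPAI
# model (downstream provider) and runs the resulting system itself (deployer).
print(classify_operator(True, True, False, False))  # prints provider and deployer roles
```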


A high-risk case study: AI for creditworthiness assessments


Globally, leading innovators are redefining AI as a key driver of future growth strategies, moving beyond its use as a mere accelerator of existing business processes. For example, in retail banking there is a growing emphasis on harnessing the potential of AI to elevate credit assessments. Some banks are using AI to dynamically adjust pricing, offering preferential lending rates to specific customer segments, based on a more comprehensive evaluation of their credit risk4.

However, the AI Act classifies AI systems used for creditworthiness assessment as high-risk applications, subjecting them to stringent compliance requirements (see Figure 2).

Figure 2 – AI Act risk-based classification and regulation of AI systems and models

Our case study examines two fictional global organisations operating in the EU: CleverBank and DataMeld. CleverBank uses an AI-powered loan approval system, incorporating a GPAI model from DataMeld, a US company offering its AI models in the EU. We assume that DataMeld would be regulated as a GPAI provider under the AI Act, while CleverBank would be regulated as both a downstream AI provider and an AI deployer (see Figure 3).

Figure 3 – Illustrative case study

Strategic considerations for boards and senior leaders


We have identified four focus areas for boards and senior leaders developing their organisations' AI strategies. It is important to recognise, however, that these areas are interconnected and will influence one another.


1. Product strategy and governance


Overview: 

  • Product governance – The AI Act’s compliance requirements, including establishing a comprehensive Quality Management System, will affect the entire AI product lifecycle – from the initial design to post-market monitoring. This will necessitate a review of product governance processes to incorporate early and regular risk and compliance assessments, ensuring alignment with the new requirements. Prioritising compliance-driven AI design and development from the outset will minimise potential delays, costly redesign, or regulatory roadblocks later. 
  • Time to market – The Act’s strict requirements, including the conformity assessments to be completed before a product is marketed or used, are nevertheless likely to extend the development timelines for AI systems. Organisations can stay ahead of the curve by integrating these regulatory factors into early-stage feasibility assessments and planning (an illustrative pre-market release gate is sketched after this list). 
  • Market access – Any company marketing or using AI in the EU must comply with the AI Act, regardless of where it is based. This extraterritorial reach presents multinational companies with three potential options. They can develop AI solutions specifically for the EU market; adopt the AI Act as a global standard (while navigating potential regulatory differences elsewhere); or restrict their high-risk and GPAI offerings within the EU. Each option carries different implications for market reach, development costs, and compliance, requiring careful evaluation to determine the optimal path.
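
To make the compliance-by-design point concrete, the sketch below shows one way a provider might gate releases of a high-risk system on the presence of key AI Act artefacts. This is a minimal illustration: the artefact names paraphrase themes from the Act’s requirements and are not an authoritative or exhaustive checklist.

```python
# Illustrative pre-market release gate for a high-risk AI system.
# Artefact names paraphrase AI Act themes (risk management, data governance,
# technical documentation, conformity assessment, EU database registration);
# they are indicative only, not an authoritative checklist.

REQUIRED_ARTEFACTS = [
    "risk_management_file",
    "data_governance_record",
    "technical_documentation",
    "conformity_assessment",
    "eu_database_registration",
    "post_market_monitoring_plan",
]

def release_gate(completed_artefacts: set[str]) -> tuple[bool, list[str]]:
    """Return whether release can proceed and which artefacts are still missing."""
    missing = [a for a in REQUIRED_ARTEFACTS if a not in completed_artefacts]
    return (not missing, missing)

ok, gaps = release_gate({"risk_management_file", "technical_documentation"})
if not ok:
    print("Release blocked; outstanding artefacts:", ", ".join(gaps))
```

Wiring a check like this into early-stage product gates, rather than running it just before launch, is one way to surface regulatory gaps while redesign is still cheap.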

Case study – Examples only

2. Regulatory compliance approaches


Overview: 

  • AI Act interplay with other regulations – The AI Act is designed to work alongside other existing regulations, such as data protection laws and sector-specific frameworks (e.g., customer protection requirements) for industries like financial services or online platforms. The specific requirements for an AI system vary depending on its intended use and design. Therefore, it is important that organisations do not just solve for one regulation in isolation. They should consider the AI Act together with all other applicable regulations. This holistic approach is essential to assess the aggregate impact of regulations and for developing an effective compliance strategy. Such a strategy should be supported by investment in appropriate skills, resources, and technology and should incentivise collaboration across regulatory, technology, and business teams.
  • Compliance dependencies and synergies – The AI Act's interaction with other regulations creates both dependencies and opportunities for streamlining compliance. For example, demonstrating compliance with the EU General Data Protection Regulation (GDPR) and copyright law is a prerequisite for complying with the AI Act. However, organisations already subject to strict EU oversight, such as financial institutions, benefit from streamlined requirements in areas like risk management. Skilfully navigating intersecting regulatory requirements – especially identifying shared artefacts (e.g., data points or risk assessments) and ensuring consistent regulatory communications – is crucial to achieving both cost-effective and robust compliance (see the illustrative mapping after this list).
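
A practical starting point is a simple artefact-to-regulation map that makes shared evidence visible. The mapping below is a hypothetical sketch: the artefact and regulation labels are our own illustrations, and any overlaps shown would need validating against an organisation’s actual obligations.

```python
# Hypothetical map from compliance artefacts to the regulations they can
# (partly) evidence. Overlaps flag reuse opportunities; single-regulation
# artefacts flag work that cannot be amortised across frameworks.

ARTEFACT_REGULATIONS = {
    "data_protection_impact_assessment": {"GDPR", "AI Act"},
    "training_data_provenance_log":      {"AI Act", "Copyright"},
    "model_risk_assessment":             {"AI Act", "Sectoral (e.g. banking)"},
    "incident_response_records":         {"AI Act", "Sectoral (e.g. banking)"},
    "records_of_processing":             {"GDPR"},
}

def reusable_artefacts(regs: set[str]) -> list[str]:
    """Artefacts that serve every regulation in `regs` simultaneously."""
    return [artefact for artefact, covered in ARTEFACT_REGULATIONS.items()
            if regs <= covered]

print(reusable_artefacts({"GDPR", "AI Act"}))
# -> ['data_protection_impact_assessment']
```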

Case study – Examples only

3. Procurement and target market selection
 

Overview:

  • Procurement strategies – The decision to build or buy an AI system takes on new strategic significance under the AI Act. Building offers greater control but subjects organisations to the stricter rules for AI “providers”, including conformity declarations and registration in the EU AI database. Buying from a vendor as an AI “deployer” can shift much of the provider compliance burden to the vendor, though not necessarily the reputational risk. However, it may limit flexibility, especially since substantial customisation could lead deploying firms to be reclassified as “providers”. The optimal approach requires balancing the Act's regulatory implications with business needs, risk appetite, and internal capabilities. 
  • Defining target markets – The AI Act mandates close collaboration between upstream providers of AI systems, including GPAI, and downstream providers integrating these systems into high-risk applications. This collaboration should facilitate conformity assessments and responses to supervisory requests. Providers can mitigate their liability by prohibiting the use of their systems in downstream high-risk applications. This raises important questions about risk appetite and commercial strategy. For providers, determining whether to permit high-risk uses – and under what conditions – becomes paramount. These conditions may encompass factors such as target market, contractual frameworks, and pricing strategies (a simple encoding of such terms is sketched after this list).
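
For upstream providers, these commercial choices ultimately crystallise into licence terms. The sketch below is a purely hypothetical encoding of such terms: the field names and the high-risk use categories are our own illustrations, not terminology from the Act.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of an upstream provider's licence position on
# downstream high-risk use. All field names and categories are illustrative.

@dataclass
class ModelLicenceTerms:
    permits_high_risk_use: bool
    permitted_high_risk_categories: set[str] = field(default_factory=set)
    requires_information_sharing: bool = True   # e.g. to support conformity assessments
    price_tier: str = "standard"

def downstream_use_allowed(terms: ModelLicenceTerms,
                           use_category: str,
                           is_high_risk: bool) -> bool:
    """Check a downstream integration request against the licence terms."""
    if not is_high_risk:
        return True
    return (terms.permits_high_risk_use
            and use_category in terms.permitted_high_risk_categories)

# A DataMeld-style provider that permits creditworthiness use at a premium tier:
terms = ModelLicenceTerms(True, {"creditworthiness_assessment"}, True, "premium")
print(downstream_use_allowed(terms, "creditworthiness_assessment", is_high_risk=True))  # True
print(downstream_use_allowed(terms, "recruitment_screening", is_high_risk=True))        # False
```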

Case study – Examples only

4. Horizon scanning and supervisory engagement


Overview:

  • A new and complex supervisory framework – The AI Act introduces a complex, multi-layered oversight framework. At the EU level, the new AI Office will oversee GPAI providers. Member States are responsible for designating national authorities to supervise AI systems. Some Member States, like Spain5, will establish new dedicated AI authorities. Others, like the Netherlands, will leverage existing bodies, particularly data protection authorities and agencies already tasked with upholding fundamental rights. These designations may vary based on the AI system's specific application (e.g., recruitment versus credit scoring). Organisations will therefore need to monitor national designations closely to identify the relevant authorities for their AI systems. A proactive approach will also facilitate positive engagement with these supervisory bodies.
  • Horizon scanning capabilities – Although the AI Act's primary text is finalised, practical implementation depends on the development of substantial secondary legislation, guidance, and harmonised technical standards6. National AI Act authorities may also publish information on their supervisory approaches. EU authorities will also continue to issue clarifications on how the AI Act interacts with existing regulatory frameworks, whether data protection, copyright, or sector-specific legislation. Robust horizon-scanning capabilities to monitor these developments will be key to timely strategic and operational responses, ensuring both compliance and the ability to capitalise on AI opportunities (a minimal monitoring sketch follows this list). 
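
At the tooling level, even a small change-detection script over official sources can seed a horizon-scanning process. The sketch below hashes tracked pages and reports changes between runs. It is a minimal illustration: the URLs are hypothetical placeholders to be replaced with the relevant EU and national authority pages, the state file name is our own, and a production capability would add scheduling, parsing, and triage.

```python
import hashlib
import json
import urllib.error
import urllib.request
from pathlib import Path

# Placeholder sources - replace with the actual EU and national authority
# pages relevant to your AI systems (these URLs are hypothetical).
SOURCES = {
    "ai_office_updates": "https://example.org/eu-ai-office/updates",
    "national_authority": "https://example.org/member-state/ai-supervision",
}

STATE_FILE = Path("horizon_scan_state.json")

def scan() -> list[str]:
    """Return the names of tracked sources whose content changed since the last run."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    changed = []
    for name, url in SOURCES.items():
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                digest = hashlib.sha256(resp.read()).hexdigest()
        except urllib.error.URLError as exc:
            print(f"Could not fetch {name}: {exc}")  # skip unreachable sources
            continue
        if state.get(name) != digest:
            changed.append(name)
            state[name] = digest
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return changed

if __name__ == "__main__":
    for source in scan():
        print(f"Change detected: {source} - review for new guidance")
```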

Case study – Examples only

Developing a strategic response: final reflections

 

The AI Act ushers in a new era of AI governance, demanding a deliberate and strategic response from organisations. The finer points of compliance are still being clarified through secondary legislation, guidance, and harmonised standards. However, early engagement with the Act's implications is crucial for organisations to navigate this new regulatory landscape successfully.

Fulfilling regulatory requirements alone does not guarantee the ethical development and use of AI. Organisations will still need robust ethical frameworks, particularly where regulations are open to interpretation or require balancing competing priorities, such as privacy and accuracy. Even where rules are clear, compliant policies may not always align with broader ethical considerations. For example, fully automated decisions or extensive use of personal data, even when legally permissible, might not be ethically acceptable to customers or society, or might not align with an organisation’s own position on privacy. 

Nevertheless, compliance must underpin any ethical framework. The AI Act, designed to foster trustworthy AI, offers a powerful blueprint for integrating ethics into every stage of the AI lifecycle. This begins with establishing clear ownership and accountability for each AI project and viewing risk and compliance as enablers of a sustainable and responsible AI strategy.

Creating and maintaining comprehensive inventories of AI models and systems is the immediate priority. These inventories will form the basis for initial risk assessments, mapping each system against the AI Act's requirements and identifying areas requiring immediate attention. More fundamentally, the AI Act underscores the need for investment in people, skills, and technology across technical, business, risk, and compliance teams. 
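
In practice, an inventory entry can start as a simple record per model or system, mapped to a provisional AI Act risk category with its outstanding gaps. The schema below is a minimal hypothetical sketch: the field names, category labels, and example entry are our own illustrations, and real inventories typically live in dedicated governance platforms.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    """Provisional AI Act-style risk tiers (illustrative labels only)."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    TRANSPARENCY = "transparency_obligations"
    MINIMAL = "minimal"
    GPAI = "general_purpose"

@dataclass
class InventoryEntry:
    system_name: str
    owner: str                     # accountable business owner
    intended_purpose: str
    role: str                      # e.g. "provider", "deployer", or both
    risk_category: RiskCategory
    open_gaps: list[str] = field(default_factory=list)  # requirements not yet met

inventory = [
    InventoryEntry("loan_approval_v2", "Retail Credit",
                   "creditworthiness assessment", "provider+deployer",
                   RiskCategory.HIGH_RISK,
                   ["conformity assessment", "EU database registration"]),
]

# First-pass triage: surface high-risk systems with outstanding gaps.
for entry in inventory:
    if entry.risk_category is RiskCategory.HIGH_RISK and entry.open_gaps:
        print(f"{entry.system_name}: attention required -> {', '.join(entry.open_gaps)}")
```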

Upskilling, including a thorough understanding of the key elements and implications of the AI Act, is also crucial for boards and senior management. This knowledge will empower them to scrutinise their organisations' AI strategies more effectively, ensuring compliance while also fostering a culture of responsible AI. Such a culture, along with the right skills and capabilities, will be pivotal in unlocking the long-term value of AI transformation. 

______________________________________________________________________

Footnotes

1 Derogations and extensions apply for AI models and systems already placed on the market or put into service before the entry into force of the EU AI Act. In addition, a different, longer timeline (36 months) applies for high-risk systems in Annex I of the EU AI Act – i.e., those used as a safety product or a component of a safety product in certain industries, or subject to specific harmonised EU product safety law. 

2 2 February 2025

3 2 August 2025

4 https://www.deloitte.com/content/dam/assets-shared/docs/industries/financial-services/2024/changing-the-game-the-impact-of-ai-on-bcm-sector.pdf

5 https://boe.es/diario_boe/txt.php?id=BOE-A-2023-18911

6 At the behest of the Commission, the European Standardisation Organisations (ESOs) are developing standards – known as ‘harmonised standards’ – in relation to requirements for GPAI and high-risk systems. Although harmonised standards are industry-led and voluntary, once adopted by the EU Commission and published in the Official Journal, their application provides organisations with a presumption of compliance with the relevant obligations of the AI Act.