The EU AI Act, the first comprehensive legislation specifically addressing AI, entered into force on 1st August 2024 after more than three years of intense debate. This landmark legislation, applicable to organisations across all sectors using AI in the EU, marks a significant shift in how AI is regulated. In the absence of comparable regulation elsewhere, many large global firms are already using it as a benchmark for compliance.
With a two-year implementation period culminating on 2nd August 2026[1], businesses face a tight deadline to adapt. Some key provisions, including prohibitions on certain AI applications and requirements for general-purpose AI (GPAI) models, will come into play even earlier, at six[2] and twelve[3] months respectively.
This pressing timeline highlights the urgent need for businesses to understand and address the Act's potential impact on their organisations. Beyond the substantial compliance efforts required, companies developing or using high-risk applications or GPAI models face critical decisions that will shape their AI strategies and, in some cases, business models. These decisions will reverberate across areas such as product governance and portfolio management, procurement strategies, target market selection, compliance frameworks, and approaches to supervisory engagement.
As the AI Act’s implementation gets into full swing, this article uses a case study to explore some of the key issues facing boards and senior leaders.
Note: The article provides some background context to aid the reader's understanding, but overall it assumes a foundational knowledge of the key elements of the AI Act. For a comprehensive overview of these elements, please consult our June 2024 blog, which examined the final text approved by the EU institutions.
AI's development, distribution, and deployment involve intricate supply chains. Rather than placing the entire regulatory responsibility on the final deployer, the AI Act defines specific roles and responsibilities for each of the entities involved. These are collectively known as "AI operators" (see Figure 1).
A critical first step for organisations is therefore to determine their role within the supply chain of GPAI and high-risk AI systems. This understanding is a prerequisite for effectively identifying and evaluating the potential implications of the applicable requirements of the AI Act.
Figure 1 – Key operators in the AI supply chain as defined in the AI Act
For clarity and to emphasise the core implications, our case study below focuses solely on organisations acting as AI providers and deployers.
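To make this role-determination step more concrete, the sketch below illustrates one way an organisation might record, for each AI system it builds or buys, the operator roles it plays and the broad obligation areas that typically follow from each role. The role names reflect the AI Act's terminology, but the obligation areas listed are simplified, illustrative summaries and assumptions for triage purposes, not an authoritative statement of the Act's requirements.

```python
# Illustrative sketch only: a simple mapping from AI Act operator roles to broad,
# simplified obligation areas, used to triage which requirements may be in scope.
# The obligation areas below are assumptions for illustration, not legal advice.
from dataclasses import dataclass, field

ROLE_OBLIGATION_AREAS = {
    "provider": [
        "risk management system",
        "data governance",
        "technical documentation",
        "conformity assessment and CE marking",
        "post-market monitoring",
    ],
    "deployer": [
        "use in accordance with instructions",
        "human oversight",
        "relevance of input data",
        "logging and monitoring",
    ],
    "importer": ["verification of the provider's conformity assessment and documentation"],
    "distributor": ["verification of CE marking and required documentation"],
}

@dataclass
class AISystemRecord:
    name: str
    description: str
    roles: list[str] = field(default_factory=list)  # e.g. ["provider", "deployer"]

    def obligation_areas(self) -> set[str]:
        """Union of the illustrative obligation areas for every role the organisation plays."""
        areas: set[str] = set()
        for role in self.roles:
            areas.update(ROLE_OBLIGATION_AREAS.get(role, []))
        return areas

# Example: a bank that builds a lending system on a third-party model and also uses it
# internally may play both the provider and deployer roles for the same system.
loan_system = AISystemRecord(
    name="AI-assisted loan approval",
    description="Creditworthiness assessment for retail lending",
    roles=["provider", "deployer"],
)
print(sorted(loan_system.obligation_areas()))
```

The value of such a record lies less in the tooling than in forcing an explicit, documented decision about which role the organisation plays for each system in its portfolio.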
Globally, leading innovators are redefining AI as a key driver of future growth strategies, moving beyond its use as a mere accelerator of existing business processes. For example, in retail banking there is a growing emphasis on harnessing the potential of AI to enhance credit assessments. Some banks are using AI to dynamically adjust pricing, offering preferential lending rates to specific customer segments based on a more comprehensive evaluation of their credit risk[4].
However, the AI Act classifies AI systems used for creditworthiness assessment as high-risk applications, subjecting them to stringent compliance requirements (see Figure 2).
Figure 2 – AI Act risk-based classification and regulation of AI systems and models
Our case study examines two fictional global organisations operating in the EU: CleverBank and DataMeld. CleverBank uses an AI-powered loan approval system, incorporating a GPAI model from DataMeld, a US company offering its AI models in the EU. We assume that DataMeld would be regulated as a GPAI provider under the AI Act, while CleverBank would be regulated as both a downstream AI provider and an AI deployer (see Figure 3).
Figure 3 – Illustrative case study
We have identified four focus areas for boards and senior leaders developing their organisations' AI strategies. It is important to recognise, however, that these areas are interconnected and will influence one another.
The AI Act ushers in a new era of AI governance, demanding a deliberate and strategic response from organisations. The finer points of compliance are still being clarified through secondary legislation, guidance, and harmonised standards. However, early engagement with the Act's implications is crucial for organisations to navigate this new regulatory landscape successfully.
Fulfilling regulatory requirements alone does not guarantee the ethical development and use of AI. Organisations will still need robust ethical frameworks, particularly where regulations are open to interpretation or require balancing competing priorities, such as privacy and accuracy. Even where rules are clear, compliant policies may not always align with broader ethical considerations. For example, fully automated decisions or extensive use of personal data, even when legally permissible, might not always be acceptable to customers or society, or align with an organisation’s own position on privacy.
Nevertheless, compliance must underpin any ethical framework. The AI Act, designed to foster trustworthy AI, offers a powerful blueprint for integrating ethics into every stage of the AI lifecycle. This begins with establishing clear ownership and accountability for each AI project and viewing risk and compliance as enablers of a sustainable and responsible AI strategy.
Creating and maintaining comprehensive inventories of AI models and systems is the immediate priority. These inventories will form the basis for initial risk assessments, mapping each system against the AI Act's requirements and identifying areas requiring immediate attention. More fundamentally, the AI Act underscores the need for investment in people, skills, and technology across technical, business, risk, and compliance teams.
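As a minimal, hypothetical illustration of how such an inventory could drive an initial risk assessment, the sketch below records each system with an assumed risk tier and flags requirement areas for which no evidence has yet been gathered. The risk tiers mirror the Act's broad classification, but the checklist of requirement areas is a simplified placeholder that an organisation would replace with its own detailed mapping of the Act's obligations.

```python
# Illustrative sketch of an AI inventory used for initial triage against the AI Act.
# The risk tiers and the high-risk requirement checklist are simplified assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "limited / transparency obligations"
    MINIMAL = "minimal"

# Placeholder requirement areas for high-risk systems (illustrative, not exhaustive).
HIGH_RISK_AREAS = [
    "risk management",
    "data governance",
    "technical documentation",
    "human oversight",
    "accuracy, robustness and cybersecurity",
]

@dataclass
class InventoryEntry:
    system_name: str
    business_owner: str
    risk_tier: RiskTier
    evidence: dict[str, bool] = field(default_factory=dict)  # requirement area -> addressed?

    def gaps(self) -> list[str]:
        """Requirement areas with no recorded evidence, flagged for high-risk entries only."""
        if self.risk_tier is not RiskTier.HIGH_RISK:
            return []
        return [area for area in HIGH_RISK_AREAS if not self.evidence.get(area, False)]

inventory = [
    InventoryEntry("Loan approval scoring", "Retail Credit", RiskTier.HIGH_RISK,
                   evidence={"technical documentation": True}),
    InventoryEntry("Internal document search", "Operations", RiskTier.MINIMAL),
]

for entry in inventory:
    print(entry.system_name, "->", entry.gaps() or "no immediate gaps flagged")
```

The point is not the specific tooling but the discipline it encodes: every system has a named owner, an assigned classification, and a visible gap list that can be prioritised for remediation.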
Upskilling, including developing a thorough understanding of the key elements and implications of the AI Act, is also crucial for boards and senior management. This knowledge will empower them to scrutinise their organisations' AI strategies more effectively, ensuring compliance while also fostering a culture of responsible AI. Such a culture, along with the right skills and capabilities, will be pivotal in unlocking the long-term value of AI transformation.
______________________________________________________________________
Footnotes
[1] Derogations and extensions apply for AI models and systems already placed on the market or put into service before the entry into force of the EU AI Act. In addition, a different, longer timeline (36 months) applies to high-risk systems in Annex I of the EU AI Act, i.e. those used as a safety product or as a safety component of a product in certain industries, or subject to specific harmonised EU product safety law.
[2] 2 February 2025
[3] 2 August 2025
[5] https://boe.es/diario_boe/txt.php?id=BOE-A-2023-18911
[6] At the behest of the Commission, the European Standardisation Organisations (ESOs) are developing standards, known as ‘harmonised standards’, in relation to the requirements for GPAI and high-risk systems. Although harmonised standards are industry-led and voluntary, once adopted by the EU Commission and published in the Official Journal, their application provides organisations with a presumption of compliance with the relevant obligations of the AI Act.