On 29 March, the UK unveiled its eagerly awaited consultation on its strategic approach to Artificial Intelligence (AI) regulation. Led by the newly established Department for Science, Innovation and Technology (DSIT), the proposals are a key component of the UK’s ambition to become “a science and technology superpower by 2030”.
Central to the UK’s strategy is a principles-based AI regulatory framework applicable across all sectors. The framework will not be set in law, at least initially. The Government argues that a non-statutory approach will allow the UK to respond quickly to AI advancements while avoiding excessive regulation that could stifle innovation.
In this article, we look at some of the key elements of the framework and its potential ramifications for regulators and businesses.
A shared understanding of what AI covers is crucial to the success of any regulatory framework. The Government defines AI by reference to two defining characteristics: adaptivity and autonomy.
A characteristic-based approach helps the UK avoid the drawbacks of a rigid definition, which can be too narrow, too broad, or rapidly become obsolete. It also supports the Government’s strategy of adopting an outcome-focused approach that regulates the use of AI rather than the technology itself.
However, the framework allows individual regulators to interpret adaptivity and autonomy and develop sector-specific definitions of AI. This, of course, raises the risk of diverging interpretations and definitions across UK regulators – e.g., the Financial Conduct Authority (FCA) or Ofcom – which could create significant regulatory uncertainty for AI applications and for businesses operating across sectors. In response, the Government pledges to support coordination amongst regulators and to monitor the impact of any diverging interpretations on the application of the framework. The question is whether such coordination will be effective in the absence of a statutory duty for regulators to cooperate. As we will see, this challenge extends to the rest of the framework as well.
The proposed framework will be underpinned by five cross-sector principles – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress – to guide the responsible design, development and use of AI.
The Government’s preferred model for applying the principles is based on five elements:
The UK’s proposed framework undoubtedly provides some important advantages, but it also poses considerable practical challenges for both businesses and regulators.
We agree with the Government that a principles-based approach may allow for greater flexibility in responding to technological and market developments. We also welcome the emphasis on the outcomes of AI rather than a technical definition. Granting regulators autonomy to decide whether and how to apply the principles in their domains also has merits. In principle, it could foster greater proportionality and reduce compliance costs for firms, especially start-ups and new entrants.
The full benefits of the framework will only be realised with effective coordination between regulators, consistent interpretation of the principles and aligned supervisory expectations of firms. We have some reservations in this regard, as collaboration is currently constrained by the existing legal framework. For instance, the FCA and Ofcom must each interpret the five AI principles so as to best protect consumers under the specific sector laws they separately oversee. Their interpretations may therefore not always align, creating potential conflicts – for example, with data protection, as the ICO highlighted in its consultation response.1
Businesses need assurance that adhering to one regulator’s interpretation will not result in compliance challenges with another. In the absence of such clarity, organisations may be discouraged from investing in or adopting AI. The Government’s efforts to facilitate coordination, together with the cross-sector regulatory AI sandbox, may alleviate some of these issues. The Government has also confirmed it will leverage existing voluntary initiatives, such as the Digital Regulation Cooperation Forum (DRCF), to support regulatory dialogue.2 However, we believe regulators will continue to struggle with formal coordination until such responsibilities are mandated by law.
The framework is also unclear as to which regulator would take the lead in regulating general purpose or foundation AI models, or use cases in sectors that are not fully regulated or adequately supervised (e.g., education, employment and recruitment). In these areas, appointing a specific regulator (or regulators) to oversee the framework may have benefits. This would be similar to what EU Member States will need to do under the draft EU AI Act.
For the time being, the Government will monitor the evolution of foundation AI models. It aims to collaborate closely with the AI research community to understand both opportunities and risks prior to refining its AI framework.
Challenges also arise concerning the development of individual or joint regulatory guidance. With many regulators already resource-constrained and facing AI expertise gaps, developing AI guidance within 12 months may prove difficult. Regulators need clearer direction on where to focus their efforts. For instance, the ICO suggests the Government should first prioritise research into the types of guidance most valuable for businesses, such as sector-specific, cross-sector or use-case guidance. Currently, we see a multitude of non-statutory guidance emerging, e.g., for medical devices, the public sector and data protection. While helpful individually, these documents do not provide sufficient regulatory clarity for businesses. Where they are not fully complementary or compatible, they can create further complexity or confusion for AI developers and smaller organisations.
The proposed framework is designed to be complemented by tools like AI assurance techniques and industry technical standards. The Government will promote the use of these tools and collaborate with partners, such as the UK AI Standards Hub. These tools are crucial for effective AI governance and international coordination, e.g., via the International Organisation for Standardisation (ISO). However, as the Ada Lovelace Institute recently emphasised, assurance techniques and standards do and should focus on managing AI's procedural and technical challenges. Their effectiveness relies on policymakers elaborating the framework principles – such as fairness or transparency – in sufficient detail in any future guidance.
A major Government-commissioned review indicates that the UK has a narrow 1-2 year window to establish itself as a top destination for AI development. However, the UK faces stiff competition. Other jurisdictions, including the EU, have been moving at pace to position themselves as global AI authorities. The review underscores the importance of implementing an effective regulatory framework to achieve this goal. Such a framework must be flexible and proportionate but also provide sufficient regulatory clarity to boost investment and public trust in AI. As discussed above, there are areas where the proposed framework will face challenges in achieving this balance.
Regarding international alignment, the UK's proposed five principles closely align with the Organisation for Economic Co-operation and Development (OECD) values-based AI principles, which encourage the ethical use of AI. This should facilitate international coordination, as numerous other key international AI frameworks also align with the OECD’s framework – including the draft EU AI Act and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework.
However, at a more detailed level, the UK’s approach diverges significantly from that of other major jurisdictions. For example, the draft EU AI Act imposes much more granular and stringent requirements on organisations developing, distributing or using AI applications. This divergence, and the lack of a clear statutory footing, could also pose challenges in terms of international recognition or other forms of equivalence. Embedding a voluntary principles-based framework in international industry-led standards may also be challenging.
Finally, the timelines for finalising the framework are lengthy and, in some places, vague. This could prolong regulatory uncertainty and deprive businesses of the minimum level of clarity needed to encourage investment and adoption.
_______________________________________________________________________________________
References
1 See our paper on Building trustworthy AI for more details about key areas of interaction between conduct, data protection and ethics in financial services: https://www2.deloitte.com/uk/en/pages/financial-services/articles/building-trustworthy-ai.html
2 The DRCF is a voluntary cooperation forum that facilitates engagement between regulators on digital policy areas of mutual interest. It currently has four members: the FCA, ICO, CMA and Ofcom.