
Key developments proposed for the EU AI Act as it moves into the latter stages

The final month of 2022 saw lawmakers take a major step towards finalising the EU Artificial Intelligence Act (the Act) – legislation that will radically shape how AI systems are developed and operated within the European Union. Scheduled for enactment early next year, the Act will be the world’s first broad set of standards for regulating or banning certain uses of AI.[i]

Adopted on 6 December, the Council of the European Union’s compromise text (its “general approach”) proposes significant changes to the draft law presented by the European Commission in April 2021.

Key developments (discussed below):

  • The definition of AI systems was narrowed
  • Qualified exceptions to the high-risk AI categories were introduced
  • The European Commission can add or remove high-risk categories post-enactment, and new high-risk AI categories were introduced
  • The concept of “General Purpose AI systems” was added, though substantive guidance will only come later through implementing acts
  • The use of AI systems for “social scoring” was prohibited in the private sector

Other notable developments:

  • Providers of high-risk AI systems, as well as public agencies using high-risk AI systems, must be registered in the EU database
  • Emotion recognition systems must be disclosed to the people they are used on, whereas AI systems used for creative or artistic purposes no longer require disclosure
  • The Act would not apply to AI systems used across most defence, security, and immigration contexts
  • Most of the Act should not apply to AI systems and their outputs used for the sole purpose of scientific research and development, or to “purely personal non-professional activity”

Results of a recent survey conducted by the German AI Association, KI Bundesverband, suggest that industry participants are preparing for more disruption than the European Commission forecast. Of the 113 EU-based AI startups surveyed, 33% believed their systems would qualify as high-risk under the new legislation – more than double the 5-15% predicted by the Commission’s Impact Assessment – suggesting that many more firms than expected will need to greatly enhance their regulatory readiness and internal controls.

Below are some of our additional insights on the key developments proposed by the compromise text (as noted above):

Changes to the definition of “AI system” may make inventory assessment more complex

The compromise text would exclude simpler software systems from the scope of the AI Act by specifying that the law applies only to systems “developed through machine learning approaches and logic- and knowledge-based approaches.” These concepts are described at some length in Recitals 6a and 6b of the Act.

The addition of specific techniques within the definition of AI system is a notable departure from the European Commission’s technique-agnostic approach – originally intended to make the regulation future-proof.

The proposed amendment would make self-assessment a two-part process. First, firms would assess whether the AI system was developed using the specified techniques or approaches. Second, firms would consider whether the system has “elements of autonomy” – described by Recital 6 as the degree to which the AI system functions without human involvement. As part of this process, we expect that firms may need to look at both quantitative and qualitative factors, as well as nuances such as how human oversight is operationalised, the automation’s role in delivering products or services, and how the outputs will be relied upon. A minimal sketch of this two-step assessment follows.
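
To make the two-step test concrete, here is a minimal Python sketch of how a firm’s scoping tooling might record it. The technique labels, field names, and the treatment of autonomy as a simple flag are our own illustrative assumptions rather than terms defined by the Act – in practice, autonomy is a matter of degree and would need documented, qualitative judgment.

    from dataclasses import dataclass

    # Techniques named in the narrowed definition (see Recitals 6a and 6b).
    COVERED_TECHNIQUES = {
        "machine_learning",           # supervised, unsupervised, reinforcement learning, ...
        "logic_and_knowledge_based",  # knowledge bases, inference engines, expert systems, ...
    }

    @dataclass
    class SystemProfile:
        name: str
        techniques: set[str]  # development approaches used to build the system
        autonomy: bool        # simplification: does it function without human involvement?

    def in_scope_of_ai_act(profile: SystemProfile) -> bool:
        """Step 1: was a covered technique used? Step 2: elements of autonomy?"""
        uses_covered_technique = bool(profile.techniques & COVERED_TECHNIQUES)
        return uses_covered_technique and profile.autonomy

    # A rules-based workflow tool with no autonomous behaviour would fall
    # outside the narrowed definition despite using a covered technique.
    tool = SystemProfile("invoice_router", {"logic_and_knowledge_based"}, autonomy=False)
    print(in_scope_of_ai_act(tool))  # False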

Changes to the “high-risk” criteria may make classification more challenging

Article 6(2) of the compromise text adds provisions that would exempt AI systems which are “purely accessory” from being classified as high-risk. Recital 32 explains this as situations where an AI system’s outputs are of “negligible or minor relevance for … action or decision.”

Before this amendment, any AI system falling within the categories listed in Annex III was considered high-risk without exception. For example, if an AI system were “intended to be used for recruitment”, every AI system contributing to the recruitment process was plausibly high-risk. Now, firms will also need to assess whether their AI system’s outputs have “a high degree of importance in respect of the relevant action or decision.” With these new provisions, we expect classification may be influenced by factors such as the following (a simple screening sketch appears after the list):

  • The AI system’s intended use – does it relate to a high-risk category?
  • The kinds of outputs produced by the AI system – are these purely accessory, mere guidance, or do they automatically determine the outcome?
  • How human decision-makers ought to use those outputs – are staff aware of the limitations of the AI systems they work with and trained to consider these in their own actions and decisions?
  • How human decision-makers are actually using AI outputs – are staff appropriately relying on the outputs of the AI systems they work with, and does the firm have governance procedures to monitor this?
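
The sketch below, in the same illustrative Python style, shows one way these factors might be combined into an initial screen for the “purely accessory” exemption. The enum values, parameter names, and the rule that all mitigating factors must be present are assumptions for illustration – they are not thresholds from the compromise text, and the output is a prompt for fuller assessment, not a conclusion.

    from enum import Enum

    class OutputRole(Enum):
        PURELY_ACCESSORY = 1  # negligible or minor relevance to the action or decision
        GUIDANCE = 2          # informs the decision, but humans decide independently
        DETERMINATIVE = 3     # effectively determines the outcome

    def screen_high_risk(in_annex_iii_category: bool,
                         output_role: OutputRole,
                         staff_trained_on_limits: bool,
                         reliance_monitored: bool) -> str:
        """Combine the four factors above into an initial screening result."""
        if not in_annex_iii_category:
            return "outside the Annex III high-risk categories"
        if (output_role is OutputRole.PURELY_ACCESSORY
                and staff_trained_on_limits and reliance_monitored):
            return "candidate for the purely-accessory exemption; document the assessment"
        return "treat as high-risk pending fuller assessment"

    # A CV-scoring tool whose outputs drive shortlisting decisions:
    print(screen_high_risk(True, OutputRole.DETERMINATIVE, True, True))
    # -> treat as high-risk pending fuller assessment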

Firms that believe their AI systems may be exempt from high-risk classification under the compromise text will need to look more broadly than the AI system itself. Failing to consider all relevant factors could result in firms classifying their AI systems as lower risk than they really are.

The European Commission is empowered to add or remove high-risk categories

Assuming the threshold for a change is met, the compromise text empowers the European Commission to add or remove high-risk categories through delegated acts after the Act comes into force.

The effect is that firms producing or using AI systems currently classified as limited-risk could find themselves subject to expanded compliance obligations well after the Act comes into force. It is not yet clear how quickly such changes could occur, or what the enforcement timeframes would be once a change has been made. Firms should maintain an inventory of the AI systems they are developing or deploying and periodically reassess these against any changes to the Act’s high-risk categories, as sketched below.
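
As a minimal sketch (with illustrative category labels and field names of our own invention, not terms from the Act), an AI inventory that records each system’s intended purpose against Annex III categories makes this periodic reassessment straightforward:

    from dataclasses import dataclass, field

    # Illustrative labels only; the authoritative list is Annex III as
    # amended by any future delegated acts.
    HIGH_RISK_CATEGORIES = {"recruitment", "credit_scoring", "critical_digital_infrastructure"}

    @dataclass
    class InventoryEntry:
        system_name: str
        intended_purpose: str
        annex_iii_categories: set[str] = field(default_factory=set)

    def reassess(inventory: list[InventoryEntry], categories: set[str]) -> list[str]:
        """Flag systems whose recorded categories intersect the current high-risk list."""
        return [e.system_name for e in inventory if e.annex_iii_categories & categories]

    inventory = [
        InventoryEntry("cv_ranker", "shortlist job applicants", {"recruitment"}),
        InventoryEntry("faq_chatbot", "answer product questions"),
    ]
    print(reassess(inventory, HIGH_RISK_CATEGORIES))  # ['cv_ranker']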

Additionally, the general approach has made several changes to the current list of high-risk AI systems. New categories of AI systems deemed high-risk include risk assessment and pricing in life and health insurance, and critical digital infrastructure. Systems no longer deemed high-risk include crime analytics, deepfake detection by law enforcement, and AI systems used to verify the authenticity of travel documents.

General Purpose AI Systems (GPAIs) will require further legislative guidance

The compromise text introduces the concept of GPAIs, defining these as AI systems that are “intended by the provider to perform generally applicable functions, such as image/speech recognition” and that may be used “in a plurality of contexts.” Notably, where GPAIs are components of other high-risk AI systems, they will be subject to some of the same compliance obligations. The compromise text would also require providers of GPAIs to cooperate and share information with providers of high-risk systems to facilitate the latter’s compliance with the Act. Further detail on the compliance obligations of GPAI providers is reserved for future implementing acts, to be published within 18 months of the Act entering into force.

This proposed change could have significant ramifications. In the KI Bundesverband survey (discussed above), 45% of respondents considered their AI systems to be GPAIs. As such, many firms may be waiting for implementing acts to learn the full extent of their obligations.

The prohibition on social scoring could impact credit and insurance industries

The compromise text would extend to private actors the prohibition on using AI for “social scoring.” This concept is described as AI systems that “evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics.”

Previously, this prohibition applied only to public authorities. Extending it to the private sector means that some of the more common commercial AI activities resembling social scoring could become illegal. Firms that use AI systems to assess a person’s eligibility for credit or insurance should pay close attention to this change and consider whether their current use of AI systems may qualify as social scoring within the meaning of the Act; a simple screening sketch follows.
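
For illustration only, the sketch below shows how a firm might triage use cases against the quoted definition. The field names and the simple rule are our own assumptions; a “True” result would only signal the need for detailed legal review, not a conclusion that the use is prohibited.

    from dataclasses import dataclass

    @dataclass
    class ScoringUseCase:
        evaluates_trustworthiness: bool       # scores or classifies natural persons
        social_behaviour_multi_context: bool  # uses behaviour observed across multiple contexts
        personal_characteristics: bool        # uses known or predicted traits

    def may_be_social_scoring(uc: ScoringUseCase) -> bool:
        """Mirror the quoted definition: trustworthiness evaluation based on
        social behaviour in multiple contexts or personal characteristics."""
        return uc.evaluates_trustworthiness and (
            uc.social_behaviour_multi_context or uc.personal_characteristics)

    # A credit model that scores applicants partly on social media activity:
    case = ScoringUseCase(True, True, False)
    print(may_be_social_scoring(case))  # True -> escalate for legal review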

Preparing for the EU AI Act

The Council of the European Union will now enter negotiations with the European Parliament, which will assess the compromise text and is likely to adopt a counter-position. Assuming the two positions can be reconciled without undue delay, the EU AI Act would pass into law in early 2024.

With AI systems now widely used across all manner of industries and public services, a vast array of organisations must begin reviewing how this legislation will affect their core business activities. This includes any organisation whose products or services will interact with European citizens, regardless of where the organisation is located. Deloitte’s Algorithm and AI Assurance team is closely tracking the AI Act as it takes shape. To discuss any challenges your organisation faces, from the classification of your AI systems to mandatory conformity assessments, please reach out to us.

_______________________________________________________________________________________________

References: