
EU Artificial Intelligence Act

What does this landmark regulation mean for Swiss companies?

The EU AI Act, the world's first comprehensive AI law, will enter into force 20 days after its publication in the EU’s Official Journal on 12 July 2024, i.e., on 1 August 2024. The Act will be implemented in a phased approach, and companies, including those in Switzerland, will face extensive new compliance requirements. The Act will significantly change how businesses develop, deploy, and manage AI systems, requiring them to align with strict regulatory standards. This article provides an overview of what companies need to know and how they can prepare.

The EU AI Act sets forth a comprehensive framework to address the potential risks associated with AI systems. Using a broad definition of AI, the legislation outlines extensive requirements and carries significant penalties for non-compliance.

Definition of an AI system

The EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This broad definition is designed to cover a wide range of AI technologies and applications, from simple automated systems to complex, self-learning algorithms.

The Act introduces a risk-based approach, categorising AI systems by use case and establishing requirements according to the risk category. It sets general-purpose AI models apart, subjecting them to a separate set of rules.

The risk-based approach

  1. Unacceptable risk – prohibited practices: These AI systems are banned due to the threat they pose to society, people's safety, or fundamental rights. Examples include social scoring systems, cognitive behavioural manipulation of vulnerable groups, and real-time biometric identification in public spaces (with limited exceptions for law enforcement).
  2. High-risk AI systems: These AI systems pose significant risks to health, safety, or fundamental rights – for example, many applications in recruiting, law enforcement, or critical infrastructure. They are subject to strict requirements and must undergo conformity assessments before being placed on the market.
  3. Limited risk – transparency obligations: These AI systems are subject to transparency obligations only. Users must be made aware that they are interacting with AI (for example, chatbots) or that content is artificially generated (for example, deepfakes).
  4. Other risk: These AI systems are not subject to any obligations under the Act. Examples include spam filters, inventory management systems, and AI-enabled video games.

General-Purpose AI models and systems

General-Purpose Artificial Intelligence (GPAI) refers to AI that can operate with significant generality and is designed to perform a broad spectrum of distinct tasks. The EU AI Act differentiates between GPAI models, such as OpenAI’s GPT-4, and GPAI systems that build on those models. Examples of GPAI systems include virtual personal assistants, such as Apple’s Siri, or translation services, such as Google Translate. The Act imposes additional obligations on providers of GPAI models with systemic risk due to their high-impact capabilities – for example, OpenAI for GPT-4. GPAI systems built on a GPAI model are assessed separately and may fall into any risk category; for instance, an assistant chatbot built on GPT-4 falls into the limited risk category and only needs to comply with transparency requirements.

Categories of entities that need to comply with the AI Act

The Act defines different types of entities – providers, deployers, importers, distributors, product manufacturers, and authorised representatives – each subject to different requirements. Providers, who develop AI systems and place them on the EU market, bear primary responsibility for ensuring compliance. Deployers, who use AI systems in their operations, must adhere to specific obligations, especially for high-risk and limited-risk AI systems.

EU AI Act extra-territorial impact: How does it affect Swiss companies?

For companies located outside of the EU, including those in Switzerland, the EU AI Act applies if they:

  • Develop or provide AI systems that are used within the EU
  • Use AI systems that produce outputs in the EU or affect individuals in the EU
  • Export AI systems into the EU market

Penalties – The higher the risk category, the higher the fine

The AI Act’s penalty regime is based on the nature of the violation, with fines increasing according to the risk category. Simply put, the higher the risk category, the higher the fine. The Act establishes a three-tiered system of penalties based on the severity of the infringement:

  • Prohibited AI practices: fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher
  • Non-compliance with other obligations, including those for high-risk AI systems: fines of up to EUR 15 million or 3% of total worldwide annual turnover
  • Supplying incorrect, incomplete, or misleading information to authorities: fines of up to EUR 7.5 million or 1% of total worldwide annual turnover

Here are six steps we have identified to get your organisation ready for the AI Act

If you would like to find out more about the EU AI Act and its implications for Swiss companies, please do not hesitate to contact us.
