The EU AI Act, the world's first comprehensive AI law, was published in the EU’s Official Journal on 12 July 2024 and enters into force 20 days later, on 1 August 2024. The Act will be implemented in phases, and companies, including those in Switzerland, will face extensive new compliance requirements. It will significantly affect how businesses develop, deploy, and manage AI systems, requiring them to align with strict regulatory standards. This article provides an in-depth overview of what companies need to know and how they can prepare.
The EU AI Act sets forth a comprehensive framework to address the potential risks associated with AI systems. Using a broad definition of AI, the legislation outlines extensive requirements and carries significant penalties for non-compliance.
The EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The broad definition is designed to cover a wide range of AI technologies and applications, from simple automated systems to more complex, self-learning algorithms.
The Act introduces a risk-based approach, categorising AI systems based on their use case and establishing requirements according to the risk category. The Act sets apart general-purpose AI models, which are subject to a different set of rules.
General-Purpose Artificial Intelligence (GPAI) refers to AI that is able to operate with significant generality and is designed to execute a broad spectrum of distinct tasks. The EU AI Act differentiates between GPAI models, such as OpenAI’s GPT-4, and GPAI systems that utilise these models. Examples of GPAI systems include virtual personal assistants like Apple Siri or translation services, such as Google Translate. The AI Act imposes obligations on providers of GPAI models with systemic risk due to their high-impact capabilities, like OpenAI for GPT-4. GPAI systems built upon a GPAI model are assessed separately and may fall into any risk category. For instance, an assistant chatbot built upon GPT-4 falls into the limited-risk category and only needs to comply with transparency requirements.
The Act defines different types of entities, such as providers, deployers, importers, distributors, product manufacturers, and authorised representatives, each of which must comply with different requirements. Providers, who develop AI systems and place them on the EU market, are primarily responsible for ensuring compliance. Deployers, who use AI systems in their operations, must adhere to specific obligations, especially for high-risk and limited-risk AI systems.
For companies located outside of the EU, including those in Switzerland, the EU AI Act applies if they place AI systems on the EU market or put them into service in the EU, or if the output produced by their AI systems is used within the EU.
If you would like to find out more about the EU AI Act and its implications for Swiss companies, please do not hesitate to contact us.