The rapid and widespread adoption of artificial intelligence (AI) systems (and generative AI in particular) has been one of the major topics discussed over the past few years by the public and, to some extent, by legal scholars and legislators.
Although the technology is still recent, successful use cases can be found across all industries, allowing businesses to create new opportunities or to streamline and simplify their existing processes. However, companies must understand and manage the legal risks to which they are exposed when they use AI systems in their activities. This may prove challenging, as national legislation has not yet caught up with this new technology and lawmakers may be tempted to adopt diverging regulatory approaches, complicating the international legal landscape that companies will have to navigate.
In this article, we provide a short overview of the EU and Swiss regulatory landscape on the use of AI systems, as well as of the other common legal risks that arise from their use (intellectual property, data protection, contractual liability).
European Union
Following a legislative process launched by the European Commission on 21st April 2021, the European Commission, the European Parliament and the Council reached a provisional agreement on 8th December 2023 on the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (the “EU AI Act”).
With the EU AI Act, the European Union lawmakers aim to establish comprehensive regulations for AI systems, governing their development, placing on the market, putting into service and use within the European Union, in conformity with the Union’s values. In particular, the EU AI Act seeks to guarantee the protection of health, safety, fundamental rights, democracy, the rule of law and the environment against the detrimental impacts of AI systems in the Union.
The EU AI Act will apply to any provider that develops, manages, customizes or implements AI systems within the EU, as well as to users of such AI systems within the EU, regardless of whether these providers or users are established in the EU. This extraterritorial effect means that, similarly to the General Data Protection Regulation (“GDPR”), foreign companies will have to comply with the EU AI Act if their activities fall within its scope of application. For these reasons, the EU AI Act emerges as the first legislative initiative of its kind and may influence AI policies in other jurisdictions.
Central to the EU AI Act is its risk-based regulatory approach, which aligns the level of regulatory intervention with the potential societal harm posed by an AI system. This approach categorizes AI systems based on the severity of the risk they present, imposing stricter regulations on those with the greatest capacity to cause harm.
Violations of the EU AI Act may result in fines set as a percentage of the offending company’s global annual turnover in the preceding financial year or a predetermined amount, whichever is higher. The provisional agreement on the EU AI Act notably provides for the following:
Because of the extraterritorial effect of the EU AI Act and the potential penalties involved, companies should already be preparing to comply with it.
Switzerland
There is currently no specific regulation of AI systems in the Swiss legal system. However, on 22nd November 2023, the Federal Council instructed the Federal Department of the Environment, Transport, Energy and Communications ("DETEC") to prepare, by the end of 2024, a report on possible regulatory approaches to AI systems, involving all federal agencies responsible for the legal areas affected. This analysis should lay the groundwork for a concrete legislative mandate for an AI regulatory proposal in 2025.
The Federal Council mentioned that the overview will notably focus on the following elements:
The approach taken by the Federal Council already highlights the role the EU AI Act will play in shaping AI policies, as a stated goal of the Federal Council is to adopt an approach that is compatible with the EU AI Act.
Other legal risks
Even in the absence of specific AI regulations, companies must in any case consider the other common legal risks associated with the use of AI systems, in particular those related to contractual liability, intellectual property rights and data protection. As a non-exhaustive list, the following examples can be mentioned:
Actions to take
Companies that develop, use or intend to use AI systems can already take appropriate measures today to address current legal risks and to prepare for the upcoming EU AI Act. In particular, companies should take the following actions:
Conclusion
In conclusion, although lawmakers have yet to catch up with the pace at which AI systems are being adopted and new uses discovered, companies should stay abreast of the forthcoming EU AI Act and address the legal risks currently associated with the use and development of AI systems.
If you would like to discuss this topic, please reach out to our key contacts below: