It may become easier for individuals in the European Union who are harmed by AI systems to seek compensation, thanks to two new Directives proposed by the European Commission in September 2022. In conjunction with the proposed EU AI Act (“EUAIA”), the proposed Directive on adapting non-contractual civil liability rules to artificial intelligence (“AI Liability Directive”) and the proposal for the extension of existing EU product liability rules (“Updated Product Liability Directive”) would form a three-part regulatory system for preventing harm and regulating AI in the EU.
In this article, we introduce the new AI Liability Directive and Updated Product Liability Directive, and propose some foundational steps organisations deploying artificial intelligence can take to prepare for regulatory change.
Figure 1: A three-part regulatory system for AI in the EU, with upstream harm prevention provided by the EUAIA and downstream harm redress provided by the proposed directives.
For all the characteristics that make AI an attractive business tool – from cost-effectiveness and operational efficiency, through to its autonomy – the complexity and opacity of AI systems can make them difficult to explain and understand, especially for external stakeholders. Because of this, a person who believes they have been harmed may be unable to demonstrate that an act or omission involving an AI system was the cause of their loss. Consequently, they are likely to struggle to meet the requirements of existing fault-based civil liability laws and ultimately may find it impractical or impossible to bring forward a claim to recover damages.
The AI Liability Directive aims to address this by reducing the barriers to accessing justice where an AI system may be the cause of harm. Applying extraterritorially to providers, developers, or users of any AI systems operating within the European Union market, the Directive has two key elements:
1. Provision of evidence
The first element is that Member States’ national courts would be empowered to make accessing evidence easier for a person seeking to claim against a provider or user of an AI system for alleged harm suffered.
The proposed powers, which are only intended to be used if a claimant can demonstrate that its claim is plausible and that all proportionate steps to obtain the relevant evidence have already been taken, would:
The European Commission also recognises in its proposal the potential concern that organisations may have about disclosing confidential information and particularly trade secrets and proprietary information. Accordingly, the proposal makes clear that the interests of all parties, including third parties, will be taken into account in determining what information should be disclosed in support of a claim.
2. Rebuttable presumption of a causal link between a failed duty of care and harm caused by the AI system
The second element is that Member States’ national courts would be required to assume that there is a causal link between the fault of the defendant and the harmful output produced by the AI system or the failure to produce a relevant output when:
The defendant can rebut this presumed cause-and-effect relationship, for example, by providing evidence that its fault could not have caused the damage. Practically, this reinforces the importance of the stringent record-keeping procedures set out under the EUAIA.
The presumption will not apply to high-risk AI systems if the defendant can demonstrate that sufficient evidence and expertise are reasonably accessible for the claimant to prove a causal link, and it will only apply to non-high-risk AI systems where the court considers it excessively difficult for the claimant to prove the causal link.
The current EU product liability regime was enacted to provide a redress mechanism for people who suffer physical injury or damage to property due to products being defective, i.e., not being as safe as the public is entitled to expect. Unlike the proposals set out in the AI Liability Directive, which relate to fault-based claims, the current Product Liability Directive established a no-fault liability regime in order to provide certainty as to who is responsible in the event harm is caused by a defective product. Nonetheless, the burden of proof is still generally on the injured person to prove the damage they have suffered, the defectiveness of the product, and the causal link between the two. The regime generally makes the manufacturer of a product liable for its defects; where a product is imported into the EU, the importer is responsible instead.
The changes proposed for the Updated Product Liability Directive would bring AI systems within the scope of the product liability regime by:
In addition, the changes to the Product Liability Directive would further strengthen the regulatory coverage of Artificial Intelligence by:
These proposed directives are likely to evolve as they make their way through the EU legislative process before ultimately needing to be reflected in the national law of EU Member States. However, they already make clear the increasing expectations on, and likely liability of, developers, providers, users, manufacturers, and importers of AI systems. Organisations should be taking a proactive approach now in order to prepare for future regulatory requirements, and could begin by considering the following:
Focus on documentation
The AI Liability Directive and the Updated Product Liability Directive emphasise the importance of organisations having robust risk management frameworks that require accurate, timely, and comprehensive data and documentation to be maintained in relation to their AI systems. Failure to do so is likely to make defending claims more difficult and to result in organisations incurring further cost and time in rebutting unfavourable presumptions.
Systematise record-keeping processes
Organisations operating high-risk or non-high-risk AI systems should consider the appropriateness of their record-keeping systems and data management to ensure that they are able to comply with the requirements of the EUAIA and respond to requests for disclosure or provide evidence of compliance, should such requests arise.
Maintain an accurate inventory
A robust inventory of all AI systems an organisation operates will also become crucial as algorithmic and AI systems become more widespread. Many companies may not even be aware that they perform the kinds of activities and deploy the kinds of systems that fall within the EU’s broad definition of AI systems and that are the focus of the EUAIA and AI Liability Directive. Without a strong understanding of where AI is used throughout their operations, organisations cannot expect to ensure compliance with regulatory and legislative requirements.
In conjunction with the proposed EUAIA, these proposed directives represent substantial regulatory change for AI in the EU through a wide-reaching combination of upstream harm prevention and downstream harm redress.
Amidst so much change, organisations that take the initiative on regulatory preparedness can continue to create, innovate, and execute with confidence.
To understand more about the implications of the AI Liability Directive, the Updated Product Liability Directive, the EUAIA and more broadly how to prepare for upcoming AI regulation, please do get in touch.