Current liability regulations
Manufacturers of AI systems are liable for defects in their technical systems under the German Civil Code (BGB) and the German Product Liability Act (ProdHaftG). Under general tort law, they are liable for damages in accordance with § 823(1) BGB. They may also be subject to special statutory liability under the ProdHaftG. However, this applies only if there is a “product” within the meaning of § 2 ProdHaftG, and under current legislation it is unclear whether and when software such as an AI system falls under that term. As a result, liability questions for manufacturers of AI systems have so far remained largely unresolved.
New legislative proposals and their objectives
The aim of the proposal for a Directive of the European Parliament and of the Council on liability for defective products of 28 September 2022 (Draft Product Liability Directive) is to adapt the current EU regulations on product liability to ecological and digital changes and, in particular, to new technologies such as AI. Above all, the Draft Product Liability Directive aims to put an end to the ongoing dispute as to whether software falls under the product definition of § 2 ProdHaftG. According to Art. 4(1) of the Draft, software is expressly covered by the term “product”.
The proposal for a Directive of the European Parliament and of the Council adapting the rules on non-contractual civil liability to AI of 28 September 2022 (Directive on AI Liability) is at the center of the planned changes. The aim of this Directive is to create a uniform level of protection for damages caused by AI systems in the EU, in particular by facilitating access to evidence, the presumption of a breach of the duty of care, and the presumption of causality between fault and the AI result.
The two proposed Directives are complementary. Although they introduce similar instruments – such as disclosure of evidence or simplification of the burden of proof – they differ in important aspects. For example, claims for damages under the Directive on AI Liability can be asserted by both consumers and legal entities. Claims under the Draft Product Liability Directive, on the other hand, are only available to consumers.
The proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on AI and amending certain Union acts (Artificial Intelligence Act) – last amended on 14 June 2023 – mainly contains safety-oriented provisions aimed at reducing risks and preventing damage in advance. If damage cannot be prevented, the liability regime under the AI Liability Directive will apply.
The AI Liability Directive
In its current form, the AI Liability Directive does not establish strict liability for the use of AI. Its scope extends only to “non-contractual fault-based civil law claims for damages”, i.e., it applies solely in tort and not to contractual claims.
Potential conflicts may arise because claimants could seek to use knowledge obtained through the disclosure of evidence under Art. 3 of the AI Liability Directive not only for tort claims but also to enforce contractual claims for damages. They could thereby benefit from the advantages of contractual liability under German law, such as the compensability of financial loss, the presumption of fault (§ 280(1) sentence 2 BGB) and the absence of the possibility of exculpation.
Easier access to evidence
The Directive provides for easier access to evidence. Member States will be obliged to ensure that national courts are empowered to order the disclosure of evidence relating to a high-risk AI system at the request of a (potential) claimant if the latter can sufficiently demonstrate the plausibility of a claim for damages. The term “high-risk AI system” is based on Art. 6 of the Artificial Intelligence Act.
According to the current draft, injured parties who have not yet filed a lawsuit are entitled to information. However, the right to information is limited to what is necessary and proportionate to support a claim for damages by a (potential) claimant.
Presumption of a breach of duty of care
If a defendant fails to comply with a national court’s order to produce evidence in the context of a claim for damages, the AI Liability Directive presumes that the defendant is in breach of its relevant duty of care (Art. 3(5) of the AI Liability Directive). The defendant has the opportunity to rebut this presumption.
Presumption of causality between fault and AI outcome
Another core element of the AI Liability Directive is the presumption of causality between the defendant’s fault and the result produced by AI (Art. 4(1) of the AI Liability Directive). This is because proving a causal link between non-compliance with the regulations and the AI result can be extremely difficult in practice.
The Directive refers to an AI result, which is likely to be understood as the direct, not further processed output of an AI system. If several AI systems are used in a vehicle, for instance, the AI result is taken to be only the specific output of a single AI system, e.g., the evaluation of a camera image or the recognition of a person.
The presumption rule ultimately has three requirements:
- There must be fault. This can either result from a breach of a relevant duty of care or it is presumed if the defendant failed to comply with an order to disclose evidence.
- There must be a possibility of a causal link between the fault and the AI result. It must be reasonable to assume that the defendant’s fault influenced the result produced by the AI system, and the claimant must demonstrate the possibility of this causal relationship before the presumption applies at all. Under German legal doctrine, Art. 4(1) of the AI Liability Directive therefore amounts to a “lowering of the standard of proof” rather than a genuine “legal presumption”.
- The causality between the AI result and the damage must be proven. The claimant must therefore prove that the result generated by the AI caused the damage. This intermediate step via the AI result has no equivalent under German tort law: according to § 823(1) BGB, the claimant must prove both the causality between the breach of duty of care and the infringement of a legal interest, and the causality between that infringement and the damage.
Conclusion
The two Draft Directives and the Artificial Intelligence Act are still going through the European legislative process. Experience shows that it will take several years before they are adopted and enter into force. Subsequently, directives are usually given a period of several years to be transposed into national law. Consequently, the two Draft Directives are not expected to be implemented into German law until 2025/2026 at the earliest.
Author

McDermott Will & Emery, Munich
Attorney-at-Law, Partner
Author

McDermott Will & Emery, Munich
Attorney-at-Law, Associate
