Artificial Intelligence And Non-Contractual Civil Liability: What Happens If The AI Causes Damage?

EJASO

In non-contractual civil liability for damage caused by Artificial Intelligence (AI) systems, the main issue to be resolved is determining who should be liable for the damage caused by the AI system. In general, liability for products and services is fault-based: it requires negligence in the breach of a legal or contractual duty, the existence of certain damage (or foreseeable damage in the case of loss of profit), and a causal link between the act or omission and the damage caused.

Cases involving AI, however, tend to differ from the rest. One of the defining features of AI systems is the opacity of many of their outputs: in many cases we do not know how the system reached a given conclusion. As a result, the classic rules for attributing non-contractual civil liability described above do not work for every AI case, or would make it very difficult for victims to obtain compensation. On top of this, several agents or intervening parties may be liable for the damage: the data supplier, the algorithm designer, the model or system trainer, the configuration manager, the manufacturer and even the user.

Despite its importance, the European Union's AI Act leaves non-contractual civil liability outside its scope. The AI Act focuses on the obligations that AI systems must fulfil (compliance), not on resolving the problems that may arise from the damage caused by such systems. Instead of including non-contractual civil liability in the AI Act, the European Commission has opted to regulate it specifically through a Directive, which it considers the most suitable instrument because it provides the desired harmonisation effect and legal certainty, as well as the flexibility for Member States to transpose the rules into their national liability regimes without friction.

Thus, the future Directive on adapting non-contractual civil liability rules to AI (hereinafter, the "Proposal for a Directive") will apply to claims for damage caused by AI systems and will require transposition into national legal systems. The current rules on non-contractual civil liability do not specifically address the procedural difficulties associated with AI. For this reason, the Proposal for a Directive contains procedural measures aimed at facilitating the obtaining and disclosure of evidence in civil proceedings. It also seeks to resolve some of the problems of proving the causal link that arise from the opacity of AI systems.

The most important changes are summarised below:

Reversal of the burden of proof in certain cases

A first difficulty in imputing non-contractual civil liability for decisions made by an AI system is that, under current national liability rules, victims must prove a wrongful act or omission by the person who caused the damage. AI systems can in many cases be complex and opaque (the so-called "black box effect"), which can make it difficult for the claimant to prove non-compliance. For this reason, the Proposal for a Directive provides for a specific regime on the disclosure of evidence where damage may have been caused by an AI system.

Thus, under Article 3 of the Proposal for a Directive, national courts may order the provider of a high-risk AI system1 suspected of having caused damage to disclose relevant evidence, in order to facilitate proof of non-compliance. This obligation applies only where the provider refuses to disclose the relevant evidence about the system; the claimant must therefore first have made all proportionate attempts to gather that evidence from the defendant.

The purpose of this measure is, on the one hand, to give the claimant access to adequate information and evidence to substantiate the claim and, on the other, to ensure that there are effective means of identifying the persons potentially liable and of excluding wrongly identified defendants.

Presumption of causal link in the case of fault

It may also be difficult for claimants to prove the causal link between the non-compliance and the output produced by the AI system, or its failure to produce an output, that gave rise to the damage. For this reason, Article 4 of the Proposal for a Directive provides for a rebuttable presumption of causality in the case of fault. For this presumption to operate, all of the following conditions must be met:

- The claimant has demonstrated, or the court has presumed, the fault of the defendant.

- It can be considered reasonably likely that the fault influenced the output produced by the AI system, or the failure of the AI system to produce an output.

- The claimant has demonstrated that the output produced by the AI system, or the failure of the AI system to produce an output, gave rise to the damage.

In the case of a claim for damages against a provider of a high-risk AI system, this presumption will operate if the claimant has demonstrated that the provider failed to comply with any of the requirements listed in Article 4(2) of the Proposal for a Directive.

In the case of a claim for damages against a user of a high-risk AI system, the presumption will operate if the claimant proves that the user failed to comply with any of the requirements listed in Article 4(3) of the Proposal for a Directive.

For AI systems that are not high-risk (e.g. chatbots), the presumption will apply only where the national court considers it excessively difficult for the claimant to prove the causal link.

In the case of damage caused by the personal use of an AI system (i.e. in the course of a non-professional activity), the presumption will apply only when the defendant has materially interfered with the operation of the system or refuses to explain how it operates.

In any event, the presumption is rebuttable: it operates in the claimant's favour as regards the causal link, but it does not prevent the defendant from proving due diligence in the fulfilment of its obligations.

One issue discussed by the expert groups but ultimately not incorporated into the Proposal for a Directive is whether to introduce a strict liability regime, i.e. liability arising from the mere risk involved in the use of certain products (for example, vehicles). In the end, the fault-based liability regime has been maintained, qualified by the presumption of causal link and the reversal of the burden of proof described above.

However, it is likely that the current wording of the Proposal for a Directive will be modified before it is finally approved by the European Parliament and the Council. Once approved, it will enter into force twenty days after its publication in the Official Journal of the European Union and, thereafter, it will have to be transposed by the Member States within a maximum period of two years.

Footnote

1. As defined and classified under the EU AI Act; for example, systems used in education and vocational training, migration, asylum and border control management, etc.

 

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
