AI: Avoiding And Managing Disputes – Claims In Negligence

A supplier of AI, or a business using it to supply a service, may still owe a duty of care to the third parties they deal with, even if there is no formal contract in place. If a duty applies, the supplier will be required to exercise reasonable care.

Where the law already recognises a duty of care (for example, lawyers advising their clients, or doctors providing medical advice to patients), the use of AI by the service provider is unlikely to change that: the duty of care should still arise in the same way.

If a scenario arises where there is no (or no clear) recognised duty, the position is more uncertain. The court will need to consider the facts and apply the relevant criteria established under the law of negligence to determine whether a duty of care exists. This includes the nature, or proximity, of the parties' relationship, whether the damage that has occurred was foreseeable, and whether it is fair, just and reasonable to impose a duty of care. For example, if an AI product, or the ability to use it, is supplied at no cost, or has been used for an unintended purpose, the situation may not be clear cut.

What is reasonable care?

There is no one-size-fits-all answer to this. What amounts to reasonable care will depend on the circumstances of the parties' relationship and the facts of the case in issue; the standard of care is that of the hypothetical "reasonable person". It follows that the standard is assessed in purely objective terms, and what is expected will vary between, say, a surgeon undertaking an operation and a professional advisor.

The law is still evolving to keep pace with technological developments, so AI users may not yet fully understand, or be able to assess, the extent of their legal obligations.

Has the standard of care been met?

This question also poses complications. Many service providers making use of the technology regard the risk of simply relying on the output of an AI product as too high. One way of mitigating that risk is to ensure there is human oversight to review the results. However, even this might not provide a complete answer: what if the human supervisor fails to spot an error? Is it reasonable to rely on a human in the context of the service being delivered, and what errors is it reasonable to expect a human to spot?

What else is relevant to establishing negligence?

The same principles of causation, establishing a legally recognised loss and remoteness apply to negligence claims involving AI systems as to any other claim in negligence.

However, the defence of "contributory negligence" may apply if the service user bringing the claim was, in whole or in part, the author of their own misfortune. Service users bringing claims will face other hurdles, too, particularly with complex, interconnected AI systems, where it may be challenging to show exactly who is responsible and to prove the necessary link between the AI system, its output and the loss they allege to have suffered.

Reducing the risk of claims in negligence

The points raised in Part 1 of this series are relevant here, too. For example, AI developers need to consider how they can demonstrate that they have taken reasonable care throughout the development process, such as the selection of suitable data used to train the AI product and how the output is checked.

As noted, a human supervising the output is one safeguard; it makes commercial and legal sense to ensure technical experts are tasked with reviewing the output of the AI system before reliance is placed on it. It is also worth checking whether any industry guidance or codes of practice are available to help inform the expected standard of care in a given industry or situation. Clearly establishing a contractual framework that governs the relationship will also be key to reducing the risk of non-contractual negligence claims arising.

In Part 3 of this series, we will take a look at some of the interesting issues that arise from an IP perspective, with a particular focus on the application of copyright to AI products.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
