The contribution of Artificial Intelligence (‘AI’) to innovation is growing. For example, the AI healthcare market is expected to reach $6.6 billion by 2021 [1] and global retailer spending on AI is predicted to reach $6.3 billion by 2022 [2]. Alongside the growth in inventors being assisted by AI, there will also be an increase in inventions created by AI itself. In this context, litigation surrounding such AI, and the innovations and inventions emanating from it, is likely to increase. There is also likely to be an increase in instances of patent infringement where the infringer is the AI-enabled machine rather than a human operator. However, this poses some problems for the law of patent infringement.

The Patents Act provides for both direct and indirect patent infringement [3]. However, the statute offers no guidance on how to resolve issues where AI is involved in patent infringement. Traditionally, the individual or corporation that manages or controls a machine will be liable in the event that AI infringes a patent. In many circumstances this rule functions well; however, two problems arise in the context of AI infringement:

  • the first is the question of who is liable if AI (without human intervention) infringes a patent and such infringement was not foreseeable when the AI was invented or programmed; and 
  • the second relates to how to establish whether a patent has been infringed – this is especially problematic when AI processing occurs in a black box (i.e. where the AI’s decision-making process is not evident without significant analysis).

Infringement without human intervention

The first problem arises in determining who is liable when AI infringes a patent in an unforeseeable way and without human intervention. If AI operates using its own neural network (such as DABUS, as discussed in my earlier article) it can generate novel ideas which are not foreseeable to the manager or controller of the machine.
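
To make the unforeseeability point concrete, the following is a minimal Python sketch, not a real inventing system: the function and parameter names are invented for illustration, and random weights stand in for parameters that a genuine system would learn from data. The point is that the operator never writes a rule producing any particular output.

```python
# A minimal sketch (numpy only; all names and shapes are hypothetical).
# The operator writes no rule mapping inputs to any particular output:
# behaviour is determined by the network's parameters, which in a real
# system would be learned from data rather than explicitly programmed.
import numpy as np

rng = np.random.default_rng(seed=42)

# Random stand-ins for learned parameters.
W1 = rng.normal(size=(8, 4))  # hidden-layer weights
W2 = rng.normal(size=(4, 3))  # output-layer weights

def generate_design(latent):
    """Map a latent vector to a candidate 'design' via two dense layers."""
    hidden = np.tanh(latent @ W1)  # non-linear hidden representation
    return np.tanh(hidden @ W2)    # output the operator did not specify

# The operator supplies only random noise; the specific design that
# emerges is not something they enumerated or could foresee in advance.
print(generate_design(rng.normal(size=8)))
```

In a production system the parameter count runs into the millions or billions, so the space of possible outputs is effectively impossible for a manager or controller to enumerate in advance.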

If such an invention infringes the patent rights of third parties, the manager or controller of the AI machine could be held liable for actions which were neither contemplated when the machine was created nor capable of being controlled. This outcome is problematic because an underlying principle of our legal system is that people should be able to predict when their actions break the law (or infringe somebody else’s rights). Where the manager or controller of the AI plays no role in the machine’s neural network and it operates autonomously, it is difficult to see how that individual can predict when they are likely to infringe another’s patent rights. As a matter of public policy, it is therefore difficult to justify a successful patent infringement claim against them.

Attributing the actions of AI to humans associated with the machine, whether or not the human is involved in the decision-making of that AI machine, also gives rise to issues for other areas of law. Similar questions of liability are likely to arise in areas such as personal injury law. For example, Tesla uses AI in the development of its self-driving cars; issues of liability will have to be considered in the event that a self-driving car is responsible for an accident causing the death or personal injury of an individual. The impact that AI will have on other areas of law remains to be seen; however, it is important to ensure that the approach taken in patent law is harmonised with the approach to AI in those other areas.

Ultimately, somebody will need to be responsible for loss caused by AI. The existing position may therefore continue to apply, and the manager or controller of the AI may continue to be liable for patent infringement or personal injury. However, the need to harmonise the legal approach across other areas of law may give rise to challenges for judges faced with ordering large damages awards against individuals who were unable to control the actions of AI. At a minimum, additional Parliamentary guidance to confirm that AI managers or controllers are liable for all acts of AI (whether foreseeable or not) would be beneficial.

Proving infringement

The second problem is evidential and is most relevant to patents for processes [4] or for products of patented processes [5]: if AI processing takes place within a black box, it may be difficult to establish that the AI infringes a patented process. This problem has two limbs:

  1. If AI decision-making and inventing cannot be interpreted, or it is difficult to interpret such processes, how can claimants prove that the AI has infringed their patent?
  2. Can a claimant establish exactly where the infringement takes place?

Firstly, claimants may have difficulty in proving that AI decision-making which takes place within a black box infringes a patent. Claimants may be unable, without significant disclosure (or perhaps even at all), to determine whether the AI infringes their patent. Given that disclosure can be an extremely costly process, and that courts have shown an unwillingness to encourage large amounts of disclosure, this imposes a significant burden on claimants, who may be required to commit significant time and funds to an action before being able to assess its prospects of success.
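
A short, hypothetical sketch illustrates the point: even if a defendant disclosed its model in full, what the claimant receives is numeric state rather than a readable description of the process (the parameter names and shapes below are invented for illustration).

```python
# A hypothetical sketch of what 'full disclosure' of a black box yields:
# numeric state, not a human-readable sequence of process steps. The
# parameter names and shapes below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
disclosed_model = {
    "layer1/weights": rng.normal(size=(128, 64)),
    "layer1/bias":    rng.normal(size=64),
    "layer2/weights": rng.normal(size=(64, 10)),
}

# Everything inspectable about the 'process' looks like this; relating
# it to the steps of a patented method claim requires expert analysis,
# not simple inspection.
for name, value in disclosed_model.items():
    print(f"{name}: shape={value.shape}, sample={value.ravel()[:3]}")
```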

Secondly, if a claimant is unable to identify exactly what decision-making process is taking place in the black box, they will also face problems establishing exactly where an infringement takes place. If an AI’s neural network is spread across the cloud or over servers in multiple jurisdictions, it may be very difficult to establish the location in which a patented process is carried out. Given that patents are national rights, and that infringement is only committed in the jurisdiction in which the right applies, such a situation may lead to another string of decisions on the location of a process, as was considered in William Hill [6], Illumina [7] and Motorola [8].
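
An entirely hypothetical sketch illustrates this jurisdictional limb: if the stages of an AI pipeline execute in different regions (the stage names and region identifiers below are assumptions, styled after cloud-provider regions), then no single jurisdiction hosts every step of the claimed process.

```python
# A hypothetical pipeline: each stage of the AI's processing runs in a
# different region, so no single jurisdiction hosts every step of the
# claimed process. Stage names and region identifiers are assumptions.
PIPELINE = [
    ("preprocess",  "eu-west-1"),   # input cleaning in one jurisdiction
    ("embed",       "us-east-1"),   # feature extraction in another
    ("infer",       "ap-south-1"),  # model inference in a third
    ("postprocess", "eu-west-2"),   # results assembled in a fourth
]

def locate_steps():
    """Print where each step of the process executes."""
    for step, region in PIPELINE:
        print(f"step '{step}' executes in region '{region}'")

locate_steps()
```

On facts like these, a claimant must show that the process, or its substantial effect, occurs within the jurisdiction in which the patent has effect – the very question grappled with in the cases cited above.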

Although there is no easy solution to the black box problem, potential claimants will need to be aware of the challenges of bringing an infringement claim against AI. These challenges may increase the risk of litigation and are important factors in determining the viability of a claim. Conversely, developers of AI may become dissuaded from seeking patent protection for their AI processes in circumstances where infringement will be difficult or impossible to prove and may fall back on trade secrets or other forms of IP protection for their innovations.

Conclusion

As the number of AI inventions increases, these issues are likely to come to the forefront of patent innovation and disputes. Given the possible impact of AI on a variety of areas of law, and the need to harmonise the legal approach to liability caused by independently acting AI, there is some uncertainty regarding the risk to managers or controllers of AI. Parliamentary or judicial clarification of this issue would be beneficial.

Moreover, the existence of black box decision-making places additional challenges in the path of potential patent claimants. When deciding whether to pursue a case, a claimant should be aware of the additional difficulty of proving infringement, and of proving that the infringement took place within the jurisdiction.

Footnotes

[1] Accenture - https://www.accenture.com/fi-en/insight-artificial-intelligence-healthcare

[2] Juniper Research - https://www.juniperresearch.com/press/press-releases/retailer-spending-on-ai-to-grow

[3] The Patents Act 1977

[4] s.60(1)(b) Patents Act 1977

[5] s.60(1)(c) Patents Act 1977

[6] Menashe Business Mercantile Ltd v William Hill Organization Ltd [2002] EWCA Civ 1702

[7] Illumina Inc v Premaitha Health Plc [2017] EWHC 2930 (Pat)

[8] Research In Motion UK Ltd v Motorola Inc [2010] EWHC 118 (Pat)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.