An autonomous vehicle is heading into another vehicle with five people inside. It cannot brake in time. The two options are: to do nothing and allow itself to crash into the vehicle, or to divert its trajectory and crash into a wall, almost certainly killing the autonomous vehicle's sole occupant. This is the Artificial Intelligence version of the 'Trolley Problem' - at its core a philosophical and ethical problem tackled from a number of perspectives, not least the utilitarian approach.

This moral question, which previously 'merely' troubled the mind, now troubles us in an actual, physical and even legal way - in the form of 'embodied AI'. From a legal point of view, the question is: "who would be responsible in such a scenario: the AI, the computer programmers, the manufacturers, the vendors or the vehicle's occupant?". The reality is that the courts, when faced with an accident involving an AI-powered machine, would not revert to philosophical arguments but would be confined to enforcing the letter of the law - primarily laws equipped to assign responsibility to a human for malicious or negligent acts or omissions. Yet to keep the status quo would be dangerous, since judges would be confined to black letter law even where it produced unjust results.

Hence the responsibility gap. Indeed, probably the single largest hurdle before the mass release of AI-powered machines is that of assigning responsibility in the case of an accident. Malta's non-contractual liability regime (tort law) is dictated by fault. The gist of the responsibility problem is that with AI-powered machines, for example fully autonomous vehicles which operate without human oversight, any given accident can hardly be said to be the fault of the owner or the human 'driver', who is now more akin to a passenger or bystander.

As one of the most pervasive, and indeed one of the most immediate, AI-related technologies, autonomous vehicles ('AVs') shall be taken as the main case study for present purposes. AVs combine complex software, in the form of AI, with a corporeal presence which can have an actual physical impact on this world – as opposed to software which exists only in the virtual world. AVs thus represent what is broadly referred to as embodied AI or, in other words, a robot. A widely accepted definition of a robot is that contained in Mataric's 'The Robotics Primer', that is: "A robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals."1 The key terms to take from this definition - illustrated in the sketch following the list below - are that a robot:

  • is autonomous, broadly meaning that it can act independently of a human;
  • exists in and interacts with the physical world, and is therefore also limited to what is mechanically possible;
  • can sense its environment, meaning that it has sensors (e.g. for vision or sound) through which it can collect data from its surroundings;
  • can act to achieve goals, meaning that it can have a physical impact on the world.
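For illustration only, the hedged sketch below recasts this definition as a minimal sense-act loop in Python. All names, thresholds and sensor values are invented for this example and bear no relation to any real AV software.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """Data collected from the robot's surroundings (hypothetical fields)."""
    obstacle_ahead: bool  # e.g. derived from camera/lidar input
    distance_m: float     # distance to the obstacle, in metres


class AutonomousVehicle:
    """A toy version of Mataric's robot: it senses the world and acts on it."""

    def sense(self) -> Reading:
        # Stand-in for real sensor input; here we simply assume a clear road.
        return Reading(obstacle_ahead=False, distance_m=100.0)

    def act(self, reading: Reading) -> str:
        # The decision is taken autonomously: no human chooses whether to brake.
        if reading.obstacle_ahead and reading.distance_m < 30.0:
            return "brake"
        return "continue"


if __name__ == "__main__":
    av = AutonomousVehicle()
    print(av.act(av.sense()))  # -> "continue"
```

Even this toy shows why the human in the vehicle resembles a bystander: every decision in the loop is taken by the system itself.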

The above features, combined with AI's ability to adapt or learn from its surroundings (an ability powered by machine learning algorithms), including from its trials as well as its errors, are the features which are most challenging to existing legal systems. We are therefore faced with two options: to do nothing, keep the status quo and leave the assigning of responsibility/liability in the hands of the courts; or to pre-emptively prepare for the mass release of AVs and similar AI technologies through regulation.

" Keeping the Status Quo

Is the current legal system equipped to deal with the vast potential impact on responsibility regimes that AI promises to bring? What would happen under the current regime, and which laws would take effect, if a robot were to cause an accident in Malta? These are questions which need answering in the interest of industry players as well as of the public at large.

The extent to which a party can be found responsible is dictated by the applicable law of choice, that is, for our purposes: contract law, tort law and/or product liability law, which often go hand-in-hand but which may have different consequences for interested parties.2

In particular, if a party were the victim of an accident involving an AV, this would pose the following questions to the current legal system:

  • Who is liable for the damages suffered, i.e. Who do I sue?
  • How can I prove that the defendant/s caused the damages suffered, i.e. How can I prove it?

1. Who do I sue?

Can the robot itself be found responsible?

The starting point in assigning responsibility would be to take a look at the provisions of tort law. However, a glance at the provisions on responsibility in tort immediately suggests that these were drafted with a natural person in mind. For example, Article 1031 of the Civil Code states that "Every person...shall be liable for the damage which occurs through his fault." In other words, every person has a duty of care, to make use of his rights within the proper limits. A product is not a person (so far), and to say that an AI, e.g. an AV, has a duty of care (even if granted legal personality) would be to stretch current interpretations too far. Therefore, it is clear from the start that fault cannot, at least on the basis of the current legal framework, be attributed to the robot itself. Whether we should grant legal personality to robots is another matter altogether.

Could the owners of the car be found responsible?

It could be argued that, by extension of their duty of care, human drivers or owners of autonomous vehicles may be vicariously liable for such vehicles (under tort law), just as the owner of an animal may be, by virtue of their responsibility to the public for something which ought to be within their control. It would, however, be unreasonable and unwise to attribute such a form of strict or objective liability to drivers or owners of AVs if the vehicles are to be adopted on a widespread basis. Firstly, what use are AVs if drivers/owners are required to keep a constant watchful eye on the driving of the vehicle? Secondly, it is probably easier to train a dog than it is to train an AV, and the average consumer cannot be expected to understand the workings of an AV, so how can they be expected to care for it? Finally, it is arguable that few would be willing to risk buying AVs if vicarious liability were attached to them. It is thus clear, at least to this author, that vicarious liability is not a viable option.

Could the vendor or the producer/manufacturer be made responsible?

The Vendor

When it comes to attribution of liability to the vendor of an AV, one may primarily resort to contract law. If we take a B2C scenario, the vendor (e.g. an auto dealer) is bound to carry out two principal obligations, that is, to deliver and to warrant the product that is sold (Article 1378, Civil Code). In the case of AVs, the obligation we are most concerned with is the latter, the warranty against latent defects. This warranty is covered by Article 1424 of the Civil Code (a similar provision exists in the Consumer Affairs Act, providing a remedy in cases of a 'lack of conformity'), describing latent defects as defects which exist at the time the contract was made and which:

  • "render [the product] unfit for the use for which it is intended;
  • which diminish its value to such an extent that the buyer would not have bought it or;
  • would have tendered a small price, if [the buyer] had been aware of them."

In parallel to this, it is necessary to refer back to the main features of robots, and particularly their ability to learn, as fuelled by machine learning algorithms and as present in most AI systems. Considering that AVs learn as they drive on the road, how can a vendor be made to answer for latent defects (or a lack of conformity) which AVs 'learn' AFTER the time the contract was concluded? This question alone complicates the task of attributing responsibility to the vendor under contract law.
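A hedged illustration of why the timing matters: in the toy online-learning sketch below (all names and values invented), the model's behaviour at the time of sale differs from its behaviour after subsequent road experience has updated its parameters, so any 'defect' in the later behaviour simply did not exist when the contract was concluded.

```python
# Toy online-learning update (invented values): the decision threshold is
# nudged after each 'journey', so post-sale behaviour can diverge from the
# behaviour that existed when the contract was concluded.
threshold = 0.5          # behaviour at the time of sale
LEARNING_RATE = 0.5      # exaggerated so the drift is visible in three steps

def brakes_for(hazard_score: float) -> bool:
    """Would the AV brake for a hazard of this severity?"""
    return hazard_score > threshold

print(brakes_for(0.6))   # True: at the time of sale the AV brakes

# Post-sale experience: journeys where braking proved unnecessary pull
# the threshold upwards.
for observed_hazard in (0.6, 0.65, 0.7):
    threshold += LEARNING_RATE * (observed_hazard - threshold)

print(brakes_for(0.6))   # False: the 'defect' arose only after learning
```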

The Producer/Manufacturer

In a claim for damages caused by a product, interested parties may, besides pursuing an action against the actual front-end vendor of the product concerned, pursue the producer or manufacturer instead – this is made possible by product liability law. The Consumer Affairs Act states that the 'producer' or 'manufacturer' of products (including manufacturers/producers of the whole, e.g. the chassis of the car, or of a part, e.g. the wheels or software of an AV) shall be liable for damages caused by defective products which they produce or manufacture.

It has been established above that it would be unreasonable, if not impossible, to attribute responsibility for damages in an accident caused by an AV to consumers/drivers. This means that the only course of action would be to sue those responsible for the development and manufacture of the car, its parts or its software.

However, even if, by elimination of other parties, it is the producers or manufacturers of AVs who must be held responsible for an accident caused by AI, how can I prove that they are responsible for the damages suffered?

2. How can I prove it?

Product liability law to the rescue?

Product liability law seeks to establish an equilibrium between the expertise and financial power of the producer/manufacturer and that of the consumer, inter alia by placing strict liability on such producer/manufacturer. Moreover, it protects the interests of consumers, demanding that they are sold products which are safe and which conform to their description. Just as you wouldn't expect your microwave to explode upon use, neither would you expect your AV to crash into a wall.

Due to the strict liability introduced by product liability law, injured parties must merely prove:

(a) that damage actually occurred;

(b) that the product was defective; and

(c) that a causal relationship existed between the defect (b) and the resulting damage (a).

It is asserted that, in the case of an accident caused by an AV, the accident will in itself constitute first-hand evidence that damage has occurred and that the product was defective. Therefore, the only task that remains is to prove a causal relationship (c) between (a) and (b). To prove this, one would need to show, inter alia, that the producer/manufacturer failed to properly warn its customers of any possible dangers of the vehicle, or that the AV's safety systems were not up to scratch. In an analogous situation, the operation of aircraft is now a highly automated affair. It has been reported that the lack of manual intervention mechanisms in certain aircraft was the cause of recent fatal plane crashes in Ethiopia and Indonesia. Fingers are thus being pointed at the manufacturers of the aircraft for the damage caused.

In light of the cutting-edge technology at the core of our discussion, it is probable that manufacturers/producers faced with a lawsuit will invoke the 'state of the art' defence. That is, manufacturers/producers will likely argue that the damage caused by the AV was unforeseeable, in that the scientific and technical knowledge at the time the product was put into circulation did not enable the discovery of the defect which led to the accident.

This does not even take into consideration that the perceived 'defect' may arguably not be a defect at all. The AI might very well have made an informed decision and consciously (a word with implications that veer beyond the scope of this paper) chosen one course of action over another. To return to the example mentioned above, the AI might have decided that crashing into the wall would result in fewer fatalities than allowing the oncoming vehicle to collide with it. Can the heirs or dependants of the deceased passenger successfully argue that there was a 'defect' here? Wouldn't the manufacturer be able to claim that the AI did what it was programmed (or taught itself) to do?
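To make the point concrete, here is a deliberately crude, hypothetical sketch of such a decision rule. The action labels and fatality estimates are invented for illustration; no real AV is claimed to work this way.

```python
# Hypothetical, crudely utilitarian decision rule: pick whichever action
# minimises expected fatalities. Labels and figures are invented.
expected_fatalities = {
    "stay_course": 5.0,     # collide with the oncoming vehicle (five occupants)
    "swerve_to_wall": 1.0,  # hit the wall, killing the AV's sole occupant
}

def choose_action(options: dict[str, float]) -> str:
    """Return the option with the lowest expected fatality count."""
    return min(options, key=options.get)

print(choose_action(expected_fatalities))  # -> "swerve_to_wall"
```

On these (invented) numbers, the swerve is not a malfunction but the rule working exactly as specified - which is precisely why calling it a 'defect' is contestable.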

" The opacity problem

It has already been highlighted above that AI's ability to learn makes it an arduous task to prove that a vendor is responsible for an allegedly defective AV. In fact, the machine learning algorithms fuelling such ability to learn are sometimes referred to as 'black box algorithms'. In other words, the logic behind the decisions taken by AI systems such as AVs is either: intentionally opaque, in that it is protected by trade secrets or intellectual property rights; or even unintentionally opaque, in that the code behind such AI systems is so complex that even experts in computer programming are unable to interpret the logic behind such decisions. The latter was reportedly the case in a Facebook experiment, where two AI-powered machines were shut down after they started talking to each other in their own invented language, which not even their own developers could understand.
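The following toy example (all values invented) hints at why unintentional opacity arises: even where a model's learned parameters can be printed out in full, they do not translate into a human-readable rule that a court expert could interpret. A production AV model would have millions of such parameters rather than eight.

```python
import random

random.seed(0)

# Eight 'learned' parameters standing in for the millions in a real network.
weights = [random.uniform(-1.0, 1.0) for _ in range(8)]

def decide(sensors: list[float]) -> str:
    """Brake or continue based on a weighted sum of sensor inputs."""
    score = sum(w * x for w, x in zip(weights, sensors))
    return "brake" if score > 0 else "continue"

reading = [0.4, 0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.5]
print(decide(reading))  # a decision is produced...
print(weights)          # ...but the raw numbers explain nothing about 'why'
```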

As long as the opacity problem persists, and considering the sheer impossibility of peering inside the black box, for any party to prove that the defect was:

(a) discernible to the consumer; and/or

(b) present at the time of conclusion of the contract

could prove to be an insurmountable task.

" Logistical problems

Even though product liability law may present an option at law to bridge the responsibility gap, instituting a case against producers/manufacturers presents several financial and logistical problems in addition to the conceptual issues outlined in the preceding paragraphs.

One must keep in mind that the manufacture of ordinary cars is already a highly technical affair, often requiring a wide array of know-how and the involvement of numerous experts in different fields. The specialist nature of AVs means that, besides the need to involve manufacturers of engines, wheels, chassis etc., to produce such a vehicle one would also require expertise in AI and robotics. In turn, the development of AI may rely on, or be the end-product of, countless bits and pieces of software from different specialists all around the world.

Under the current framework, bringing evidence in a lawsuit would be a costly and time-consuming affair for all parties, especially considering the likely need for court experts versed in AI technology and digital evidence, who may need to be flown in from all over the world.

Finally, owing to the 'who do I sue?' dilemma outlined above, how does one even determine which witnesses to call in a suit? What about the AI itself? If the future holds legal principles that assign liability to AI by way of a legal fiction yet to be determined (similarly to what is done with companies), would the AI have any 'rights'? Could the AI (or at least logs of the AI's thought process) be 'heard' by a court?

Conclusion

Considering all of the above, although the current framework of product liability law could, in principle, be equipped to deal with AI systems, at a practical level there are too many uncertainties for products such as AVs to be rolled out to the public en masse. It is clear that some sort of action or strategy is needed. It would be reckless to allow the courts to make judgement calls on such delicate matters, involving moral dilemmas such as the AI equivalent of the 'Trolley Problem'. It is submitted that keeping the status quo would also be detrimental to the development of the market. A vague legal framework does not allow industry players any certainty – manufacturers will not be willing to release their products to the public, and the latter will not trust such products enough to use them.

There have been many proposals over the last few years to bridge the responsibility gap. Some proposals, such as the attribution of legal personality to AI, have been criticised as far-fetched or even morally untenable. Others argue that our law of tort is already equipped to deal with damage caused by AI. Yet, as highlighted above, considering the lacunae which AI-powered technology uncovers, inaction may be just as harmful.

Owing to its size, geographical position and weather, Malta offers an ideal landscape in which to act as an AI test bed, potentially allowing it to become a first mover and innovator in the industry. There is no need to go overboard with extensive changes and legalisms. Perhaps the best solution is a mix of the current arsenal of laws combined with the adaptation of some existing tools, such as insurance - tools which are, to a certain extent, already equipped to soften the blow of potential damages.

This piece merely exposes the responsibility gap. The best path towards bridging the gap requires another discussion altogether.

Footnotes

1. M J Mataric, The Robotics Primer (MIT Press, 2007), chapter 2.

2. For a comprehensive overview of Maltese laws applicable to AI, see: Micallef TL, 'Civil responsibility for damage caused by artificial intelligence' (University of Malta, 2016).
