The Situation: New technologies incorporating AI create questions about how product liability principles will apply and adapt.

The Issue: Legislatures and courts have not yet addressed how product liability laws will apply to new AI technologies.

Looking Ahead: Companies should consider whether the courts will treat their AI technology as a product or a service, whether and how to allocate liability in agreements, and how industry standards may influence liability for AI.

Automobiles, drones, surgical equipment, household appliances, and other products are increasingly using artificial intelligence ("AI") and, in particular, machine learning to make decisions. The promise and expectation is that AI will improve product safety. But often programmers do not know exactly how their AI will learn, change from experience, and arrive at decisions. When injuries occur, it may be difficult to determine what went wrong and who should bear liability.

Traditional tort law will likely apply to AI, with modest adaptation, just as tort law adapted to the crashworthiness of automobiles. Some vehicle manufacturers reportedly will accept liability if their AI does not prevent an accident. Absent such an agreement, courts will need to allocate fault among product manufacturers/sellers, AI designers/suppliers, and AI purchasers/users. A central issue will be whether the user controls a product merely assisted by AI or the AI completely controls the product's operation.

Another threshold question is whether an AI system is a product or a service. Strict liability applies to flaws in product design, manufacture, or warnings that cause personal injury or property damage to others; negligence applies to services, such as data analysis used to determine maintenance needs. Under the Uniform Commercial Code, mass-produced, off-the-shelf software is a "good," but software specifically designed for a customer is a service. Some courts distinguish between the thing containing the software (a product) and the information produced by the software (not a product).

Some scholars advocate applying a negligence standard to AI because AI is "stepping into the shoes" of humans. But courts may find it difficult to apply a "reasonable person" or "reasonable computer" standard. Should AI have learned to recognize a child darting out between parked cars? Should AI have elected to avoid hitting that child or an oncoming school van?

Plaintiffs typically favor strict liability for defective-product claims. They will argue that, absent product misuse, failure to install updates, or physical damage, the mere fact that a product incorporating AI caused injury or property damage may suffice to prove a defect.

Planning can reduce uncertainty. Contractual warranties, indemnities, and limitations on each may allocate liability. Companies should also consider how to demonstrate their AI's decision-making process, both generally and in specific instances. Because AI built on technologies such as neural networks can learn to perform functions and arrive at decisions beyond its original programming, companies will need to consider how to document and prove that a function was performed or a decision was made as a result of reasonable programming that met then-current industry standards or best practices. Alternatively, a company may need to rely on a state-of-the-art defense: that the product risk was not reasonably foreseeable at the time of programming. To complicate matters, depending on regulations, event recorder data may be available but not admissible to determine fault.

A risk analysis should consider consumer expectations of the performance and safety of products with AI. It will be important for companies to educate consumers about the capabilities, risks, and limitations of AI, particularly limitations on the operating domain. The risk-utility test may turn on proof that products incorporating AI performed at least as safely as their human-dependent counterparts. Testing, simulations, and field performance data across myriad foreseeable uses and misuses, as well as documented design changes to mitigate foreseeable risks, would help to demonstrate reasonable safety.

Companies should not overlook opportunities to participate in the creation of ethical, legal, and industry standards for products incorporating AI. Various organizations provide those opportunities, including the American Law Institute, the Partnership on AI, SAE International, and the National Council of Information Sharing and Analysis Centers. The U.S. Department of Transportation and National Highway Traffic Safety Administration have invited input from organizations to facilitate the development of regulations.

THREE KEY TAKEAWAYS

1. Companies should monitor how legislatures and courts shape tort law to apply to products, components, and software incorporating AI.

2. Companies should consider using contractual warranties, indemnities, and limitations to control liability risk.

3. Companies should consider participating with industry groups and government agencies to develop ethical guidelines and industry standards that reflect the benefits, risks, and limitations of products with AI.