We all remember the video from a few years ago in which (benevolent) hackers remotely seized control of an SUV, initially playing with the climate control, stereo, and screen but then interfering with more critical systems, like the brakes and the accelerator.

That video was comical because the driver was in on the experiment.  But as the "Internet of Things" (IoT) expands, and more and more devices are connected in some way to the outside world, all sorts of possibilities for security failures arise.  Manufacturers are therefore potentially exposed to both tort and regulatory liability—not just for acts and omissions during the manufacturing process, but for the failure to update existing devices as time goes by and vulnerabilities become known.

Consider, for example, the world of connected medical devices.  Security vulnerabilities in implanted devices like pacemakers, defibrillators, neurostimulators, and insulin pumps pose the most obvious risk to patient safety.  But the risk also extends to critical external devices, including computers and computerized equipment that support surgery, as well as to less critical but still sensitive devices such as automated medical supply inventory systems.  Of course, identifiable patient data may be stored across a variety of devices as well.

A Shifting Scale of Potential Dangers

The most obvious security risks to medical devices include active malicious hacking (the human equivalent of the SUV-hacking scenario), in which an attacker attempts to subvert the device's functions directly.  A slightly less alarming attack might simply alter the data used or produced by a monitor or sensor, resulting in misdiagnosis or a failure to treat.  But hacking may also take subtler forms.  Some attacks might involve stealing identities or credentials, especially as implanted devices that effectively act as personal identifiers become more common.  In the fairly near future, it may even be possible to derive location and activity data from implanted devices, just as is presently possible with wearable devices like fitness trackers.

Finally, many attacks on IoT devices aim not to disable the device or extract data from it directly, but to exploit its computing resources.  The Mirai botnet attack, for example, used a variety of low-security devices like cameras and routers to carry out "denial of service" attacks on certain websites and, ultimately, on a good portion of the internet's infrastructure itself.  Similarly, botnets of malware-infected devices can be used to mine cryptocurrency.  Of course, even if the intent is just to make money, such attacks can indirectly endanger patients by diverting a medical device's computing power to the point that the device fails to operate correctly.

The Liability Question

Given these risks, the question arises: What duties do manufacturers and sellers of IoT devices, including medical devices, have to address security threats, and what liability might they face for failing to protect such devices, including by failing to patch vulnerabilities as they come to light?

Liability could arise under a number of theories, including straightforward tort law. A 2013 article in The New Republic that surveyed the software-security litigation landscape concluded that tort law was almost entirely inapplicable because of the economic-loss rule (tort is intended to address harms to the person or property, not purely economic losses).  But that view ignores the ways that software (with all its vulnerabilities) is increasingly integrated into all manner of devices on which human life and safety depend.  For example, in Maryland Cas. Co. v. Smartcop, Inc., No. 4:11-CV-10100-KMM, 2012 WL 4344571, at *1 (S.D. Fla. Sept. 21, 2012), a county sued the maker of a GPS program for police cars under a theory of wrongful death after a sheriff's deputy died in a traffic accident.  The county alleged the company was "negligent in failing to properly maintain or update its software programs which caused the death of [the deputy]."

The threat of physical injury also brings the cybersecurity vulnerabilities in medical devices within the jurisdiction of the Food and Drug Administration, which has issued guidance on the issue.  That guidance suggests that a wide range of responses may be required when a manufacturer discovers a vulnerability: in general, the FDA views cybersecurity updates as mere "enhancements" to the product, but where an update is intended to reduce health risks, the manufacturer is technically required to file public reports on the issue.

However, the FDA has also stipulated in the same guidance that it "does not intend to enforce" the reporting requirements where there are no known serious injuries or deaths associated with the vulnerability and the manufacturer promptly develops a remediation plan, communicates effectively with its customers and end users, and "actively participates" as a member of an Information Sharing and Analysis Organization (ISAO) to share information about security threats.

In the second part of this post, however, we'll look at another federal agency that has taken a very active role in policing companies' cybersecurity practices, and appears ready to police security on the Internet of Things as well: the Federal Trade Commission.
