Article by Debra Littlejohn Shinder

In the news this month: Sony Pictures settled a class action lawsuit filed on behalf of current and former employees in relation to the 2014 security breach that exposed their personal information. This was the case where a "hacktivist" group was suspected of infecting Sony's systems with malware in retaliation for releasing a satirical movie about Kim Jong-un.

Every week, there are reports of more security breaches at banks, stores, healthcare organizations and more. But who is to blame for this? 

The obvious answer is that it's the hackers and attackers who set out to gain unauthorized access to companies' (and individuals') computers and stored or transmitted data, and I don't think anyone would argue with the premise that they have primary responsibility. However, there is a longstanding and steadily increasing trend toward putting more and more of the liability on the companies that collect and store that data, for failing to secure it properly.

I recently ran across a new survey indicating that shoppers blame retailers for security breaches involving their credit card information, with 40 percent saying they avoid stores that have been hit by security breaches. Rather than seeing the stores as fellow victims of the attackers, they believe it's the retailers' responsibility to keep the bad guys from successfully hacking the retail records.

Is this a case of "blame the victim" or are they right that by accepting and storing our sensitive information, retailers (and other businesses and government agencies) take on an absolute legal and moral accountability for what happens to that information?  In this article, I want to look at some of the most recent breaches and explore that topic more thoroughly.

Data breach lawsuits are costing companies millions of dollars. They differ a little from traditional civil lawsuits, in which the plaintiff usually has to show that the defendant's action (or inaction) caused some actual harm. In the Sony case, privacy attorneys noted that there had been no indications of actual harm; in digital data breach cases, claims rest more on the potential for future harm. Once a victim's information is out there, it could be used years or decades later for identity theft or other illegal purposes.

Stolen data isn't always used directly by the thieves who take it. Frequently it's put up for sale on the "darknet" – that part of the Internet that operates under the radar, not accessible through standard search engines, where a kind of digital black market flourishes. There is a big global demand throughout the criminal underworld for personal data, especially in Russia, China and Brazil. According to Trend Micro, the going rates for web site account credentials, credit card credentials and business application account credentials range from $19 to $193 per record.

Security guru Brian Krebs noted last year that the criminal groups operating on the dark web are becoming increasingly innovative and professional, even creating loyalty programs to reward their repeat customers and using analytics to pinpoint their buyers' preferences as to types of stolen credentials. The San Francisco Chronicle reported earlier this year that some of the victimized companies, including PayPal, are actually buying back the stolen data to ensure that it doesn't fall into the hands of those who want to illegally use it.

And no wonder companies are resorting to such extreme measures, when a breach can cost them millions in lawsuits, damage their reputations, and in some cases subject them to government fines and other consequences for failing to adequately protect their data.

In the wake of the Target, Home Depot, and JP Morgan Chase data breaches in 2013 and 2014, legislators rushed in with new laws aimed at preventing or mitigating such incidents. California passed AB 1710, requiring organizations doing business with California residents to "implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the Personal Information from unauthorized access, destruction, use, modification, or disclosure." This places a legal liability on those businesses for damages incurred by their customers due to a breach.

Some retailer advocates have raised the question of whether this is akin to the old practice of blaming rape victims for their rapists' actions. Should the fact that poor (or simply imperfect) security practices allowed criminals to steal the data mean the primary responsibility lies with the legitimate holder of that data? In the frenzy to make businesses more accountable for security, it's almost as if the real criminals in these cases are being forgotten.

A more appropriate analogy might be this: If you put your valuables in the hands of a courier to deliver to someone else, and the courier is mugged along the way and your possessions are stolen, should the courier be liable for your loss? The courier was very much a victim of the crime; on the other hand, he entered into a contract with you to safely deliver the goods and failed to do so. If he took "reasonable security precautions" (didn't go through a known dangerous part of town, didn't advertise that he was carrying valuable items, didn't dither but proceeded straight from point A to point B) and was victimized in spite of it, does that make a difference? And who gets to decide what the threshold is for whether security measures are "reasonable," anyway? Can you say, "Well, he should have rented an armored car," or "He should have traveled accompanied by an armed guard"?

And what about your own responsibility in the matter? Do you have any? Should you have been more careful about which courier company you used (or more careful about which retailers and other organizations you trusted with your data)? Or should you have the reasonable expectation that your items/data would be safe with a popular, reputable company used by thousands or millions of people?

I think most would agree that the security of our data is a shared responsibility. As individuals, we should think about what information we share, and with whom. But in today's society, we often don't have the choice to keep our personal info private. We're required by law to share it with government agencies (and in some cases with private organizations). We have to have bank accounts and credit cards to function in an increasingly cash-hostile environment. We have to divulge intimate details of our lives in order to get medical treatment, or go to school, or get a job. And we should have the right to expect that those organizations will do everything they possibly can to keep our information safe.

What happens when those efforts fail? It's easy to hold companies liable for actual monetary damages that we suffer, but how can they compensate for things like the damage to our reputations or credit ratings, or the hours of time that we may have to spend to straighten out the nightmare of identity theft? Ultimately we know it will be the insurance companies that will pay – and raise rates for everyone as a result.

What's the price we're willing to pay for protection of our privacy? Better security costs money, and those costs will be passed along to us in the form of higher prices, higher taxes, and extra fees and surcharges. How much security can we afford? At what point does that extra cost outweigh the further reduction in risk? Given the sophistication of today's hackers and attackers and their ability to stay ahead of our security mechanisms, is resistance futile?

I don't pretend to have the answers to these questions, but as we look at the trends in data breaches and in laws governing liability for them, these are questions we need to be asking. The answers will have a profound impact on our businesses, our personal lives and our jobs as IT professionals.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.