Editors' Note: This is the fourth in our third annual series examining important trends in data privacy and cybersecurity during the new year. Our previous entries were on state law trends, comparing the GDPR with COPPA, and energy and security. Up next: cryptocurrency.

Predicting the future is always a bit of a mug's game, given that today's bold claims about what is coming next often end up being served as tomorrow's "claim chowder," to use John Gruber's memorable phrase. Despite the risks in doing so, here are three emerging privacy and cybersecurity threats that seem likely to create headlines (and billable hours for attorneys) in the year to come.

Hardware Security Flaws, By Accident and By Design

2018 was the year that concerns about security vulnerabilities in hardware really came to the fore. It was the year that the world learned of the Spectre and Meltdown design flaws afflicting nearly every microprocessor manufactured in the last 20 years, but also the year that we seriously confronted the possibility that global electronic supply chains are vulnerable to state-level actors introducing security flaws into equipment during the manufacturing process. The accuracy of the Bloomberg News story alleging that Chinese spies implanted chips onto motherboards manufactured in that country by U.S.-based Supermicro has been hotly contested, yet the story demonstrates how easy it would be for an adversary possessing privileged access to the supply chain to introduce hardware flaws into devices. Indeed, the concern that devices and equipment manufactured by Chinese telecommunications companies such as Huawei and ZTE contain vulnerabilities is the key reason why several Western governments—including the United States and Australia—have imposed bans on the use of these companies' products in various parts of their networks.

Given the central role China plays in global electronic supply chains and the growing mistrust of the products manufactured by its "national champions" in much of the world, 2019 might well be the year that we see substantial efforts to secure these supply chains against malicious interference. Interestingly, there is much scope for such efforts to build on the work that has been done over the last 20 years to audit, assess, and address the social and environmental impacts of supply chains. Such assurance systems could be adapted to this new purpose, though it will take a great deal of cooperation among competitors who use the same suppliers and components to develop effective measures.

A key question will be how the Chinese government reacts to this growing problem and any efforts to solve it. Will the Chinese leadership see it as being in their strategic interest to be a trusted supplier of products and services to the global market? Or will they find that their geopolitical aims (from their "Made in China 2025" policy to the "One Belt, One Road" initiative) are better served by exploiting their current position as the "world's factory," regardless of the long-term costs?

Encryption Policy: From Bad to Worse

Another major risk on the horizon comes from understandable yet ultimately ill-advised government moves to regulate encryption—such as by mandating the inclusion of backdoors into encrypted systems to permit lawful access. For the better part of the last five years, some version of the "Going Dark" debate has been raging, wherein law enforcement and intelligence officials complain about their investigative efforts being stymied by the growing prevalence of encrypted devices and services. This debate reached a fever pitch here in the U.S. back in 2016, when the Obama Administration sought to compel Apple to help it decrypt an iPhone belonging to the perpetrator of a mass shooting, invoking the authority of the All Writs Act of 1789. In that case, as in many others, the government was ultimately able to find a way into the encrypted device because security software, like everything else produced by human hands, inherently contains flaws and imperfections that can be exploited.

Yet it is the fact that all software contains security flaws that points to the dangers of legislative proposals—such as the measure recently enacted by the Australian Parliament—that would require technology companies to provide government agencies with access to encrypted communications. No reasonable person would deny that security threats need to be detected, that crimes need to be investigated, and, more generally, that no one and nothing should be beyond the reach of fair and just legal process. That said, the notion that we can improve our security against crime, terrorism, and other threats by weakening or restricting encryption fails to account for the security risks inherent in doing so. To paraphrase Bruce Schneier, the trade-off in weakening or restricting encryption is not between security and privacy, but rather between more or less security against different kinds of threats. While the pervasiveness of encryption in our society has some very significant negative consequences, the threat posed by weakening encryption is far worse—given that so many mission-critical systems in our society (from healthcare to utilities to defense) operate using the same commodity hardware and software.

Even so, pressure has been building in a number of jurisdictions to enact regulations restricting the use of encryption, or requiring the providers of encryption technologies to give governments various forms of assistance in decrypting data—from best-efforts assistance to mandated backdoors. Now that Australia has enacted such legislation, 2019 may well be the year that efforts in other leading industrialized countries begin to gain ground—with serious consequences for us all.

AI and Privacy

2018 was also the year that hype about AI reached a fever pitch. There are breathless predictions everywhere about how AI will transform society. Many of these predictions are dystopian, from the potential for killer robots to run amok to the possibility that automation will put millions of people out of work, but there is the occasional glimmer of hope, such as in stories of how AI systems are routinely beating the best doctors at diagnosing certain diseases.

Regardless of whether you're an AI optimist or a pessimist, there's no getting around the fact that AI is a data-hungry technology. Current machine learning techniques are premised on feeding algorithms vast amounts of data, from which they identify patterns and correlations that are then used to make predictions. This is true of everything from the algorithms that power autonomous vehicles (which learn how to drive the car from petabytes of training data) to those underlying credit scoring models (which weigh an array of financial data points to judge your creditworthiness).
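To make the point concrete, here is a minimal, purely illustrative sketch of the pattern described above: a toy credit-scoring classifier is trained on a synthetic dataset of financial features and then used to make predictions about new individuals. The feature names, the synthetic data, and the choice of a scikit-learn logistic regression are assumptions made for illustration only, not a description of any real product.

    # Illustrative only: a toy "credit scoring" model trained on synthetic data.
    # The features and model choice are assumptions, not taken from the article.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic training data: each row is [income, debt_ratio, years_of_credit_history]
    X_train = rng.normal(size=(1000, 3))
    # Synthetic labels (1 = "good credit"), produced by a hidden rule the model must learn
    y_train = (X_train[:, 0] - X_train[:, 1] + 0.5 * X_train[:, 2] > 0).astype(int)

    # The model identifies patterns and correlations in the training data...
    model = LogisticRegression().fit(X_train, y_train)

    # ...and uses them to make predictions about new, unseen individuals.
    X_new = rng.normal(size=(5, 3))
    print(model.predict(X_new))        # predicted class: 0 or 1
    print(model.predict_proba(X_new))  # predicted probability of each class

The point of the sketch is simply that the model's predictive power comes entirely from the data it is fed; the more (and more personal) the data, the sharper the predictions.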

There are obvious challenges associated with ensuring that existing privacy laws are respected when data subject to those laws is fed into an AI system—whether for training or for analysis. What is much more difficult to deal with, however, is the way in which AI-powered techniques can take data in which an individual has no privacy rights and use it to generate powerful predictions about that individual.

The capture of the "Golden State Killer" in California last year exemplifies the challenge. By running DNA samples that had been collected at crime scenes decades earlier against online genealogical databases, the police were able to determine that the suspect bore specific degrees of consanguinity with other individuals in those databases. This allowed the police to narrow the pool of potential suspects down to the individual who was ultimately arrested.

What is not yet widely appreciated is that the same techniques used to nab the Golden State Killer can be used to generate powerful predictions about other aspects of our lives from data belonging to the people around us. Much can be predicted about my health, my finances, and a multitude of other characteristics by looking at data from my spouse, my children, my close relatives, or my good friends. Since the data being used to generate predictions and insights about me fundamentally pertains to other people, however, existing privacy laws offer me few protections against such uses.

These are emerging challenges that current data privacy frameworks are simply not equipped to handle. In the long run, government regulation may be required to give individuals privacy protections in information that pertains to others but nonetheless reveals something fundamental about them. In the meantime, however, companies operating in this space would do well to seek wise counsel on how to use such data in a socially responsible manner, so as to avoid problems later.

