DEVELOPMENTS AT THE FEDERAL TRADE COMMISSION

Heather Egan Sussman, Carla A. R. Hine and Evan D. Panich

In March 2012, the Federal Trade Commission (FTC) issued its final report "Protecting Consumer Privacy in an Era of Rapid Change", which describes its framework for protecting consumer privacy. The framework serves as a best practices guide for companies and as a roadmap for the U.S. Congress as it considers privacy legislation, and focuses on the following three elements:

"Privacy by Design: Build in privacy at every stage of product development;

Simplified Choice for Businesses and Consumers: Give consumers the ability to make decisions about their data at a relevant time and context, including through a Do Not Track mechanism, while reducing the burden on businesses of providing unnecessary choices; and

Greater Transparency: Make information collection and use practices transparent."

The report recommends that Congress enact legislation related to general privacy, data security and breach notification, and data brokers. While legislation is under consideration, the FTC has focused on five primary action items to implement its privacy framework:

  • Implement "Do-Not-Track" mechanisms
  • Improve privacy protections and disclosures for mobile services, which the agency explored further during its May 2012 workshop and in its guidance for mobile application developers (both of which are discussed below)
  • Improve transparency into data brokers' practices
  • Explore heightened privacy concerns related to large platform providers (that may comprehensively track consumers' online activity across the internet) by hosting a public workshop in December 2012
  • Develop self-regulatory codes of conduct with the U.S. Department of Commerce, industry and other stakeholders

The FTC explicitly does not intend for the report to serve as a template for enforcement actions (at least to the extent the framework exceeds existing legal requirements). In the meantime, the FTC continues to enforce Section 5 of the FTC Act, which prohibits "unfair or deceptive" trade practices. The FTC's cases in this area center on (1) companies' misrepresentations in their privacy policies about how they collect, use or maintain consumers' personal data; (2) companies' failures to reasonably maintain the security of consumers' data, resulting in data breaches and harm to consumers; and (3) deceptive, and in some cases unfair, gathering of consumer data without consumers' knowledge.

These cases, as well as the FTC's workshops and public guidance, suggest that the agency is focused on new vulnerabilities (e.g., data brokers, social networking and large platform providers) and developing technologies (e.g., peer-to-peer networks, Flash cookies and mobile applications). The FTC's actions also indicate a willingness to use all the tools in its toolbox to bring a single action under multiple statutes, including the Gramm-Leach-Bliley Act (GLB Act), the Children's Online Privacy Protection Act (COPPA) and the Fair Credit Reporting Act (FCRA). Throughout 2013, observers can expect the FTC to continue in this same fashion, and further guidance will likely follow December 2012's large platform workshop.

FTC Enforcement Actions Based On Privacy Policy Content

Several FTC enforcement actions focus on companies' allegedly deceptive failures to live up to their privacy policies. The FTC followed its first behavioral advertising case, brought in 2011 against Chitika, with its first Flash cookies case. ScanScout, Inc., an online advertiser that used both HTTP and Flash cookies to track users' online activity, settled allegations that its privacy policy misled consumers by stating they could block the company's use of all cookies, even though Flash cookies cannot be blocked through internet browser settings. The FTC finalized its settlement with ScanScout in late 2011, requiring the company to provide consumers with an opportunity to opt out of targeted advertising, and requiring that the consumer's choice last at least five years (unless the consumer changes it). The settlement also required disclosures as to how the company uses consumer information, and reports regarding ScanScout's compliance with the order.

The FTC announced a settlement with Facebook in November 2011 regarding changes in Facebook's privacy policies that applied to current users. The changes resulted in users automatically sharing information and pictures, even if they had previously programmed their privacy settings to hide the content. As part of the settlement, which was finalized in August 2012, Facebook must obtain users' consent before implementing changes that would override existing privacy preferences, prevent anyone from accessing a user's data more than 30 days after the user deleted their account, establish and maintain a comprehensive privacy program, and obtain independent third-party audits of its privacy program every two years for the next 20 years.

Also in the social networking space, the FTC filed a complaint in March 2012 against social networking website MySpace alleging that the company's privacy policy misrepresented how it would use consumers' personally identifiable information. MySpace's policy stated that it would not share users' information or use the information in a way that was inconsistent with the purpose for which the consumer provided it without first informing the user and receiving his or her consent. However, without obtaining consent, MySpace provided users' personally identifiable information to advertisers. The company settled the FTC's complaint by agreeing not to misrepresent its privacy practices, to establish a comprehensive privacy program and to submit its privacy program for review by third-party auditors every two years.

The FTC is not simply entering into consent orders with companies, but is actively monitoring and enforcing these consent orders.

Enforcement Actions Related To Data Breaches

Although state attorneys general have initiated many data breach enforcement actions, the FTC also has jurisdiction to pursue claims related to data breaches through the FTC Act and the GLB Act.

In June 2012, the FTC filed a complaint against Wyndham Hotels and Resorts over alleged data security failures that the FTC claims led to three data breaches. The FTC's complaint alleges that Wyndham's failure to protect its customers' data was unfair and deceptive in violation of Section 5 of the FTC Act. In August, Wyndham filed a motion to dismiss challenging the FTC's authority to bring certain claims pursuant to Section 5 of the FTC Act. The case is significant in that it could define the scope of the FTC's enforcement authority going forward in the realm of privacy and data security.

In the same month that it filed the Wyndham complaint, the FTC announced separate settlements with debt collector EPN, Inc., and auto dealer Franklin's Budget Car Sales, Inc. In both cases, the FTC alleged that the companies failed to implement reasonable security measures to protect personal information. Each company allowed peer-to-peer file-sharing software to be installed on its network, exposing sensitive data, including Social Security numbers, health insurance numbers, medical diagnosis codes of hospital patients and sensitive financial information, to any computer connected to the peer-to-peer network. Under both settlements, the companies must implement comprehensive security programs and undergo independent third-party audits for the next 20 years. Further, Franklin's sells and leases cars and provides financing for its customers, and is therefore a "financial institution" under the GLB Act. Franklin's failure to protect consumers' financial information also violated the Safeguards Rule under the GLB Act, which requires financial institutions to ensure the security and confidentiality of personal information such as names, addresses and phone numbers, bank and credit card account numbers, income and credit histories, and Social Security numbers.

Deceptive Gathering Of Data On Consumers

Two of the FTC's cases in 2012 addressed allegations of deceptive, and in some cases unfair, gathering of data about consumers without their knowledge. In September 2012, in what are referred to as the DesignerWare cases, the FTC settled complaints alleging that DesignerWare, LLC, and seven rent-to-own companies unfairly and deceptively spied on consumers using computers that the consumers rented from them by capturing screenshots of confidential and personal information, logging consumers' computer keystrokes and taking webcam pictures of people in their homes, without notice to, or consent from, the consumers. DesignerWare licensed software to rent-to-own stores that could disable a computer if it were stolen or if the consumer did not make timely payments. An additional program known as "Detective Mode" purportedly helped to locate the computer and collect late payments. However, when activated, Detective Mode logged keystrokes, capturing information such as user names and passwords for e-mail accounts, social media websites and banks; Social Security numbers; medical records; bank and credit card statements; and webcam pictures of children and private activities. The consent orders ban DesignerWare and the rent-to-own stores from using monitoring software and deceptively gathering information from consumers. Further, the rent-to-own stores are prohibited from using the improperly gathered information in connection with debt collection.

In December 2012, the FTC announced a settlement with Epic Marketplace Inc., an online advertising company, after investigating claims that it used "history sniffing" to secretly gather data from consumers about their interest in sensitive medical and financial topics. Epic's privacy policy stated that it would collect information only about users' visits to sites within its network. However, it allegedly employed history-sniffing technology that allowed Epic to collect data about consumers' visits to sites outside of its network. The consent order prohibits Epic from using history sniffing and requires it to destroy the data it collected using the technology.
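The complaint does not reproduce Epic's code, but the mechanism behind "history sniffing" is well documented: a page styles visited links differently from unvisited ones, then reads the rendered style back through JavaScript to infer which URLs are in the browser's history. Below is a minimal, hypothetical sketch of that classic CSS :visited technique (the probe URLs and styling are invented for the example, and major browsers have since patched getComputedStyle to close this leak):

```typescript
// Hypothetical sketch of classic CSS :visited history sniffing.
// Not code from the Epic Marketplace case; modern browsers now report the
// unvisited style for :visited links, so this particular leak is closed.

// Style probe links so visited and unvisited states render differently.
const style = document.createElement("style");
style.textContent =
  "a.probe:link { color: rgb(0, 0, 255); } a.probe:visited { color: rgb(255, 0, 0); }";
document.head.appendChild(style);

// URLs a sniffer might test; invented for this illustration.
const probeUrls = [
  "https://medical-site.example/diabetes-treatment",
  "https://lender.example/debt-consolidation",
];

function wasVisited(url: string): boolean {
  const link = document.createElement("a");
  link.className = "probe";
  link.href = url;
  document.body.appendChild(link);
  // If the computed color matches the :visited rule, the URL is in history.
  const visited = getComputedStyle(link).color === "rgb(255, 0, 0)";
  document.body.removeChild(link);
  return visited;
}

// The sniffed results could then be reported to the tracker's server
// without any notice to the user.
const visitedUrls = probeUrls.filter(wasVisited);
console.log(visitedUrls);
```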

Mobile Platforms

On May 30, 2012, the FTC hosted a one-day workshop to address whether industry needed new guidance on advertising and privacy disclosures in online and mobile environments. The FTC first issued its online advertising disclosure guidelines, the "Dot Com Disclosures", in 2000. Following the workshop, the FTC published a guide directed toward mobile app developers, "Marketing Your Mobile App: Get It Right from the Start". The guide applied the same principles articulated in the FTC's March 2012 omnibus privacy report to the mobile environment, including privacy by design, simplified disclosures regarding data collection and use, consumer choice and maintaining data security.

COPPA

After two rounds of proposed amendments to COPPA in 2011 and 2012, the FTC adopted final amendments in December 2012. COPPA, which was revised for the first time since it was implemented in 2000, gives parents control over what personal information websites can collect from children under the age of 13, and limits the amount of data websites can collect and use about children. The FTC's amendments are meant to address how the internet has changed since COPPA was first enacted—before the proliferation of social networking sites, mobile devices, mobile applications and third-party tracking cookies. Among the changes are broader definitions of "operator" and "website or online service directed to children," which effectively capture third parties, such as advertising networks or downloadable software kits, that collect personal information from users through child-directed websites or services. To address heightened concerns related to large platform providers, the changes also modify the definition of "personal information" to include geolocation information and clarify that a persistent identifier is considered personal information where it can be used to recognize a user over time and across different sites or services and is used for purposes other than support for internal operations.

In addition to working on the proposed amendments throughout 2012, the FTC actively enforced COPPA. In October 2012, Artist Arena, an operator of fan websites for celebrities such as Justin Bieber, Rihanna, Demi Lovato and Selena Gomez, agreed to pay $1 million to settle FTC charges that it illegally collected and maintained personal information from children under age 13. Artist Arena enabled users, including children, to register for fan clubs, subscribe to online newsletters, create online profiles and interact with other users. The FTC's complaint alleged that Artist Arena knowingly registered children, failed to provide direct notice to parents or obtain consent from parents, and made false and misleading statements that it would not collect personal information from children or activate a child's registration request without the parent's consent. Artist Arena also agreed to delete the information collected in violation of the rule.

In March 2012, RockYou, an operator of a social gaming site, agreed to settle the FTC's charges that it violated Section 5 of the FTC Act by failing to protect the privacy of its users by not maintaining reasonable security procedures. Additionally, the FTC alleged that RockYou illegally collected information from approximately 179,000 children without their parents' consent in violation of COPPA. The settlement required RockYou to implement a data security program, submit to third-party audits every other year for 20 years and delete information collected from children under age 13. RockYou agreed to pay a $250,000 civil penalty in connection with the alleged COPPA violations.

In February 2012, the FTC Staff issued its report "Mobile Apps for Kids: Current Privacy Disclosures are Disappointing". Staff described the report, along with recent enforcement actions, as "a warning call to industry that it must do more to provide parents with easily accessible, basic information about the mobile apps that their children use." For the report, Staff surveyed mobile apps directed at children and found little information about the data collection and sharing practices of the apps it reviewed. The FTC recommended that app developers provide information regarding data collection and use through simple, short disclosures, and alert parents if the app connects with social media or delivers targeted advertising, so users can make informed decisions prior to downloading. Staff recommended extending these guidelines to third parties, and also called on app stores to better enable these disclosure efforts. In the report, the FTC also announced its intent to "conduct an additional review to determine whether there are COPPA violations and whether enforcement is appropriate."

The FTC Staff issued a follow-up report in December 2012 on "Mobile Apps for Kids: Disclosures Still Not Making the Grade". The report detailed the results of the agency's second survey of children's mobile apps, finding that "parents still are not given basic information about the privacy practices and interactive features of mobile apps aimed at kids." The report urges all entities within the mobile app industry, such as app stores, app developers and third parties providing services within the apps, to ensure that parents have the information they need to make decisions about what apps they download for their children. The FTC also urges the industry to implement the recommendations in its privacy report, including incorporating privacy by design; simplified choice regarding data collection and sharing through kids' apps; and greater transparency about how data is collected, used and shared through kids' apps. The FTC noted that it is developing a consumer education piece and launching non-public investigations into whether participants in the mobile app industry are violating Section 5 of the FTC Act by engaging in unfair or deceptive trade practices, or violating COPPA.

Enforcement Actions Under The FCRA

During 2012, the FTC brought several enforcement actions under the FCRA, which was enacted to promote the accuracy, fairness and privacy of information maintained by consumer reporting agencies (sometimes incorrectly referred to as credit reporting agencies). So-called "consumer reports" are broadly defined in the FCRA as information "bearing on a consumer's credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer's eligibility for (a) credit or insurance . . . ; (b) employment purposes; or (c) any other purpose authorized under [15 U.S.C. § 1681b]." The FCRA imposes certain requirements on consumer reporting agencies and information providers, as well as those using the information (such as employers, insurers and landlords).

In June 2012, the FTC brought its first case against a data broker. Spokeo, Inc., a data broker that compiles and sells detailed information profiles on consumers, agreed to settle claims that it violated the FCRA and the FTC's Endorsement Guidelines (which interpret FTC Act Section 5's prohibition on "unfair or deceptive" trade practices as it pertains to endorsements). Under the settlement, Spokeo agreed to pay $800,000 in civil penalties and has entered into an order prohibiting it from violating certain FCRA provisions and misrepresenting the status of any user or endorser of its product or service.

Notably, this is the FTC's first case to address the sale of information collected through the internet from social media for use in the employment screening context, indicating that the agency is watching not only what social media companies are doing with users' information, but also how data brokers are using information collected through social media sites.1 Spokeo collected information about individuals from online and offline sources to create profiles that included contact information, marital status and age range, and in some cases included a person's hobbies, ethnicity, religion, participation on social networking sites and photos that Spokeo attributed to a particular individual. Spokeo marketed these profiles to companies in the human resources, background screening and recruiting industries as information to serve as a factor in deciding whether to interview or hire a job candidate. As such, Spokeo acted as a consumer reporting agency, and the FTC alleged that Spokeo violated the FCRA by (1) failing to ensure the consumer reports it sold were used for legally permissible purposes, (2) failing to ensure that the information it sold was accurate and (3) failing to inform users of Spokeo's consumer reports of their obligations under the FCRA.

The complaint also alleges that Spokeo violated Section 5 of the FTC Act by directing its employees to post deceptive endorsements of its consumer reports as Spokeo customers (instead of disclosing that the endorsements were posted by Spokeo's own employees). The order requires that Spokeo remove (or request removal of) deceptive endorsements of its products, whether on its own website or third-party websites.

This case, while about the FCRA and Endorsement Guidelines specifically, echoes the enhanced scrutiny for data brokers (and large platform providers and mobile app developers) evident in the FTC's March 2012 privacy report.

Following Spokeo, the FTC announced a settlement in August 2012 with HireRight Solutions, Inc., an employment background screening company that provides consumer reports to companies. The FTC alleged that HireRight violated the FCRA by failing to use reasonable procedures to ensure the accuracy of the information it provided in its consumer reports, failing to give consumers copies of their reports and failing to reinvestigate consumer disputes, all of which the FCRA requires. The FTC's case was the first time the agency charged an employment background screening firm with violating the FCRA. HireRight agreed to pay a civil penalty of $2.6 million.

In October 2012, the FTC announced a settlement with Equifax Credit Information Services, LLC, one of the largest consumer reporting agencies, and its associated businesses. The FTC's complaint alleged that Equifax violated the FCRA by improperly selling lists of consumers who were late on their mortgage payments, and the FTC Act by failing to adequately protect sensitive consumer financial information. The buyers of this information then resold it to third parties that used it to market products to consumers in financial distress. Equifax agreed to pay $393,000 to settle the FTC's allegations, while the purchaser of the prescreened lists, Direct Lending Source, Inc., will pay a $1.2 million civil penalty.

TRENDS IN GOVERNMENT INVESTIGATIONS OF DATA BREACHES

David Gacioch and Matt Turnell

Several states updated their data breach notification laws in 2012. Under newly enacted Vermont law, companies must notify consumers within 45 days of discovery of the data breach and must notify the Vermont Attorney General within 14 days of discovery or at the time customers are notified, whichever is sooner. Similarly, Connecticut amended its data breach notification law to require notification to the Connecticut Attorney General when customers are notified. The Illinois Attorney General also issued new guidance on Illinois' data security law, which was amended in 2011. The new guidance covers data storage and protection, employee training and data breach response.

The year 2012 also saw some notable settlements of state-based enforcement actions. In May, the Massachusetts Attorney General announced a $750,000 settlement with a hospital resulting from a data breach reported in July 2010. The breach consisted of the loss during shipping of unencrypted computer backup tapes allegedly containing personal financial and protected health information of hundreds of thousands of patients. There was no evidence as of the time the settlement was announced that any of the lost data actually had been misused. In addition to the monetary component (which included $275,000 worth of credit for enhanced security measures the hospital undertook following the breach), the consent judgment-based settlement required the hospital to make further improvements in its policies and procedures, and to arrange for an independent audit of its compliance.

In October 2012, the California Attorney General settled a data-breach-related enforcement suit against a major California health insurance carrier by means of stipulated judgment. The Attorney General alleged that the company had improperly included customers' Social Security numbers on more than 30,000 mailings sent out between April 2011 and March 2012—and that those Social Security numbers were visible through the outer envelopes of the mailings. (The company did not admit any of the allegations as part of the settlement.) The settlement requires the company to implement new technological safeguards, enhance data security training, and pay a combined $150,000 to a private plaintiff and the State of California.

The U.S. Department of Health and Human Services, Office for Civil Rights (OCR) also entered into at least four major settlements of HIPAA-based enforcement actions in 2012. The settling entities included a cardiac surgery practice in Phoenix ($100,000 settlement), a major health insurance provider in Tennessee ($1.5 million), a hospital in Massachusetts ($1.5 million), and the Alaska Department of Health and Human Services ($1.7 million). The three largest (by dollar value) of these four settlements stemmed from reported data breaches involving lost or stolen computers/electronic storage media allegedly containing electronically stored protected health information. The fourth resulted from the alleged posting of protected health information on a publicly accessible, internet-based appointment calendar. All of the settlements included corrective action plans—some with third-party compliance monitoring—in addition to monetary payments. The insurance provider settlement is notable in that it resolved the first OCR enforcement action to result from a breach notification required by the HITECH Act's Breach Notification Rule.

PRIVACY CLASS ACTIONS AND THE HARM THRESHOLD

Jason Crow, Jason Casero and John Kocoras

Notable Privacy Class Actions In 2012

In 2012, courts across the United States were busy adjudicating consumer class actions alleging invasions of online privacy. The number of these cases has been on the rise, a fact that should be no surprise considering the explosive growth of the internet as a retail and social media platform. According to the U.S. Census Bureau, retail sales on the internet in 2012 neared $200 billion, and as of October 2012, Facebook alone had more than 1 billion active users per month. These staggering numbers represent millions of online transactions, including millions of people entering their personal e-mail and home addresses, credit card numbers, birthdates and other personal data into company servers.

The majority of privacy class actions filed in 2012 can be divided into two categories: invasion of online privacy cases and data breach cases. The first type of privacy claim arises when a company collects, uses and discloses an individual's personal information without the individual's knowledge or consent. Common allegations include violations of terms of use agreements; leasing, selling and improper disclosure of personal information from social media websites; and improper use of internet browser tracking technologies, such as Flash cookies (a cookie that regenerates itself when deleted), browser history sniffing code and online behavioral analysis.
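To make the Flash cookie mechanism concrete, the following is a minimal sketch of the "respawning" logic in TypeScript. It is a hypothetical illustration, not code from any of these cases: readIdFromFlashLSO stands in for reading a Flash Local Shared Object (historically exposed to JavaScript via ActionScript's ExternalInterface) and is not a real browser API.

```typescript
// Hypothetical sketch of Flash-cookie "respawning." The durable copy of the
// tracking ID lives in a Flash Local Shared Object (LSO), which browser
// cookie controls did not clear.

// Assumed Flash bridge, not a real browser API; stands in for an
// ActionScript SharedObject read exposed through ExternalInterface.
declare function readIdFromFlashLSO(): string | null;

function getCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function respawnTrackingId(): string | null {
  let id = getCookie("tracking_id");
  if (!id) {
    // The HTTP cookie was deleted; restore it from the Flash copy,
    // silently undoing the user's attempt to clear tracking.
    id = readIdFromFlashLSO();
    if (id) {
      document.cookie =
        `tracking_id=${encodeURIComponent(id)}; max-age=31536000; path=/`;
    }
  }
  return id;
}
```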

The second type of privacy claim arises when an organization's records are compromised in a data breach, i.e., where secure personal information is unintentionally disclosed without an individual's authorization. Common allegations include payment card fraud, computer hacking or unauthorized installation of malware, and failure to secure data on company laptops or storage devices. As of December 2012, the Privacy Rights Clearinghouse estimated that businesses, including financial services, insurers and health care organizations, had experienced a total of 373 data breaches of electronically stored information, involving more than 8.5 million records. Among government, non-profit and educational institutions, the Clearinghouse estimated there had been 534 breaches involving more than 26 million records.2

Common theories of liability in both types of privacy class actions include unjust enrichment, negligence, negligence per se, breach of contract, breach of implied contract and breach of fiduciary duty. Class action plaintiffs have also alleged violations of state consumer protection statutes such as Mass. Gen. Law ch. 93 § 105(a) or Mass. Gen. Law ch. 93A § 9, or federal statutes such as the Wiretap Act, the Computer Fraud and Abuse Act, and the Stored Communications Act. These claims have relied on various theories of harm, including loss of time to monitor and fix credit, personal information as property, increased risk of identity theft and emotional distress.

In 2012, class action plaintiffs in invasion of online privacy or data breach claims continued to struggle to quantify damages and define "harm" sufficient to survive a motion to dismiss, notwithstanding the creativity of the plaintiffs' bar. Plaintiffs in invasion of online privacy cases, for example, continued to unsuccessfully attempt to equate harm with violations of state statutes. In Tyler v. Michaels Stores, Inc., 840 F. Supp. 2d 438, 440 (D. Mass. 2012), the court held that the plaintiff did not have standing because her allegations that Michaels illegally requested customers' zip codes when processing their credit card transactions did not constitute "economic loss" as required by the Massachusetts consumer privacy protection statute.

Despite the recent success of class action plaintiffs in Anderson v. Hannaford Bros. Co., 659 F.3d 151, 162-67 (1st Cir. 2011) (holding that plaintiff's allegations of mitigation damages, including card replacement costs and identity theft insurance, arising from the theft of millions of credit and debit card numbers were sufficient to survive a motion to dismiss), plaintiffs in data breach cases continued to struggle to clear the harm hurdle. For example, in Katz v. Pershing, LLC, 672 F.3d 64, 69-70, 78-80 (1st Cir. 2012), the plaintiff, a brokerage account holder who used the brokerage-firm-defendant's online software to manage and monitor her investments, alleged that the brokerage firm's failure to adhere to privacy regulations increased her risk of harm associated with the potential unauthorized disclosure of her personal information. The court affirmed dismissal of the case for lack of standing, reasoning that "the risk of harm that [plaintiff] envisions is unanchored to any actual incident of data breach. This omission is fatal: because she does not identify any incident in which her data has ever been accessed by an unauthorized person, she cannot satisfy Article III's requirement of actual or impending injury."

The challenge of articulating and quantifying harm appears not only at the motion to dismiss phase of litigation, but also during the class action settlement approval phase. In Fraley v. Facebook, 2012 U.S. Dist. LEXIS 116526, at *7-16 & n.4 (N.D. Cal. Aug. 17, 2012), Judge Richard Seeborg, sitting in federal district court in San Francisco, rejected a proposed $20 million class action settlement over Facebook's use of 70 million users' names and likenesses in its "Sponsored Stories" service because the parties' methodology for calculating the harm suffered by class members was flawed, stating in a footnote that a calculation based on the revenue Facebook derived from utilizing members' names and likenesses might serve as a method to estimate past actual damages. After painstakingly describing the deficiencies in the parties' methodology and calculations, Judge Seeborg asked, "are some class actions simply too big to settle?"

Major developments in the U.S. privacy legal landscape in 2012 may spur "private" privacy litigation in 2013 that will likely continue in 2014. Specifically, the development of enforceable codes of conduct and a consumer privacy "Bill of Rights" by the White House and the FTC that expand consumers' rights may provide new bases for class action plaintiffs to establish sufficient harm or standing to withstand a motion to dismiss. Accordingly, class action plaintiffs may begin to see more success in privacy class actions as the U.S. government rolls out its new national framework for privacy. The scope and impact of this new framework is discussed in the next section.

Looking Forward To Trends In 2013 And Beyond

The year 2012 saw the publication of two notable reports by the U.S. government that were intended to steer policy discussions in the privacy arena, act as guidelines for public and private organizations, and spur legislative change. The first was a white paper issued by the White House in February 2012 entitled "Consumer Data Privacy in a Networked World: A Framework for Protecting and Promoting Innovation in the Global Digital Economy." The second was a report issued by the FTC in March 2012 entitled "Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers." Both are important because they may portend future developments in privacy regulation and ultimately affect the evolution of privacy litigation in the United States.

As the titles of these reports suggest, consumer privacy remains the primary focus of government attention. In large part, this focus is the result of public outrage over a continuing stream of financial data breaches and the lack of transparency in the ever-growing industry of web-based data collection. As consumers become more suspicious of how their personal data is used and increasingly reliant upon the electronic transfer of information, lawmakers have been pressured by both consumer advocacy groups and the public at large to respond to perceived weaknesses in the current privacy regime.

Many have criticized both the White House white paper and the FTC report as inadequate to address the many privacy threats posed by corporate use of personal information. To be sure, for now the government is largely relying upon voluntary cooperation by companies that benefit from the use of personal data. For instance, the White House explained that its proposed "Consumer Privacy Bill of Rights"—a lynchpin of its privacy framework, discussed more fully below—only "provides general principles that afford companies discretion" in implementation of privacy policies. This approach, the White House explained, allows more flexibility than a "single, rigid set of requirements." For its part, the FTC explained in its report that its recommendation "calls on companies to act now to implement best practices to protect consumers' private information," and "urges industry to accelerate the pace of self-regulation."

Although both the White House and the FTC recommend that Congress enact legislation supporting their respective proposals, privacy advocates find these recommendations deeply unsatisfying. In the absence of concrete penalties for violations of consumer trust, many fear that consumer privacy will not be a priority for companies that profit from the use of private information. An observation made in the FTC report itself illustrates the problem. Although the FTC commends companies for responding to an earlier call for a "Do Not Track" mechanism—a protocol that would allow consumers to opt out of online tracking technologies with relative ease—in recent months companies that rely upon such tracking technologies have balked at what they perceive to be unreasonable demands by privacy advocates. Therefore, in spite of the hopeful tone struck by the FTC, the future of cooperative efforts by stakeholders in this area is uncertain.

While privacy advocates may be dissatisfied by the absence of new legislation that specifically responds to the rapid changes in the privacy arena, history leads to a singular conclusion: where once there was nothing, in time there will be something. This is undoubtedly true with respect to new privacy regulation. As just one example, both Houses of Congress recently approved changes to the 1988 Video Privacy Protection Act, and the revisions need only the president's signature before becoming law. Although it is unclear whether these revisions will have any major effect upon the industry, recent action makes clear that new privacy legislation may be inevitable.

Both the White House white paper and the FTC report outline broad frameworks upon which future privacy legislation might be developed. Companies that wish to anticipate what measures they may need to take in response to a new regulatory regime should focus on two concepts: the Consumer Privacy Bill of Rights and "privacy by design."

In its February 2012 white paper, the White House set forth the proposed Consumer Privacy Bill of Rights in an effort to provide "a baseline of clear protections for consumers and greater certainty for companies." Heavily influenced by the Fair Information Practice Principles (FIPPs) at the heart of the Privacy Act of 1974 and recognized internationally (bringing the U.S. framework closer to the European Union's Data Protection Directive), the Bill of Rights places an emphasis on several core principles. For example, the Bill encourages "respect for context"—that is, that "[c]onsumers have a right to expect that companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data."

In its March 2012 report, the FTC emphasized the concept of privacy by design and encouraged companies to "build in privacy at every stage of product development." Consistent with the same FIPPs that influenced the White House's Privacy Bill of Rights, privacy by design proposes a process whereby companies develop new products and services with consumer privacy as a guiding principle rather than an afterthought. Although the concept is general in nature, the FTC has already emphasized the use of privacy by design by companies at the cutting edge of new technologies. For instance, in October 2012 the FTC issued a report outlining best practices for the use of facial recognition: "Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies." In the report, the FTC "recommends that companies using facial recognition technologies design their services with privacy in mind, that is, by implementing 'privacy by design,'" and sets forth suggestions on how the concept might be applied.

Although the White House's Consumer Privacy Bill of Rights and the FTC's concept of privacy by design may never be codified, the principles they embody will undoubtedly influence privacy regulations that will govern the development and use of emerging technologies. Companies would be well advised to carefully consider what might otherwise be disregarded as an unenforceable set of "best practices" that, as of now, have no bite. The teeth are sure to follow.

FEDERAL CYBER SECURITY LEGISLATIVE ACTION AND DEVELOPMENTS

David Ransom and Steve Ryan

Cyber Security Legislation In Congress

In 2012, the U.S. Congress failed to enact cyber security legislation—despite the urging of the Obama administration—largely because of the business community's concerns that such legislation would give the federal government authority to regulate private-sector "critical infrastructure" (e.g., communications networks, financial services companies, hospitals, chemical plants). The Obama administration is currently circulating a draft executive order on cyber security that would seek to accomplish many of the matters now included in cyber security legislation in Congress. Despite congressional inaction on this issue, Congress very likely would act without delay and in bipartisan fashion on cyber security legislation in the event of a major cyber attack.

In the cyber security debate, Congress is mostly focused on identifying and hardening "critical infrastructure," which is generally regarded as the electric grid, water plants, financial service company data centers, hospitals, communications networks and the like. Much of this critical infrastructure is owned and operated by private-sector companies. Federal and state governments currently have limited authority to affect or regulate the security of those entities.

The leading bill currently in Congress is S. 3414 (the Cybersecurity Act of 2012), which was introduced by Sens. Joe Lieberman (I-CT) and Susan Collins (R-ME), the chair and ranking member, respectively, of the Senate Homeland Security Committee. However, Senate Democrats failed to invoke cloture on S. 3414 on two occasions, and Senate Majority Leader Harry Reid (D-NV) said in November 2012 that the issue was dead for the remainder of the year.

The House has not considered legislation that is as expansive as S. 3414. In April 2012, the House passed H.R. 3523 (Cyber Intelligence Sharing and Protection Act) by a vote of 248–168. The bill was introduced by Rep. Mike Rogers (R-MI), the chairman of the House Permanent Select Committee on Intelligence. The entire thrust of this bill is to facilitate the sharing of threat information from the federal government to private-sector entities that may be the target of a cyber attack. The legislation also would prohibit the federal government from sharing specific types of "sensitive personal documents" (e.g., book sales records, tax return records, medical records) that identify an individual. H.R. 3523 has not been taken up in the Senate.

Despite congressional inaction on cyber security legislation, this issue will not fade away. In 2013 the debate will continue. And if the United States is the victim of a significant cyber attack, there is a very real likelihood that Congress will act in response without delay and perhaps provide the federal government with broader, more explicit authority to regulate critical infrastructure than is envisioned under S. 3414.

Administration Executive Order

The Obama administration is now circulating internally a draft executive order on cyber security that seeks to accomplish many of the same things as the cyber security legislation in Congress. That is, the executive order would create a Critical Infrastructure Partnership Advisory Council to coordinate improvements to the cyber security of critical infrastructure. The secretary of homeland security also would be directed to establish a voluntary program to support the adoption of standards and guidelines (the Cyber Security Framework) by the owners and operators of critical infrastructure. The draft executive order also contains explicit provisions (in Section 5) on privacy and civil liberties protections.

An executive order can only use existing legal authority to establish voluntary standards for private entities. Furthermore, it cannot alter existing liability laws in an attempt to facilitate information sharing, as S. 3414 would seek to do. The administration knows that an executive order is an insufficient response to the cyber threat, but administration officials believe such an order will provoke congressional action. Nevertheless, if the president signs this executive order, it will be regarded as antagonistic toward House and Senate Republicans, and could complicate negotiations on any possible legislative solution.

MOBILE APPLICATIONS

Heather Egan Sussman

Mobile application privacy was a hot topic in the United States in 2012, and two developments over the year seemed to shape the landscape in this area. First, in February 2012, the California Attorney General (AG) announced that the office had reached an agreement with six major players in the mobile app marketplace. The AG commenced negotiations with these marketplace companies after it was widely reported in the media that one particular social app was able to—and did—download entire address books from the mobile devices of its users and store that data on its servers. Separately from the negotiations, the social app company agreed to delete the data from its servers and update its privacy policy to be more transparent about its information practices. However, the AG's office still used this opportunity to work with the app marketplace providers to find ways to encourage more transparency of privacy practices. As a result, the AG convened discussions with the six major marketplace providers, and the parties ultimately agreed to issue a joint statement. Notably, the joint statement does not impose any legally binding obligations, but it does articulate five principles that are designed to promote compliance with privacy laws in the mobile app arena.

For example, the joint statement provides in its first principle that "where applicable law requires, an app that collects personal data must have a privacy policy that is clear and conspicuous." With this principle the AG's office was clearly seeking to emphasize that California's Online Privacy Protection Act—which requires, among other things, that an online company publish a privacy policy giving notice of its information practices—applies equally to the mobile app environment.

In addition, the principles of the joint statement provide that the six participating market companies will build into their application submission processes certain fields where the developers can include a link or the text of the privacy policy, and then the marketplace company will link back to that privacy policy from the store environment. Again, while these principles are voluntary, the overall goal of the AG's efforts to encourage these infrastructure changes is to increase transparency in privacy practices within the marketplace, with a long-term goal of increasing the frequency with which developers will publish notices and follow not only the law, but also their own published statements about their practices with respect to user information.

While publicizing this joint statement, the AG's office explained that it will still prosecute developers that fail to have privacy policies in violation of California law, and the fines for offending companies can range up to several thousand U.S. dollars per download of the offending app. Since issuing the joint statement, the AG has been sending letters to companies with apps perceived to be out of compliance, warning them to post a privacy policy or face enforcement action. The AG has since filed a complaint against one major airline that allegedly failed to post the policy as directed. For companies throughout the world that market apps to California users, the key takeaway from these developments is to make sure the app complies with California's Online Privacy Protection Act in order to avoid catching the attention of the California AG's office.

In addition to these state-based efforts in mobile app privacy, there have been ongoing discussions at the U.S. Department of Commerce through its National Telecommunications & Information Administration (NTIA). Earlier in 2012, the White House released its landmark privacy report entitled "Consumer Data Privacy in a Networked World", which outlines key privacy principles in a Consumer Privacy Bill of Rights. That report also called on the NTIA to work with industry and others to develop voluntary but enforceable codes of conduct that specify how the principles in the Consumer Privacy Bill of Rights will apply in specific business contexts.

The NTIA started its multi-stakeholder initiative by focusing on developing a code of conduct that applies the "transparency" principle outlined in the Consumer Privacy Bill of Rights and, even more narrowly, by focusing on the topic of "transparency" in mobile applications. Some faulted the NTIA for starting with just one principle and for narrowing it to just one platform, but the NTIA responded that it had to start somewhere, and the realm of mobile apps was ripe for this discussion.

The first meeting on mobile app transparency occurred at the Department of Commerce building in July 2012, and there were subsequent meetings each month through the end of 2012, as well as parallel stakeholder meetings outside of the NTIA, all seeking to move the process forward. Videos of past meetings, along with discussion drafts of a Transparency Code of Conduct that have been circulated, are available on the NTIA's website. Interested parties can participate in the live meetings from anywhere in the world via webcast.

UPDATES IN SOCIAL MEDIA

Sabrina Dunlap

With one billion users on Facebook, 500 million on Twitter and roughly four billion YouTube views per day, social media continued to present interesting privacy issues in 2012. At the state level, Maryland, Illinois and California passed laws in response to stories of employers requesting or requiring login credentials from applicants or employees to access their personal social media accounts. These so-called "password protection" laws prohibit employers from asking applicants and employees to submit their log-in credentials to social media sites, such as Facebook and Twitter. While Maryland and Illinois passed a straight ban on such requests, the California law contains an exception permitting employers to ask an employee to share personal social media content that the employer "reasonably believe[s] to be relevant" to investigations of employee misconduct or legal violations. Maryland's law went into effect in October 2012, and both the Illinois and California laws became effective January 1, 2013. As of December 31, 2012, password protection legislation was pending in 11 other states.

In addition to the password protection laws, there was some debate throughout 2012 at the National Labor Relations Board (NLRB) and beyond as to what constitutes protected "concerted activity" with respect to social media use. Section 7 of the National Labor Relations Act (NLRA) protects certain "concerted activities" of employees, including discussions related to workplace conditions. In today's workplace, activities such as blogging or posting messages on social networking websites can be considered concerted activity, and unless the activity falls within one of the exceptions to the NLRA's protections (e.g., confidentiality breaches, extreme disloyalty), the law can limit an employer's control over what employees may write and post through social media.

In early 2012, the NLRB's acting general counsel, Lafe Solomon, released a report describing social media cases reviewed by his office. The report addresses 14 cases related to employer social media policies, several of which involved employees who had been discharged after they posted comments on Facebook. The general counsel found that a number of the terminations were improper because employees had engaged in protected activity under Section 7 of the NLRA when they posted on Facebook about wages or working conditions. However, the general counsel upheld several terminations where the employees were not engaged in protected activity and had merely posted general complaints or individual gripes unrelated to working conditions or wages. In a final report issued at the end of May 2012, the general counsel offered views on a handful of additional cases and even offered the complete text of a social media policy the general counsel found to be within the bounds of the NLRA.

The NLRB also recently issued a decision in which it invalidated Costco's employment policies related to internet and social media use. In Costco Wholesale Corp., the NLRB held that Costco's social media policy was too broad and wrongly prohibited employees from posting anything electronically that could damage the company or damage any person's reputation. According to the NLRB, such policies could be read to prohibit negative statements about working conditions, which are protected activity under Section 7 of the NLRA.

Both the general counsel report and the recent NLRB decision emphasize two key points: 1) employer policies related to social media and internet use should not be so broad that they prohibit activity protected by law, such as the discussion of wages or working conditions; and 2) an employee's comments on social media sites will generally not be protected if they are simply individual gripes, rather than complaints related to working conditions or wages that affect a group of employees.

DEVELOPMENTS IN HEALTH CARE: OCR ISSUES DE-IDENTIFICATION GUIDANCE UNDER HIPAA PRIVACY RULE

Amy Gordon, Daniel Gottlieb, Jennifer Geetter and Amy Hooper Kearbey

On November 26, 2012, the Office for Civil Rights (OCR) released guidance regarding methods for de-identification of protected health information (PHI) in accordance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule.

The guidance largely restates prior interpretive guidance to, and health care industry understandings of, the Privacy Rule's de-identification standard. Since the guidance follows a lengthy process of public meetings and other opportunities for input from stakeholders, it appears that OCR has determined that the current de-identification standard strikes an appropriate balance between individuals' interest in the privacy of their personal information and the interests of the research community and other data users. For more information about OCR's proposed modifications to the Privacy Rule, see McDermott's White Paper "OCR Issues Proposed Modifications to HIPAA Privacy and Security Rules to Implement HITECH Act".

Background

The Privacy Rule applies to PHI, which is information (subject to certain limited exceptions) that (1) relates to the individual's past, present or future physical or mental health or condition; the provision of health care to the individual; or the past, present or future payment for the provision of health care to the individual; and (2) that identifies the individual, or for which there is a reasonable basis to believe can be used to identify the individual. PHI includes many common identifiers (e.g., name, address, birth date, Social Security number) when they can be associated with the health information listed above.

The Privacy Rule provides a pathway for HIPAA covered entities (including health plans, health care providers and health care clearinghouses) and other data users to create and then use and disclose de-identified health information outside the disclosure restrictions on PHI. De-identified information is health information that does not identify an individual, and with respect to which there is no reasonable basis to believe that the information can be used to identify an individual.

The Privacy Rule establishes two methods to de-identify PHI: (1) removing 18 specific identifiers (such as name, address and Social Security numbers), or (2) obtaining a professional statistical analysis and opinion that the risk of identification of an individual is very small. The guidance document provides interpretive guidance to the two methods in the form of frequently asked questions.

Guidance With Respect To The Removal Of 18 Specific Identifiers Method

The guidance includes the following clarifications of the Removal of 18 Specific Identifiers De-Identification Method:

May parts or derivatives of any of the 18 identifiers be disclosed consistent with the Removal of 18 Specific Identifiers Method?

No. For example, a data set that contained patient initials or the last four digits of a Social Security number would not meet the requirement of the Removal of 18 Specific Identifiers Method for de-identification.

What are examples of dates that are not permitted according to the Removal of 18 Specific Identifiers Method?

Elements of dates that are not permitted for disclosure include the day, month and any other information that is more specific than the year of an event. For instance, the date January 1, 2009, could not be reported at this level of detail. However, it could be reported in a de-identified data set as 2009.

Many records contain dates of service or other events that imply age. Ages that are explicitly stated or implied as over 89 years old must be recoded as 90 or above. For example, if the patient's year of birth is 1910 and the year of health care service is reported as 2010, then in the de-identified data set the year of birth should be reported as "on or before 1920." Otherwise, a recipient of the data set would learn that the age of the patient is approximately 100.
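As a concrete illustration of these two rules, the sketch below (ours, not OCR's) recodes dates to year-only values and top-codes implied ages of 90 and over; the record shape and field names are assumptions made for the example:

```typescript
// Illustrative sketch of the Safe Harbor date/age recoding described above.
// Dates are reduced to the year, and any implied age of 90 or over is
// collapsed into an "on or before" birth-year category.

interface PatientRecord {
  birthDate: Date;
  serviceDate: Date;
}

interface DeidentifiedRecord {
  birthYear: string;   // e.g., "1975" or "on or before 1920"
  serviceYear: number; // day and month removed
}

function deidentifyDates(rec: PatientRecord): DeidentifiedRecord {
  const serviceYear = rec.serviceDate.getFullYear();
  const birthYear = rec.birthDate.getFullYear();
  // Year-based age may overstate by one year, which errs on the side of
  // recoding more records, not fewer.
  const impliedAge = serviceYear - birthYear;
  return {
    birthYear:
      impliedAge >= 90 ? `on or before ${serviceYear - 90}` : String(birthYear),
    serviceYear,
  };
}

// The example from the guidance: birth year 1910, service year 2010
// yields { birthYear: "on or before 1920", serviceYear: 2010 }.
console.log(
  deidentifyDates({ birthDate: new Date(1910, 0, 1), serviceDate: new Date(2010, 5, 15) })
);
```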

Guidance With Respect To The Professional Statistical Analysis Method

The guidance includes the following clarifications with respect to the Professional Statistical Analysis Method:

What is an acceptable level of identification risk for an expert determination?

The guidance states that there is no explicit numerical level of identification risk that is deemed to universally meet the "very small" level indicated by the method. The analysis is more of a facts and circumstances analysis based on the ability of a recipient of information to identify an individual (i.e., subject of the information). This is notable because it preserves a degree of latitude for statistical experts engaged to de-identify information to place "very small risk" into context informed by any number of relevant factors, including the specific intended recipient. It also demonstrates that OCR recognizes that a "very small" risk of re-identification is not the same as no risk, and that covered entities are not out of compliance if re-identification occurs despite the statistical expert's expectation that it would not.

How long is an expert determination valid for a given data set?

There is no per se expiration date. The guidance does, however, state that experts recognize that technology, social conditions and the availability of information change over time. For example, the U.S. Department of Commerce's release of U.S. census data may affect the ongoing validity of a statistical opinion. Thus experts should assess the expected change in computational capability, as well as access to various data sources, and then determine an appropriate timeframe within which the health information will be considered reasonably protected from identification of an individual. Covered entities and others requesting statistical opinions should expect the expert to request that the statistical opinion only be valid for a certain length of time, and should factor in the cost of renewing the opinion when deciding whether to pursue the Professional Statistical Analysis Method over the Removal of 18 Specific Identifiers Method.

Next Steps

Covered entities and business associates with the right to de-identify PHI that they receive from their customers should review their current de-identification methods in light of the guidance and make any necessary changes to comply with the new guidance. As part of the review, data users should consider whether a previously issued statistical opinion needs to be refreshed in light of new publicly available data sources, such as census data.

PRIVACY AND PROGRESS IN WHOLE GENOME SEQUENCING

Jennifer Geetter

Advances in technology and improved research techniques are likely to drive down the cost of whole genome sequencing, which commentators predict will result in broad-based clinical and research adoption of the technology. In response to this trend, commentators have raised questions about how best to facilitate data sharing and maximize data utility while protecting individual confidentiality and endeavoring to distribute the risks and benefits of genomic data capture and analysis equitably. In light of these concerns, on October 11, 2012, the Presidential Commission for the Study of Bioethical Issues released a report entitled "Privacy and Progress in Whole Genome Sequencing," enumerating such concerns and offering preliminary recommendations to guide future policy making. Concerns about the privacy of genetic information are not new, but the increasing number of people who may undergo whole genome sequencing—and thus who will be affected by such privacy and equitable distribution concerns—and the increase in the volume of genetic information collected, used and disclosed have prompted this closer look by the Commission, both to assess whether current protections are sufficient for the genomic age and to advance a public dialogue about how to prepare for it. As these discussions, debates and deliberations continue, whole-genome-related stakeholders should maintain compliance with current laws that apply to genetic information; implement policies and processes that view existing compliance and operational systems through "genomic glasses"; and continue to remain informed of, and involved with, further developments in this area.

Footnotes

1 Toward that same end, the FTC issued Orders to File Special Report to nine data brokerage companies regarding how these companies collect and use personal data about consumers. See http://www.ftc.gov/opa/2012/12/databrokers.shtm.

2 See Privacy Rights Clearinghouse, "Chronology of Data Breaches," available at http://www.privacyrights.org/data-breach.