On August 15, 2017, Uber Technologies, Inc. ("Uber") reached an agreement with the Federal Trade Commission ("FTC") to settle allegations that the company deceived consumers by misrepresenting its security and employee access practices for personal consumer information, including such information stored in the cloud. The settlement reflects the FTC's first foray into data security practices in the "gig" or "sharing" economy and an expansion of its data security enforcement in the cloud space, as well as a potential shift in how the FTC may expect companies to secure data from internal users. It may also reflect refinements in the FTC's conception of its own authority in the data security context.

The FTC's Settlement with Uber

The complaint alleges that ride-sharing start-up Uber deceived consumers regarding (1) insider access to consumer information; and (2) the security of consumer information stored in its databases. The first allegation stems from a series of news reports published in 2014 about Uber employees' use of a tool called "God View" to track users' rides and view their information. According to the complaint, in response to public outrage, Uber released a statement promising that insider access to consumer data would be "closely monitored and audited . . . on an ongoing basis." The FTC alleges that Uber abandoned the new monitoring system less than a year after this statement was made and that, while the system was in place, it "was not designed or staffed to effectively handle ongoing review of access to data by [Uber's] thousands of employees and contingent workers." The FTC's second allegation arises out of a 2014 data security breach in which a hacker allegedly gained access to the personal information of more than 100,000 drivers that Uber had stored in a cloud datastore provided by Amazon Web Services. The FTC claims that the breach resulted from Uber's failure to implement reasonable security measures with respect to this personal information, and that these failures contravened Uber's public commitment to employ "commercially reasonable" data security. According to the FTC, these two categories of allegedly false or misleading statements were "deceptive" acts in violation of Section 5(a) of the FTC Act.

The proposed consent order, if accepted by the Commission after a public comment period, would prohibit Uber from misrepresenting (1) the extent to which it monitors or audits internal access to consumers' personal information; or (2) the extent to which it protects the privacy, confidentiality, security, or integrity of consumers' personal information. It would also require Uber to implement a "Mandated Privacy Program," obtain initial and biennial assessments of that program, submit a compliance report, create and retain specified records, and undergo compliance monitoring. The requirements of the order would apply for 20 years.

Analysis

The Uber settlement is the FTC's first foray into data security practices in the "gig" or "sharing" economy – the model epitomized by Uber's services. Companies should take note of the FTC's position, which has never been tested in court, that both Uber's riders and its drivers are "consumers" that the FTC is authorized to protect under Section 5 of the FTC Act, even though Uber takes the position that drivers are independent contractors. The FTC's position raises the question of whether and to what extent the agency will contend that other sharing economy participants – such as handymen, home renters and everyone in between – are also "consumers" such that allegedly deceptive or unfair conduct with respect to those groups could prompt FTC enforcement.

The settlement also represents an expansion of the agency's focus on protecting consumer data stored in the cloud. The FTC's previous enforcement activity in the cloud space focused on alleged design flaws in a cloud product that the company itself offered directly to consumers, not, as is the case with the Uber settlement, on a company's alleged failure to employ practices to secure information that it stored with a third-party cloud service provider. See In the Matter of ASUSTeK Computer, Inc., Complaint, FTC Dkt. No. C-4587 (July 18, 2016) (company's cloud storage service, offered in connection with sale of internet routers, was allegedly insecure).

Companies should also take note of allegations in the Uber complaint that may reflect tightened, more granular expectations by the FTC regarding how organizations must secure data against internal access in order to meet a promise of "reasonable" data security. To name two examples, the FTC faulted Uber for allegedly permitting its programs and engineers to use common access keys that provided full administrative privileges over its cloud-based rider and driver datastores, and for "fail[ing] to require multi-factor authentication" to access those same datastores. Although the FTC's summer 2015 guidance stated that companies need merely "consider" whether to implement these measures, see FTC, Start with Security, at 3, 5, the complaint faults Uber for not using them in 2014 and earlier. The complaint also alleges, without further explanation, that these and other measures could have been deployed "through relatively low-cost measures."
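By way of illustration only, the sketch below shows how a company storing data with a cloud provider such as Amazon Web Services might audit for the two practices the FTC highlighted: broadly shared full-administrator credentials and the absence of multi-factor authentication. It is not drawn from the complaint or from any Uber system; the use of Python with the boto3 library, the check against the AWS-managed "AdministratorAccess" policy, and the structure of the script are all assumptions made for the example.

    # Illustrative sketch (assumptions noted above): scan an AWS account for IAM users
    # who hold the AWS-managed full-administrator policy or who have no multi-factor
    # authentication device enrolled. Requires the boto3 library and credentials with
    # permission to read IAM configuration.
    import boto3

    ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

    iam = boto3.client("iam")

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]

            # Flag users directly attached to the broad administrator policy.
            attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
            if any(p["PolicyArn"] == ADMIN_POLICY_ARN for p in attached):
                print(f"{name}: holds full administrative privileges; consider narrowing")

            # Flag users with no MFA device registered.
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                print(f"{name}: no multi-factor authentication device enrolled")

A check of this kind is, of course, only one small component of the ongoing monitoring and auditing of internal access that the FTC's complaint and proposed order contemplate.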

Notably absent from the FTC's complaint is an allegation that Uber's data security practices constituted "unfair" acts or practices in violation of Section 5 of the FTC Act. The FTC has frequently included both unfairness- and deception-based claims in its data security enforcement actions. Here, although the FTC claims that Uber "failed to provide reasonable security" and failed to monitor and audit employee access, the alleged violations were deception-based in that these alleged practices did not match Uber's public statements. The agency did not allege that the failure to employ these measures was unlawful in and of itself as an "unfair" act. Unlike a deception claim, an unfairness claim would require the FTC to establish, among other things, that the alleged practice "causes or is likely to cause substantial injury to consumers." 15 U.S.C. § 45(n). The FTC is currently reevaluating the types of injury that it believes constitute "substantial injury" for this purpose, amid concerns expressed by many in the business community and in Congress that the FTC has stretched its enforcement activity under the unfairness prong too far. This and other related issues surrounding the FTC's unfairness authority, such as how probable a substantial injury must be in order to be "likely," are front and center in data security litigation currently pending before the Eleventh Circuit. See LabMD, Inc. v. FTC, No. 16-16270 (11th Cir.). (Ropes & Gray represents LabMD in the matter.) The possibility that the agency intends to rely more on its deception authority and less on its unfairness authority going forward underscores that companies should carefully craft their public statements regarding data security and ensure that their actual practices match those statements.

Finally, the consent order is notable for including significant affirmative relief in the form of a "Mandated Privacy Program" and periodic assessments of that program, even though (as noted above) the FTC did not allege that Uber's privacy or security practices were in and of themselves unlawful. Section 5 of the FTC Act permits the FTC only to enter an order requiring the alleged law violator "to cease and desist from the violation of the law." The FTC is permitted to include ancillary affirmative relief only when some affirmative action must necessarily be taken in order for the company to cease and desist from the allegedly unlawful practice (e.g., patent licensing to remedy monopolistic behavior). See, e.g., Congoleum Indus., Inc. v. C.P.S.C., 602 F.2d 220, 225-26 (9th Cir. 1979) (discussing FTC authority). As former FTC Commissioner Orson Swindle explained when dissenting from affirmative relief in an early deception-based FTC data security action, a legally proper order to "cease and desist" from misrepresenting privacy or data security measures would merely require the company to cease the alleged misrepresentations – not to affirmatively change its privacy or data security practices – because refraining from the misrepresentations would terminate the allegedly unlawful conduct. See Statement of Comm'r Orson Swindle, In re Int'l Outsourcing Grp., No. 992-3245 n.1 (July 12, 2000), http://www.ftc.gov/os/2000/07/iogswin.htm. Despite this, the consent decrees entered into by the FTC in the data security context typically require the company to take specified affirmative steps to change its security practices – even when the action is based solely on alleged deception.

The burdensome nature of the remedies sought by the FTC underscores the value of proactively assessing and mitigating the risk of FTC enforcement before any incident occurs, ideally with the assistance of counsel familiar with the FTC's expectations for companies' data security practices, in order to help stave off a regulatory investigation before it even begins. And, in the face of an FTC investigation, companies should engage counsel familiar with the limits on the FTC's authority so that they can cooperate with the investigation in a manner consistent with their legal rights.
