In a world which is becoming increasingly concerned with the privacy and use of personal data, the rise of mobile applications ("App(s)") which utilise people's biometrics is somewhat surprising. Take FaceApp, the latest App to rise, once again, to viral popularity, which allows users to apply "artificial intelligence algorithms to transform your photos" so that we may take a disconcerting glimpse into the possible faces of our future selves. Douglas Adams may have been right about our inability to telephone ourselves in the past, but he didn't anticipate our ability to use our phones to come face to face with a (possible) version of ourselves in the future.

FaceApp, which has found itself under the public spotlight numerous times since its launch in 2017 (not least for its 'ethnicity filters', which were quickly removed), is not unique in its use of personal data. This article aims to address several issues which appear to be at the forefront of the public's mind:

1. How exactly is FaceApp transforming me into a Methuselahn version of myself?

The App does not appear to use a filter of the kind many of us will be familiar with from other Apps such as Snapchat. Instead, it uses an artificial intelligence photo-altering algorithm. Although the underlying technology has not itself been disclosed, the general understanding is that the algorithm keys in on the general traits we use when judging the relative age, femininity or masculinity of faces.
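For readers curious about what such an algorithm might look like in principle, below is a minimal, purely illustrative sketch in Python (using PyTorch) of a conditional image-to-image network that takes a photo together with a target-age setting. FaceApp has not published its model, so the ToyAgeingGenerator class, its layer sizes and the age-conditioning scheme are assumptions for illustration only, and the network is untrained; it shows the general shape of the approach rather than FaceApp's actual technology.

```python
# Illustrative sketch only: FaceApp has not disclosed its model, so this is a generic
# conditional image-to-image network, not FaceApp's actual algorithm.
import torch
import torch.nn as nn


class ToyAgeingGenerator(nn.Module):
    """Takes an RGB face image plus a target-age condition and returns a modified image."""

    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 extra channel broadcasting the age condition across the image.
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # output in [-1, 1], matching the normalised input range
        )

    def forward(self, image: torch.Tensor, target_age: float) -> torch.Tensor:
        # Broadcast the scalar age condition (e.g. 0.0 = young, 1.0 = old) as an image plane.
        b, _, h, w = image.shape
        age_plane = torch.full((b, 1, h, w), target_age, dtype=image.dtype)
        return self.net(torch.cat([image, age_plane], dim=1))


# Usage: a random 128x128 "face" stands in for a real photo.
model = ToyAgeingGenerator()
fake_photo = torch.rand(1, 3, 128, 128) * 2 - 1  # normalised to [-1, 1]
aged = model(fake_photo, target_age=1.0)
print(aged.shape)  # torch.Size([1, 3, 128, 128])
```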

2. Cloud storage, is this really a problem?

Public backlash against FaceApp appears to have arisen from two issues:

  1. Claims that the App immediately uploads users' entire camera rolls without consent; and
  2. Uncertainty as to whether users' photos are being uploaded to a remote server.

FaceApp has vehemently denied both claims, additionally stating that it deletes "most" images from its servers within 48 hours of uploading. However, it is worth understanding that, in order for such Apps to work, the App will need to upload the photo you want to transform. Apple's rules for managing photos do appear to be adhered to here, with the App appearing to upload only the photo you select rather than your entire camera roll. Although FaceApp appears to be headquartered at the Skolkovo Foundation, Saint-Petersburg, it has confirmed that it does not transfer any user data to Russia (or to any other third parties).

Like most other Apps, FaceApp uses Amazon Web Services ("AWS"). This is understandable when considering the processing power required to transform your face. Given that users have varying models of device, some of which will have up-to-date machine learning capabilities embedded in their hardware and others which will not, it is much more manageable to avoid incompatibility issues by handling the processing in the cloud. FaceApp has confirmed that it utilises AWS data servers based in the U.S., and although the App does use third-party code and as such will reach out to those parties' servers, these have been confirmed to be based in the U.S. and Australia.
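To illustrate the upload-and-process pattern described above, here is a minimal sketch of how an App might send a single selected photo to a remote server, where the heavy lifting is done, and receive the edited image back. The API_URL endpoint, the transform_photo function and the request/response format are invented for illustration and are not FaceApp's actual API.

```python
# Hypothetical sketch of the client/server pattern: the endpoint URL and parameters
# are placeholders, not FaceApp's real service.
import requests

API_URL = "https://api.example-photo-editor.com/v1/transform"  # placeholder endpoint


def transform_photo(path: str, effect: str = "old") -> bytes:
    """Upload a single selected photo to a remote server and return the edited image bytes."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            files={"photo": f},       # only the photo the user chose is sent
            data={"effect": effect},  # the requested transformation
            timeout=30,
        )
    response.raise_for_status()
    return response.content          # the server runs the model and returns the result


if __name__ == "__main__":
    edited = transform_photo("selfie.jpg")
    with open("selfie_aged.jpg", "wb") as out:
        out.write(edited)
```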

3. Terms of use, what exactly am I signing up to?

Below is an extract from Section 5 of FaceApp's Terms of Use detailing what is being done with any User Content (e.g. your photos) together with a brief overview of what some of the wording means.

"You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you."

Although FaceApp specifically acknowledges that it does not own any User Content, it goes on to provide for a licence which is both perpetual and irrevocable. This may set alarm bells ringing for EU users: how can people exercise their right to be forgotten under the GDPR where the licence to use their User Content is irrevocable? The clause goes on to allow FaceApp to do a wide range of things with the User Content (including sub-licensing it on the same terms), together with the user's "name, username or likeness", in "all media formats and channels now known or later developed". It should be noted, however, that this is very similar to the terms of use of other platforms, such as Facebook and Snapchat. That said, there is uncertainty as to what exactly "all media formats and channels now known or later developed" means, and it could be a legitimate cause for concern when considering the general breadth of the clause.

"[...] You grant FaceApp consent to use the User Content, regardless of whether it includes an individual's name, likeness, voice or persona, sufficient to indicate the individual's identity. By using the Services, you agree that the User Content may be used for commercial purposes."

Where the definition of "use" is somewhat vague, issues such as surveillance, identity theft and unspecified training of AI technologies are some of the more severe consequences which come to mind when users grant a perpetual and irrevocable worldwide licence to use their likeness. Users are allowing access to arguably their most sensitive biometric data, and they should be cautious when agreeing to grant an organisation the ability to use their image, likeness, voice or persona, sufficient to indicate their identity. Admittedly, other Apps, such as Snapchat, include a very similar licence to use any Public Content (e.g. "Story submissions that are set to be viewable by Everyone as well as content you submit to crowd-sourced Services, including Our Story") on the premise that such content is "inherently public and chronicles matters of public interest"; however, Snapchat's terms specifically carve out all other forms of user content created on the platform, to which softer licence terms apply (e.g. the omission of a perpetual and irrevocable licence). The noticeable difference here is that FaceApp's terms do not distinguish between varying forms of User Content, and therefore all User Content generated will be licensed to FaceApp under Section 5 of its Terms of Use. Interestingly, the inclusion of "voice" and "persona" is not something which could necessarily be captured through the current service provided by FaceApp, suggesting that FaceApp intends to extend its services at some stage to allow users to upload and edit videos.

In a much broader context, where technology has developed to allow for Deepfake videos and AI-generated facial images, the usage of Apps such as these raises a different cause for concern. Considering the viral use of these kinds of Apps and the wide range of users who upload their photos and videos, the question arises as to whether allowing organisations to use that data under very broad terms could result in your work colleagues suddenly appearing to endorse a particular brand or product (or even a political affiliation). This may be a bit of an exaggeration; however, the thought is still worth considering.

"[...] You acknowledge that some of the Services are supported by advertising revenue and may display advertisements and promotions, and you hereby agree that FaceApp may place such advertising and promotions on the Services or on, about, or in conjunction with your User Content. The manner, mode and extent of such advertising and promotions are subject to change without specific notice to you."

This is also not too dissimilar to what is allowed by other, similar kinds of Apps. This form of targeted marketing is a product of today, in which organisations hoping to sell goods and services arguably influence people most effectively through their interactions with the internet.

The following is an extract from Section 12, which sets out FaceApp's limitation of liability under the Terms of Use.

"FaceApp and the other FaceApp Parties will not be liable to you under any theory of liability—whether based in contract, tort, negligence, strict liability, warranty, or otherwise—for any indirect, consequential, exemplary, incidental, punitive or special damages or lost profits, even if FaceApp or the other FaceApp Parties have been advised of the possibility of such damages."

Under English law, this type of clause attempts to limit FaceApp's liability for reasonably foreseeable indirect/consequential losses, as per the second limb of the test established in the common law case of Hadley v Baxendale. That limb allows an injured party to claim damages which were reasonably in the contemplation of the parties at the date of contracting. Considering that the Terms of Use are subject to the laws of California, it is important to note that this approach has generally been echoed by the US courts. So what would a loss of this nature look like? Speculatively speaking, a user could suffer some form of reputational harm resulting in consequential loss. However, given that liability for defamation has not been excluded here, a user could still have recourse in any case. It is also worth pointing out that many other Apps include this same type of language in their own terms of use, and its inclusion is therefore not particularly unusual.

"The total liability of FaceApp and the other FaceApp Parties, for any claim arising out of or relating to these Terms or our Services, regardless of the form of the action, is limited to the amount paid, if any, by you to access or use our Services"

As most users will have downloaded the App for free, this would mean that FaceApp's total liability under the Terms of Use is zero. Under English law, such an attempt to limit liability could be considered unreasonable where the cap is wholly disproportionate to the breach concerned (e.g. a data breach or a claim for defamation).

4. Privacy Policy, what does this mean?

FaceApp's Privacy Policy sets out how they collect, use, share and protect User Content provided to them by users. The Policy starts out by clarifying the types of data which it collects - these being: User Data; analytics information; Cookies and similar technologies; log file information; device identifiers; and metadata. If this list of data is worrying, it should be pointed out that very much the same forms of data are collected by other Apps such as Instagram and Snapchat. Similarly, the "how we use your information" section of the Policy is also very much the same.

FaceApp confirms it "will not rent or sell your information to third parties outside FaceApp (or the group of companies of which FaceApp is a part) without your consent", which again is not uncommon. However, there is nothing to indicate what the FaceApp group of companies comprises. Effective privacy policies will generally specify who falls within any such "group of companies" in order to further the transparency of their data use.

Section 4 of the Privacy Policy is where things start to get more interesting. Below is an extract:

"FaceApp, its Affiliates, or Service Providers may transfer information that we collect about you, including personal information across borders and from your country or jurisdiction to other countries or jurisdictions around the world. If you are located in the European Union or other regions with laws governing data collection and use that may differ from U.S. law, please note that we may transfer information, including personal information, to a country and jurisdiction that does not have the same data protection laws as your jurisdiction.

As I will address further below, this clause would seem to contradict AWS's approach to privacy. However, the deliberate inclusion of such wording does raise red flags, as it could indicate an intention to transfer User Data to countries without what would be considered "adequate" levels of data protection.

5. But what about the GDPR?

Considering FaceApp's statement that it uses AWS, it is important first to look at the GDPR compliance position and terms of use for any business looking to use such services. Amazon itself has included its AWS GDPR Data Processing Addendum within its Service Terms for AWS. However, this does not mean that FaceApp is unable to transfer and/or process User Content in a manner which contravenes the principles of the GDPR.

Alongside its confirmation that it does not transfer any User Content to Russia, nor sell or share any User Content with third parties, FaceApp has also stated that it accepts requests from users to remove all of their data from its servers. Although it states that its support team is currently overloaded, it stresses that such requests are a priority. FaceApp recommends that users send these requests from the App using "Settings->Support->Report a bug" with the word "privacy" in the subject line. It adds that it is working on a better user interface for this, all of which suggests it is conscious of the need to improve its internal data protection procedures.

Article 3(2) of the GDPR states that it applies to organisations that are not established in the EU if either of two conditions is met: the organisation offers goods or services to people in the EU, or it monitors their behaviour within the EU. This would certainly cover the activities conducted by FaceApp, and it could therefore be subject to fines should a breach of the GDPR principles be proven to have occurred.

Should FaceApp not have a physical presence within the EU, EU regulators would still have the ability, by virtue of international law, to issue fines. Considering the relationship between U.S. and EU data protection authorities (and assuming FaceApp has a physical presence within the U.S.), the ICO would be able to pursue FaceApp in the U.S. Failing that, the EU authorities would need to work with the Russian regulator, Roskomnadzor, to bring action against FaceApp in Saint Petersburg.

In summary, the evidence here doesn't suggest anything underhanded is occurring. Although it could be argued that there are serious causes for concern, the reality may be that this is very much the same scenario that users agree to when using the vast majority of other Apps. Facebook, for example, applies facial recognition technology to photos which are uploaded to its servers. Facebook has also previously pushed a VPN which allowed it to track users' activities; after Apple banned that App it took alternative measures, and it has since been issued a $5 billion fine by the FTC over its privacy practices. The widespread concern seems, in large part, to be borne out of negative assumptions surrounding FaceApp's geographical headquarters. Although this can sometimes be a factor worth considering, generally speaking, users should be more concerned with the actual terms to which they are agreeing when using these Apps. Most social media Apps will have very similar terms and conditions, irrespective of their country of origin, so to take particular issue with one solely because of a political or geographical prejudice seems unfair. However, there are certainly reasons why users should be concerned about the licensing terms and the statements within the privacy policy, as outlined in this article. Broader issues surrounding the impact Apps such as these have on users' perception of themselves should also be borne in mind, although this forms part of a much larger picture in relation to the impact social media has on mental health.

Is this part of an inevitable future in which technology plays a larger part in our everyday lives, or more a "hilarious" descent into an Orwellian future? In either case, people should always be conscious of the terms to which they agree when using Apps such as these, and of the privacy implications involved in the use of their personal data. The passage of time, in this context, will certainly tell the tale.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.