ARTICLE
10 November 2023

Snap Receives Preliminary Enforcement Notice Related To Privacy Risks Posed By AI Chatbot

Taft Stettinius & Hollister

On October 6, 2023, Snap Inc. and Snap Group Ltd. (collectively, "Snap") received a preliminary enforcement notice from the U.K. Information Commissioner's Office (ICO) due to a potential failure to properly assess the privacy risks posed by its generative AI chatbot, My AI.

What is My AI?

The My AI chatbot feature is powered by OpenAI's popular GPT technology, which allows the feature to generate humanlike text based on learned behaviors and past conversations. Essentially, My AI is intended to offer recommendations, answer questions, and even converse with Snapchat users. Snapchat Support describes the chatbot's capabilities as follows: "In a chat conversation, My AI can answer a burning trivia question, offer advice on the perfect gift for your BFF's birthday, help plan a hiking trip for a long weekend, or suggest what to make for dinner."1
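
Snap has not published My AI's implementation, but purely as illustration, a chat feature built on a GPT-style API typically resends the running conversation to the model on each turn, which is how "past conversations" shape the next reply. A minimal sketch in Python, assuming the OpenAI client library (the model name and system prompt here are our own placeholders, not Snap's):

```python
# Minimal sketch of a GPT-backed chat turn. Illustrative only: this is
# not Snap's implementation; the system prompt and model name are
# placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running conversation is resent each turn, which is how prior
# messages inform the model's next reply.
history = [
    {"role": "system", "content": "You are a friendly in-app assistant."},
    {"role": "user", "content": "What should I make for dinner tonight?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=history,
)

reply = response.choices[0].message.content
history.append({"role": "assistant", "content": reply})
print(reply)
```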

Snap launched the My AI feature for U.K. Snapchat+ subscribers in February 2023 and moved forward with offering the feature to its general U.K. Snapchat user base in April. Notably, the release of My AI marked the first time that generative AI had been embedded in a major messaging platform in the U.K.2 As of May 2023, Snapchat had 21 million monthly active users in the U.K. alone. The My AI functionality was likewise made available to Snapchat users in the U.S. in April 2023. As we often see in the information privacy space, the U.K. is moving quickly to set precedent with respect to a regulatory stance on this particular topic. However, it is quite likely that U.S. regulators will not be far behind.

Enforcement Notice

In June 2023, the ICO called on businesses to address the privacy risks that generative AI can pose before rushing to adopt the technology, and promised tougher reviews of whether organizations are complying with data protection laws. In particular, the ICO made clear that it will take action against organizations whose use or development of generative AI presents a risk of harm to people through poor use of their data, stating, "there can be no excuse for ignoring risks to people's rights and freedoms before rollout."

True to its word, the ICO issued the preliminary enforcement notice to Snap following an investigation into Snap's development and launch of the My AI chatbot. The investigation provisionally found that the risk assessment Snap conducted before launching My AI did not adequately assess the data protection risks posed by the generative AI technology, particularly to children. That assessment is especially important in this context, which involves the use of innovative technology and the processing of the personal data of children aged 13 to 17.

Notably, a preliminary enforcement notice affords companies that control consumers' data the chance to make representations to the ICO before a final notice is issued by the regulator. The issuance of this preliminary enforcement notice therefore does not mean that Snap has in fact breached the U.K.'s data protection laws, or that a formal enforcement notice will be issued.

However, the preliminary notice sets out the steps that the ICO may require, subject to Snap's representations in response to the notice. If a final enforcement notice is adopted, Snap may be required to cease processing data in connection with My AI and pull the My AI product from the Snapchat platform (for U.K. users) until Snap completes an adequate risk assessment pursuant to the ICO's guidance.

Adequate Risk Assessment

The ICO's preliminary enforcement notice centers on Snap's obligation under the U.K. General Data Protection Regulation to carry out a data protection impact assessment (DPIA) in situations where the processing of a user's data is "likely to result in a high risk to their rights and freedoms." A DPIA is designed to help organizations systematically analyze, identify, and minimize the data protection risks of a project or plan, and it is a key part of the accountability obligations under the U.K.'s data protection regime. According to the ICO, the assessment is especially vital where innovative technology is deployed and data is gathered from users under 18.

In April 2023, the ICO issued guidance to developers and users of generative AI that included questions they should be considering when rolling out these offerings. The guidance also states that the ICO expects businesses to show the regulator how they have addressed the risks that arise in their particular context, even if the underlying technology is the same. The ICO advises companies to spend time at the outset understanding how AI is using personal information, to mitigate any risks discovered, and to roll out the intended AI approach with confidence that it will not upset customers or regulators. Notably, the ICO states that "an AI-backed chat function helping customers at a cinema raises different questions compared to one for a sexual health clinic, for instance."

Key Takeaways – Data Protection Impact Assessments for AI

The market for AI is expected to show strong growth in the coming decade: from a current value of nearly 100 billion U.S. dollars, it is projected to grow roughly twentyfold by 2030, to nearly two trillion U.S. dollars. The AI market spans a vast number of industries. Everything from supply chains, marketing, product development, research, and analytics to the provision of healthcare will continue to integrate AI into operations and business structures.

While we certainly acknowledge that not every use of AI by these organizations will involve data processing likely to result in a high risk to individual rights and freedoms, any organization seeking to develop or use AI technology, as well as anyone accountable for the governance and data protection risk management of an AI system, should be prepared to comply with data protection laws and to demonstrate compliance for any AI system that processes personal data.

However, in the vast majority of cases, the use of AI will involve a type of processing likely to result in a high risk to individuals' rights and freedoms, and it will therefore trigger the legal requirement to undertake a DPIA. Organizations will need to make this assessment on a case-by-case basis. Even where an organization determines that a particular use of AI does not involve high-risk processing, it will still need to document how that determination was reached, as illustrated in the sketch below.
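
Purely by way of illustration, the following sketch shows one way such a screening determination might be recorded so the rationale can be produced later. The field names and the screening logic are our own and are not criteria prescribed by the ICO or the U.K. GDPR:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DpiaScreeningRecord:
    """Records whether a proposed AI use requires a full DPIA, and why.

    Illustrative only: field names and logic are our own, not criteria
    prescribed by the ICO or the U.K. GDPR.
    """
    system_name: str
    assessed_on: date
    processes_personal_data: bool
    uses_innovative_technology: bool
    involves_children_under_18: bool
    rationale: str  # how the determination was reached
    dpia_required: bool = field(init=False)

    def __post_init__(self):
        # A conservative screen: personal data combined with innovative
        # technology, or any processing of children's data, is treated
        # as likely high risk.
        self.dpia_required = self.processes_personal_data and (
            self.uses_innovative_technology or self.involves_children_under_18
        )

# Even a "no DPIA required" outcome is kept on file with its rationale.
record = DpiaScreeningRecord(
    system_name="In-app generative AI chat assistant",
    assessed_on=date(2023, 10, 6),
    processes_personal_data=True,
    uses_innovative_technology=True,
    involves_children_under_18=True,
    rationale="Generative AI chat processing messages from users aged 13+.",
)
print(record.dpia_required)  # True
```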

Additionally, in order to perform an adequate DPIA, an organization has to understand all of its processing activities, including data flows, the stages of AI processing, any automated decisions that may produce effects on individuals, and the scope and context of the data processing involved in the use of a given AI technology (i.e., what data will be processed, the number of data subjects involved, the source of the data, and the extent to which individuals are likely to expect the processing).
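
Those same elements can be captured as a simple structured record. Again, this is only a sketch under our own naming; the ICO does not prescribe a particular format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataFlow:
    source: str               # where the data originates (e.g., user chat input)
    destination: str          # where it is sent (e.g., a model provider's API)
    data_categories: List[str]

@dataclass
class ProcessingDescription:
    """The 'describe the processing' portion of a DPIA.

    Illustrative only: field names are our own, not prescribed by the ICO.
    """
    data_flows: List[DataFlow]
    ai_processing_stages: List[str]   # e.g., collection, training, inference
    automated_decisions: List[str]    # decisions producing effects on individuals
    estimated_data_subjects: int
    data_sources: List[str]
    processing_expected_by_individuals: bool  # would users reasonably expect this?

# A hypothetical chat feature, not a description of any real product.
description = ProcessingDescription(
    data_flows=[
        DataFlow("user chat input", "third-party model API",
                 ["message text", "device identifiers"]),
    ],
    ai_processing_stages=["collection", "inference", "retention"],
    automated_decisions=["personalized recommendations shown to the user"],
    estimated_data_subjects=1_000_000,
    data_sources=["app users aged 13 and over"],
    processing_expected_by_individuals=False,
)
```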

It can be difficult to describe the processing activity of AI systems, particularly when they involve complex models and data sources. Such a description is nonetheless necessary as part of a DPIA. In some cases, although it is not a legal requirement, it may be good practice to maintain two versions of an assessment, with:

  • the first presenting a thorough technical description for specialist audiences; and
  • the second containing a more high-level description of the processing, with an explanation of how the personal data inputs relate to the outputs affecting individuals (this may also support an organization in fulfilling its obligation to explain AI decisions to individuals).

An acceptable DPIA can be very complex and difficult for many organizations to complete without the assistance of legal counsel and other specialists with experience in this area of the law. As such, we encourage organizations to seek qualified legal counsel whenever making determinations about their legal or compliance obligations. Taft's Privacy and Data Security Practice (PDS) stands ready to assist you with a risk-based, common-sense approach to your data governance needs.

Taft's Privacy and Data Security attorneys will continue to monitor this and other developments relating to privacy and the use of AI. For more information on data privacy and security regulations and other data privacy questions, please visit Taft's Privacy & Data Security Insights blog and the Taft Privacy and Data Security mobile application.

Footnotes

1. See the Snapchat Support article.

2. See the ICO enforcement news release.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
