ChatGPT Reported For Using Fabricated Personal Data

Rouse

Contributor

Rouse is an IP services business focused on emerging markets. We operate as a closely integrated network to provide the full range of intellectual property services, from patent and trade mark protection and management to commercialisation, global enforcement and anti-counterfeiting.

Takeaways from NOYB's report: "ChatGPT provides false information about people, and OpenAI can't correct it"

In a nutshell

In April 2024, the data protection organisation NOYB (short for "None of Your Business") reported OpenAI, the company behind ChatGPT, for failing to guarantee the accuracy of personal data and to correct inaccurate information, as required by EU data protection regulation. The broader concern highlighted by this complaint is whether AI systems such as ChatGPT actually comply with EU data protection regulations when processing personal data.

The background

It is not unusual for AI systems to generate incorrect information in response to user requests. But when a request concerns an individual, it becomes legally serious if the AI fabricates false personal data.

The complaint stems from a request filed with OpenAI by a person in the public eye whose date of birth ChatGPT repeatedly generated incorrectly. When the individual asked OpenAI to correct their personal data, the request was refused. According to OpenAI, it was technically impossible to amend or block the AI's response without completely blocking it from answering any questions about this person.

According to NOYB, the company thereby failed to comply with EU data protection regulations, as OpenAI did not appropriately address the person's request to access, correct or delete their personal data.

The takeaways

  • Before developing and implementing an AI system within an organisation, one must consider, during the planning process and model design, how the rights of data subjects can be safeguarded.
  • Organisations should carefully assess which data is truly needed to train the model and should not process more data than necessary. It all comes down to having control of the data that is being stored and used, and how to comply with the rights of the data subjects.
  • Organisations should be aware that complaints of this type are likely to become more common as the use of AI becomes more regulated and the interaction between AI and EU data protection regulations becomes clearer. As this example shows, entities should not attribute deficiencies to technical obstacles: technical difficulty will not be an acceptable excuse for failing to meet the requirements of EU data protection regulations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
