What is ChatGPT?
ChatGPT is the latest language model created by OpenAI to produce natural human language and use it in human-like conversation. Anyone can now have a digital conversation with ChatGPT, which can also correct its own mistakes and even remember earlier exchanges within the current conversation. The program can also provide its user with information about the current world situation as well as facts about the past. ChatGPT belongs to the AI (Artificial Intelligence) category, which according to the OECD is a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments"1.
However, given the program's advanced technological abilities, should we be afraid of our personal data being gathered by it?
How does ChatGPT's privacy policy work?
At first glance, ChatGPT's privacy policy does not raise any suspicions. The model gathers information from the user's account. The program also identifies data about the device from which the user accesses ChatGPT, such as its location or IP address. This is how most websites work, so the policy seems neither controversial nor unusual.
The controversy begins, however, with the fact that the program also gathers the information the user provides during the chat. This may become an issue, for example, when a user needs a document proofread and uploads it to the model. In that situation ChatGPT can access and save all the information in the document, such as the personal data of the user's clients, which makes it very easy to leak sensitive data by mistake. According to the privacy policy, before entering any personal data of a third party into the chat, the user is supposed to provide that person with adequate privacy notices. Moreover, the user should also obtain that person's consent to submit their information and be able to demonstrate that the data is being processed lawfully.
First GDPR case against ChatGPT
On 30 March 2023 the first allegations of breaching the GDPR (General Data Protection Regulation) reached OpenAI and its product. The Italian Data Protection Authority (the Garante) accused the company of various infringements, such as: failure to verify users' age, which might expose minors to inappropriate content; failure to provide the required transparency information about the processing of personal data collected by ChatGPT; and failure to provide a legal basis for processing personal data for training purposes. As a consequence, the Garante ordered OpenAI to temporarily stop processing personal data as an interim measure until the investigation is finished.
Legal and ethical questions
As ChatGPT is one of the first programs that is technologically advanced to this extent, it will be difficult to rely on any previous experience. Italy was the first EU member state to take legal action against OpenAI's ChatGPT regarding data protection and a potential breach of the GDPR. The questions that raise many controversies are: should users' personal data be used for the purpose of training the algorithms? Should a powerful company hold such valuable information? Can training an artificial intelligence algorithm be called a legitimate interest?
After Italy published its report on the case, many more EU countries decided to take a closer look at the data privacy of OpenAI's product. The countries that have increased their focus on the topic include the Netherlands, France and Spain, as well as a former EU member, the UK. This leads to the conclusion that many more cases and controversies regarding AI are yet to arise.
EU Policy towards AI
The European Union bodies are currently in the process of creating a set of rules regarding AI2. When the act is adopted, it will be the first legal act in the world regulating AI. The purpose of the act is to create obligations for users and providers of AI, determined by the risks that a program may bring. The rules distinguish levels of risk and prohibit systems that generate an unacceptable level of risk to people's safety. The proposal also describes a category of high-risk AI, which must comply with stated mandatory requirements and a conformity assessment. The act will also contain transparency obligations for certain AI systems. At the same time, the proposal provides measures that aim to support innovation and the development of AI systems without unnecessary disruption.
Footnotes
1. OECD (2019), Artificial Intelligence & Responsible Business Conduct, available at: https://mneguidelines.oecd.org/RBC-and-artificial-intelligence.pdf [accessed: 22-05-2023, 10:55].
2. EUROPEAN COMMISSION, Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS, Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106 (COD), available at: https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF [accessed: 22-05-2023, 13:18].
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.