Privacy Concerns About ChatGPT Prompt a Flood of Questions for Advocates

Privacy advocate on ChatGPT: "Lots of questions from people who are worried"

ChatGPT: The Challenge to Data Protection

ChatGPT, a chatbot built on a large language model (LLM), has become a topic of concern for data protection authorities because it processes enormous amounts of user data with little oversight. Only a few authorities, such as Italy’s, have taken action against ChatGPT; many others are still waiting for answers. In an interview with MIT Technology Review, Marit Hansen, the data protection officer for the German state of Schleswig-Holstein, discussed how data protection oversight of ChatGPT is coordinated, her assessment of the model, and whether the EU General Data Protection Regulation (GDPR) could halt ChatGPT’s operations.

Not Knowing the Sources of Data

LLMs pose a significant challenge to data protection authorities because the sources of the data used to train them are mostly undisclosed. Hansen said that data protection officers need to understand the data sources before making any judgments: they must determine whether the sources of the training data are legitimate and whether there is a legal basis for using the data.

Compliance with GDPR

Hansen questioned whether an LLM such as ChatGPT should even have been launched in Europe. The GDPR applies to the European market, and whenever personal data is processed, the processor must comply with the regulation. Hansen listed several requirements that any company offering its product in Europe must meet, including a legal basis for processing, data subject rights, data security, and a data protection impact assessment.

Responsibility of OpenAI and Authority’s Coordination

OpenAI has not yet established a presence in Europe, which raises the question of who is responsible for ensuring GDPR compliance. Since several data protection supervisory authorities may be involved, coordination is necessary to ensure that ChatGPT complies with the GDPR. Hansen said that the state data protection officers in Germany are coordinating with one another, and at the European level, to examine ChatGPT.

Italian Colleagues’ Ban on ChatGPT

The Italian data protection authority has already temporarily banned ChatGPT, citing concerns about legal basis, information obligations, data subject rights, and the protection of children. Hansen, however, believes further inquiries are needed to understand the model better, and the available information is continually evolving. Data protection authorities have therefore set deadlines for OpenAI to answer the lengthy lists of questions they have posed about ChatGPT.

Personal Data and Data Use

Hansen notes two distinct sets of data protection issues in connection with ChatGPT. First, while training an LLM without any personal data is theoretically possible, the training sets in practice contain personal information. Second, new personal data is collected through users’ interactions with ChatGPT, and many users disclose intimate details while using the model, making this a matter of concern for the data protection supervisory authorities.

Conclusion

ChatGPT’s potential impact on data protection has raised questions among supervisory authorities. The undisclosed sources of the training data and the lack of control over personal data collected through user interactions are the central issues. GDPR compliance, corporate responsibility, and coordination among supervisory authorities are critical to addressing the challenges ChatGPT poses.