OpenAI's ChatGPT Under GDPR Investigation in Italy: Facing 30-Day Defense Deadline

Zach Anderson  Jan 30, 2024 11:19  UTC 03:19

The Italian Data Protection Authority, the Garante, has raised concerns that OpenAI's ChatGPT may violate the European Union's General Data Protection Regulation (GDPR). Following a multi-month investigation, the authority issued a formal notice to OpenAI alleging suspected breaches of EU privacy rules. OpenAI has been given 30 days to respond and present a defense against these allegations.

Previously, the Italian authority had ordered a temporary ban on ChatGPT's local data processing in Italy, citing issues such as the lack of a suitable legal basis for collecting and processing personal data to train ChatGPT's algorithms. Concerns about child safety and the AI tool's tendency to produce inaccurate information were also noted. OpenAI temporarily addressed some of these issues, but it now faces preliminary findings that its operations may be violating EU law. The core question is what legal basis OpenAI has for processing personal data to train its AI models, given that ChatGPT was developed using data scraped from the public internet.

OpenAI initially claimed "performance of a contract" as the legal basis for ChatGPT model training, but the Italian authority contested this. That leaves consent or legitimate interests as the only plausible legal bases. Obtaining consent from the vast number of individuals whose data has been processed seems impractical, making legitimate interests the primary candidate. That basis, however, requires OpenAI to allow data subjects to object to the processing, which poses practical challenges for the continuous operation of an AI chatbot.

In response to increasing regulatory risks in the EU, OpenAI is seeking to establish a physical base in Ireland, aiming to have GDPR compliance oversight led by Ireland's Data Protection Commission. This move is part of a broader effort to address concerns related to data protection across the EU. In addition to the Italian investigation, OpenAI is also under scrutiny in Poland following a complaint about inaccurate information produced by ChatGPT and OpenAI's response to the complainant.

The outcome of this investigation will likely have significant implications not only for ChatGPT but also for the broader landscape of AI applications and their adherence to data protection standards in the EU. As the situation unfolds, it highlights the challenges and complexities that innovative technologies like AI chatbots face in navigating the stringent data protection regulations in Europe.


Image source: Shutterstock

