The Italian data protection watchdog, the Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns.
To that end, it has ordered the company to stop processing user data immediately, stating that it intends to investigate whether the company is processing the data illegally in violation of the EU General Data Protection Regulation (GDPR).
“No information is provided to users and data subjects whose data is collected by Open AI,” Garante noted. “More importantly, there appears to be no legal basis for the massive collection and processing of personal data to ‘train’ the algorithms on which the platform relies.”
ChatGPT, which is estimated to have reached more than 100 million monthly active users since its release late last year, has not disclosed what data was used to train its newest large language model (LLM), GPT-4, or how it was trained.
Garante also pointed out the lack of an age verification system to prevent minors from accessing the service, which could potentially result in them receiving “inappropriate” responses. Google’s own chatbot, called Bard, is only open to users over 18 years of age.
In addition, the regulator raised questions about the accuracy of the information displayed by ChatGPT, while also highlighting a data breach the service experienced earlier this month, which exposed some users’ chat titles and payment-related information.
In response to the order, OpenAI has geo-blocked its generative AI chatbot, preventing users with Italian IP addresses from accessing it. It also said it is issuing refunds to ChatGPT Plus subscribers, in addition to pausing subscription renewals.
The San Francisco-based company further emphasized that it provides ChatGPT in compliance with the GDPR and other privacy laws. ChatGPT is already blocked in China, Iran, North Korea, and Russia.
In a statement shared with Reuters, OpenAI said it is actively working to “reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.”
OpenAI has 20 days to notify Garante of the steps it has taken to comply, or risk facing a fine of up to €20 million or 4% of total annual worldwide turnover, whichever is higher.
However, the ban is not expected to affect apps from other companies that use OpenAI technology to enhance their services, including Microsoft’s Bing search engine and its Copilot offering.
The development also comes as Europol warned that LLMs like ChatGPT are likely to help generate malicious code, facilitate fraud, and “offer criminals new opportunities, particularly for crimes involving social engineering, given their ability to respond to messages in context and adopt a particular writing style.”
This isn’t the first time an AI-focused company has drawn regulators’ attention. Last year, controversial facial recognition firm Clearview AI was fined by multiple European regulators for scraping users’ publicly available photos without consent in order to train its identity matching service.