Italy has become the first Western country to block the advanced chatbot ChatGPT, its data-protection authority has announced, citing privacy concerns.
The model was created by the US start-up OpenAI, which is backed by Microsoft. The watchdog said it would block the service and investigate OpenAI with immediate effect.
Millions of people have used ChatGPT since it launched in November 2022. The chatbot can answer questions in natural, human-like language and mimic different writing styles, using the internet as its knowledge base. Microsoft has spent billions of dollars on it and added it to Bing last month.
There have been concerns over the potential dangers of AI, including threats to jobs and the spread of misinformation and bias. Elon Musk and other prominent figures have called for a pause on the development of this kind of AI system.
The Italian regulator will also investigate whether OpenAI complies with the General Data Protection Regulation (GDPR), which governs how personal data can be used, processed, and stored.
The watchdog said that on 20 March the app had suffered a data breach involving user conversations and payment information.
It said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
It also said that since there was no way to verify the age of users, the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.
Bard, the rival AI chatbot developed by Google, is currently available only to selected users over the age of 18, because of similar concerns.
The Italian data-protection authority has given OpenAI 20 days to address its concerns or face a fine of up to €20 million ($21.7m) or 4% of its annual global turnover.
The Irish Data Protection Commission said it is in contact with the Italian regulator and will coordinate with other EU data-protection authorities over the ban.
The UK’s independent data regulator, the Information Commissioner’s Office, has stated its support for AI development but will also challenge non-compliance with data protection laws.
Dan Morgan, of cybersecurity ratings provider SecurityScorecard, emphasised that businesses operating in Europe must comply with regulation and prioritise the protection of personal data.
BEUC, a consumer advocacy group, has urged EU and national authorities, including data protection watchdogs, to investigate AI chatbots such as ChatGPT, following a complaint filed in the US.
While the EU is working on the world's first legislation on AI, the AI Act, BEUC is concerned that it could take years before the law takes effect, leaving consumers at risk of harm from unregulated AI in the meantime.
Ursula Pachl, deputy director general of BEUC, has cautioned that AI is currently posing risks to society that are not being adequately addressed.
“There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them,” she said.
ChatGPT is already blocked in a number of countries, including China, Iran, North Korea and Russia.
OpenAI has not yet responded to the BBC’s request for comment.