The European Consumer Organisation (BEUC) is calling ‘for EU and national authorities to launch an investigation into ChatGPT and similar chatbots’, following the filing of a complaint on 30 March 2023, on the other side of the Atlantic, by the Center for Artificial Intelligence and Digital Policy (CAIDP) in relation to GPT-4.
In the complaint addressed by the CAIDP to the Federal Trade Commission (FTC), OpenAI is accused of having developed a tool that is ‘biased, deceptive, and a risk to privacy and public safety’ and therefore, of having failed to comply with the US regulator’s guidelines calling for all AI applications to be transparent, fair and easy to explain. ‘OpenAI’s product GPT-4 satisfies none of these requirements’, the CAIDP said.
The CAIDP is therefore asking the FTC to open an investigation into OpenAI. At the same time, it proposes suspending further deployment of commercial products based on OpenAI’s language models until the company complies with existing regulations on artificial intelligence. In addition, the CAIDP identifies a dozen major risks acknowledged in statements included in OpenAI’s own technical description of the tool (e.g., cybersecurity risks). With regard to these potential hazards, the CAIDP urges the FTC to require independent assessments of GPT products both prior to future deployment and throughout the GPT AI lifecycle.
On the same day, the Deputy Director General of the BEUC, Ursula Pachl, declared to the press that ‘for all the benefits AI can bring to our society, we are currently not protected enough from the harm it can cause people. In only a few months, we have seen a massive take-up of ChatGPT and this is only the beginning. Waiting for the AI Act to be passed and to take effect, which will happen years from now, is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people’.
This statement did not fall on deaf ears. The Italian privacy regulator, Garante Per La Protezione Dei Dati Personali, ordered that ChatGPT be banned, in response to a report on 20 March of a data breach affecting ChatGPT users’ conversations and information in relation to payments by subscribers to the service. The designated representative of OpenAI in the European Economic Area has been asked to ‘notify the Italian SA within 20 days of the measures implemented to comply with the order, otherwise a fine of up to EUR 20 million or 4% of the total worldwide annual turnover may be imposed’.
By blocking access to the ChatGPT conversational agent from Friday 31 March, Italy is seen as a pioneer in the European Union. Other countries, such as Belgium, Germany and Ireland, could soon follow in Italy’s footsteps.
This move signals the importance of regulatory compliance for companies such as OpenAI in the EU. As regards the ongoing legislative process in Brussels, which will result in the adoption of a Regulation laying down harmonised rules on artificial intelligence (the AI Act), the European Parliament has also suggested that AI systems such as ChatGPT should be classified as ‘high risk’.