Italian DPA Clamps Down on ‘Replika’, an AI Conversational Agent

On February 2nd, 2023, the Italian Data Protection Authority (Garante per la Protezione dei Dati Personali) urgently ordered a temporary limitation “on the processing of personal data relating to users in the Italian territory as performed by Luka Inc., the US-based developer and operator of Replika, in its capacity as controller of the processing of personal data that is carried out via the said app”.

Replika is a generative AI chatbot application, available via iOS, that is “equipped with a text and voice interface generating a ‘virtual friend’ users can configure as a friend, partner or mentor”. It adapts its personality to that of the user on the basis of the information provided to it, which may concern the user’s lifestyle, temperament, family or even friends.

According to the DPA’s decision, Replika’s claimed capability to improve users’ emotional well-being, help them understand their thoughts, and alleviate anxiety through stress management, socialisation and the search for love entails “increased risks to individuals who have not yet grown up or else are emotionally vulnerable”.

The absence of any age verification procedure during account creation, the lack of a blocking mechanism when a user explicitly states that they are underage, the failure of the privacy policy to provide information on the use of minors’ personal data, and the difficulty of determining the legal basis for the chatbot’s processing activity led the Italian DPA to conclude that “the processing of personal data relating to [Replika] users, in particular underage users,” is in breach of Articles 5, 6, 8, 9, 13 and 25 of the GDPR.
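The decision itself contains no technical specification of the missing safeguards. Purely as an illustrative sketch, the Python below shows what a minimal date-of-birth gate at signup and an in-conversation self-declaration check might look like; all function names, the keyword list and the age threshold of 18 are assumptions made for illustration, not details drawn from Replika or from the Garante’s decision.

```python
from datetime import date

ADULT_AGE = 18  # assumed threshold; the decision does not prescribe one


def age_from_dob(dob: date, today: date | None = None) -> int:
    """Age in whole years from a declared date of birth."""
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


def may_create_account(dob: date) -> bool:
    """Signup gate: refuse account creation for declared minors."""
    return age_from_dob(dob) >= ADULT_AGE


def is_self_declared_minor(text: str) -> bool:
    """In-conversation check: flag messages in which the user explicitly
    states they are underage, so the session can be blocked.
    The phrase list is a hypothetical, simplified example."""
    phrases = ("i am a minor", "i'm a minor", "i am under 18", "i'm under 18")
    lowered = text.lower()
    return any(p in lowered for p in phrases)
```

A real deployment would of course need more than a self-declared date of birth, which is precisely the weakness of so-called declarative age checks; the sketch only marks where the DPA found the controls entirely absent.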

Beyond the breaches of GDPR provisions on the processing of personal data, another point worth highlighting is the “comments by users pointing to the sexually inappropriate contents that are provided by the Replika chatbot”. This issue raises broader concerns about the use of conversational AI agents by underage users, notably because of the risk of bias in training data. The French National Pilot Committee for Digital Ethics (Comité National Pilote d’Éthique du Numérique – CNPEN) highlights the ethical risks of conversational agents in its recently delivered Opinion n°3 on the Ethical Issues of Conversational Agents, according to which “bias in training data is a major source of ethical conflicts, particularly through ethnic, cultural, or gender discrimination”.
