With the rapid development of technology, artificial intelligence is becoming increasingly sophisticated. This trend has led States to be concerned about the possible consequences this technology may have on society. Ethical issues surrounding artificial intelligence have taken centre stage in the discourse, as demonstrated by UNESCO’s Recommendation on the Ethics of Artificial Intelligence. In order to shed some light on the ethical problems raised by such technological developments in relation to ‘chatbots’, the French National Pilot Committee for Digital Ethics (CNPEN) issued an opinion on the “ethical issues of conversational agents” under the aegis of the National Consultative Ethics Committee (CCNE). On 15 July 2019, the French Prime Minister commissioned the CCNE to “launch a probing investigation into the ethical questions of digital science, technologies, applications, innovations, and artificial intelligence”. To this end, the CNPEN was created in December 2019 and mandated to identify the “ethical issues of conversational agents, commonly known as ‘chatbots’”. The Committee brought together 27 professionals from various backgrounds, including computer scientists, philosophers, doctors, lawyers, and policy representatives.
Between July 2020 and January 2021, the French National Pilot Committee received ninety-six contributions to its consultation on the “ethical issues of conversational agents”. The Committee identified three main ethical issues regarding conversational agents: their status, the way in which they imitate language and emotions, and public awareness of their effectiveness and limitations. The opinion provides a series of recommendations and design principles, and poses research questions on the topic.
The Committee defines a ‘conversational agent’ (or ‘chatbot’) as “a machine that interacts with users in natural language orally or in writing”, which is usually “integrated into a multitask digital system or platform”. With new developments in machine learning, conversational agents are evolving and are able to engage with users in more effective ways (a good example being ‘transformers’: “neural networks that learn the most likely regularities from vast linguistic corpora without regard for the word order”). These developments raise ethical questions and pose new challenges regarding the use and deployment of such agents, notably in terms of responsibility. While some aspects are already regulated by the GDPR, others require a case-by-case ethical analysis of the possible long-term effects on society and its relationship with this technology.
Amongst the recommendations provided by the Committee, the need for transparent information for users stands out. When communicating with a conversational agent, users should be informed that they are conversing with a machine and of the possible bias that might result from anthropomorphisation. Some chatbots have the capacity to influence human behaviour; in such cases, it is crucial that the consent of users be obtained. This application is seen as morally problematic, and the Committee has highlighted the need to set strict limits on chatbots’ capacity to manipulate, regardless of the utility and context of their application. Furthermore, the Committee has emphasised that special attention should be given to conversational agents that communicate with children and vulnerable individuals. In addition, it has emphasised the need for specific regulation concerning ‘deadbots’ (“a conversational agent that purposely mimics the way a dead person speaks or writes”) and ‘guardian angels’ (designed to protect users’ personal data), while accentuating the need to analyse the long-term, secondary, or unintended effects of this technology.
In order to address the ethical issues raised by the design of chatbots, the Committee proposes ten design principles to be considered by the different actors involved in the creation of conversational agents. The principles seek to impose a series of obligations on developers: to comply with an “ethical design” by integrating human values during the design phase, and to develop technical solutions for potential ethical issues that might arise. Developers must also seek to avoid language bias and adapt conversational agents to cultural codes (including codes of emotional conduct) in different parts of the world. A number of design principles revolve around the need for transparency: chatbots must declare their purposes and features, especially ‘affective’ conversational agents (“systems with the ability to recognize, express, synthesise, and model human emotions”). It is also important for users to be able to understand the agent’s behaviour and for chatbots to be GDPR compliant. With regard to ‘affective computing technologies’, the Committee demands that developers limit the “spontaneous projection of emotions” of conversational agents and “respect the proportionality and adequacy between the intended purposes and the necessity of affective computing to achieve them”. On this issue, the Committee considers that developers must weigh carefully the chatbot’s capacity to detect human emotions and simulate empathy.
Finally, the Committee proposes eleven research questions, several of which centre on the possible long-term consequences of this technology for society, especially for minors and vulnerable individuals and for the organisation of labour, as well as its possible effects on human emotional behaviour, such as a reduced level of interaction with other humans, cognitive biases, and so on.
Contributions from citizens and stakeholders to the Committee have revealed the different levels of knowledge on the topic, and how a diversity of beliefs, traditions, and origins shapes the perception of chatbots. However, even though responses to the issue were polarised, the Committee also noted a wide acceptance of key principles such as transparency and respect for users’ autonomy. Overall, the report highlights the importance of conducting a case-by-case analysis of the different uses and applications of conversational agents, one which takes into account existing legal frameworks and human autonomy.
Source: https://www.ccne-ethique.fr/sites/default/files/cnpen3-ethical_issues_of_conversational_agents.pdf
SCJ