Legal and Ethical Aspects of ChatGPT: EU Parliament’s Amendment, French Experts’ Opinion on Ethical issues and Other Useful Resources

Parliament suggests that AI Systems such as ChatGPT should be classified as “High Risk”

The legal and ethical risks posed by the use of ChatGPT, as well as the need to regulate the deployment of similar generative chatbots, are currently being debated across the world. The European Parliament is considering placing generative AI models such as ChatGPT in a “high risk” category in its upcoming compromise text on the AI Act (Parliament Approach) [1], thereby intending to subject such tools to burdensome conformity assessment requirements.

According to the Parliament Approach’s compromise text that we have seen, an article 8a was added to Annex III for the purpose of classifying the use of generative AI models as “high risk”. With regard to the use of ChatGPT, the text states that the following systems are to be considered high risk:

“AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, with the exception of AI systems used exclusively for content that undergoes human review and for the publication of which a natural or legal person is liable or holds editorial responsibility”.

Under this approach, the Parliament’s amendment would classify as “high risk” not only conversational agents similar to ChatGPT, but also tools such as Grammarly, Prose Media, Speechmate, AutoML, Bloomberg’s Brief Analyzer and others.

It should be noted that, as the debate around ChatGPT rages, the US National Institute of Standards and Technology (NIST) has issued an “Artificial Intelligence Risk Management Framework” to raise awareness of the unique risks posed by AI products, such as their vulnerability to undue influence and manipulation when the data on which their algorithms are trained is tampered with.


ChatGPT (Generative Pre-trained Transformer) was launched by OpenAI in November 2022. It is a trained language model that interacts with humans in a conversational way and relies on these conversations to further its learning. ChatGPT was trained on data up to 2021, limiting its knowledge base to that period. Its use has been contested around the globe due to the legal and ethical risks that it poses.

French Minister Asks for Expert Opinion on Ethical Issues Posed by Automated Text Generation Systems:

On February 20, 2023, the French Minister for Digital Affairs, Jean-Noël Barrot, asked the French National Committee for Digital Ethics (Comité National Pilote d’Éthique du Numérique – CNPEN) to render an Opinion on the ethical issues posed by automated text generation systems. This Opinion is due to be released on June 30, 2023. It is worth recalling that the CNPEN, of which AI-Regulation Director Theodore Christakis is a member, recently delivered an Opinion on the Ethical Issues posed by Conversational Agents, which examines a number of issues concerning the use of chatbots such as ChatGPT.

Other Useful Resources:

Please find below other useful resources that we sourced online regarding the legal and ethical risks associated with the use of ChatGPT.

  • Data Privacy risks
  1. The Conversation – ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned
  2. Private Internet Access Blog – ChatGPT Is a Privacy Disaster Waiting To Happen
  3. Avast Blog – Is ChatGPT’s use of people’s data even legal?
  4. Infosecurity Magazine – Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance
  5. Fieldfisher – Unveiling the Crucial 5 GDPR Obstacles of ChatGPT That Can’t Be Ignored

  • Liability risks
  1. Freeman Mathis & Gary LLP – ChatGPT and Coverage B: What Copyright Liability Exposures Could AI Users Face?
  2. TechCrunch – Who’s liable for AI-generated lies?
  3. Stephenson Harwood LLP – ChatGPT: Will it pass its probation?

  • Intellectual property risks
  1. Forbes – Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms?
  2. Falcon Rappaport & Berkman LLP – Exploring the Legal Minefield of ChatGPT and Intellectual Property Rights
  3. JDSupra – Who Owns Your ChatGPT Output? (Hint: Probably Not You)
  4. The Wall Street Journal – AI Tech Enables Industrial-Scale Intellectual-Property Theft, Say Critics
  5. Blues Event Content – ChatGPT and Legal Marketing – Where Do We Go From Here?

  • Ethical issues
  1. Markkula Center for Applied Ethics – ChatGPT and the Ethics of Deployment and Disclosure
  2. Cornell University – Exploring AI Ethics of ChatGPT: A Diagnostic Analysis
  3. Data Ethics – Testing ChatGPT’s Ethical Readiness


[1] The adoption of the Commission’s AI Act proposal follows the Ordinary Legislative Procedure, the standard decision-making procedure in the European Union. Following the Commission’s proposal, the European Parliament and the Council of Ministers will either approve or amend the proposal; the text that the two co-legislators ultimately adopt must be identical. On December 6, 2022, the Council adopted its amendments to the AI Act proposal (Council Approach), while the AI Act is still under consideration at the Parliament (Parliament Approach).
