Legal and Ethical Aspects of ChatGPT: EU Parliament’s Amendment, French Experts’ Opinion on Ethical Issues and Other Useful Resources

Parliament suggests that AI Systems such as ChatGPT should be classified as “High Risk”

The legal and ethical risks posed by the use of ChatGPT, as well as the need to regulate the deployment of similar generative chatbots, are currently being debated across the world. The European Parliament is considering placing the use of generative AI models, such as ChatGPT, in a “high risk” category in its upcoming compromise text on the AI Act (Parliament Approach)[1], thereby intending to subject such tools to burdensome conformity assessment requirements.

According to the Parliament Approach’s compromise text that we have seen, an article 8a was added to Annex III for the purpose of classifying the use of generative AI models as “high risk”. With regard to tools such as ChatGPT, the text states that the following systems are to be considered high risk:

“AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, with the exception of AI systems used exclusively for content that undergoes human review and for the publication of which a natural or legal person is liable or holds editorial responsibility”.

Under this approach, the Parliament’s amendment would classify conversational agents similar to ChatGPT as “high risk”, along with tools such as Grammarly, Prose Media, Speechmate, AutoML, Bloomberg’s Brief Analyzer and others.

It should be noted that, as the debate around ChatGPT rages, the US National Institute of Standards and Technology (NIST) has issued an “Artificial Intelligence Risk Management Framework” to raise awareness of the unique risks posed by AI products, such as their vulnerability to undue influence and manipulation when the data their algorithms are trained on is tampered with.


Context

ChatGPT (Generative Pre-trained Transformer) was launched by OpenAI in November 2022. It is a trained language model that interacts with humans in a conversational way and relies on these conversations to further its learning. ChatGPT was trained on data available only up to 2021, which limits its knowledge base to that period. Its use has been contested around the globe due to the legal and ethical risks that it poses.


French Minister asks for Expert Opinion on Ethical Issues posed by Automated Text Generation Systems:

On February 20, 2023, the French Minister for Digital Affairs, Jean-Noël Barrot, asked the French National Committee for Digital Ethics (Comité National Pilote d’Éthique du Numérique – CNPEN) to render an Opinion on the ethical issues posed by automated text generation systems. This Opinion is due to be released on June 30, 2023. It is worth recalling that the CNPEN, of which AI-Regulation Director Theodore Christakis is a member, recently delivered an Opinion on the Ethical Issues posed by Conversational Agents, which examines a number of issues concerning the use of chatbots such as ChatGPT.


Other Useful Resources:

Please find below other useful resources that we sourced online regarding the legal and ethical risks associated with the use of ChatGPT.


  • Data privacy risks
  1. The Conversation – ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned: https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283
  2. Private Internet Access Blog – ChatGPT Is a Privacy Disaster Waiting To Happen: https://www.privateinternetaccess.com/blog/chatgpt-privacy/
  3. Avast Blog – Is ChatGPT’s use of people’s data even legal?: https://blog.avast.com/chatgpt-data-use-legal
  4. Infosecurity Magazine – Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance: https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/
  5. Fieldfisher – Unveiling the Crucial 5 GDPR Obstacles of ChatGPT That Can’t Be Ignored: https://www.fieldfisher.com/en/insights/unveiling-the-crucial-5-gdpr-obstacles-of-chatgpt


  • Liability risks
  1. Freeman Mathis & Gary LLP – ChatGPT and Coverage B: What Copyright Liability Exposures Could AI Users Face?: https://www.fmglaw.com/insurance/chatgpt-and-coverage-b-what-copyright-liability-exposures-could-ai-users-face/
  2. TechCrunch – Who’s liable for AI-generated lies?: https://techcrunch.com/2022/06/01/whos-liable-for-ai-generated-lies/
  3. Stephenson Harwood LLP – ChatGPT: Will it pass its probation?: https://www.shlegal.com/insights/chatgpt-will-it-pass-its-probation


  • Intellectual property risks
  1. Forbes – Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms?: https://www.forbes.com/sites/joemckendrick/2022/12/21/who-ultimately-owns-content-generated-by-chatgpt-and-other-ai-platforms/
  2. Falcon Rappaport & Berkman LLP – Exploring the Legal Minefield of ChatGPT and Intellectual Property Rights: https://frblaw.com/exploring-the-legal-minefield-of-chatgpt-and-intellectual-property-rights/
  3. JDSupra – Who Owns Your ChatGPT Output? (Hint: Probably Not You): https://www.jdsupra.com/legalnews/who-owns-your-chatgpt-output-hint-1719793/
  4. The Wall Street Journal – AI Tech Enables Industrial-Scale Intellectual-Property Theft, Say Critics: https://www.wsj.com/articles/ai-chatgpt-dall-e-microsoft-rutkowski-github-artificial-intelligence-11675466857
  5. Blue Seven Content – ChatGPT and Legal Marketing – Where Do We Go From Here?: https://bluesevencontent.com/2022/12/11/chatgpt-and-legal-marketing/


  • Ethical issues
  1. Markkula Center for Applied Ethics – ChatGPT and the Ethics of Deployment and Disclosure: https://www.scu.edu/ethics-spotlight/generative-ai-ethics/chatgpt-and-the-ethics-of-deployment-and-disclosure/
  2. arXiv – Exploring AI Ethics of ChatGPT: A Diagnostic Analysis: https://arxiv.org/abs/2301.12867
  3. Data Ethics – Testing ChatGPT’s Ethical Readiness: https://dataethics.eu/testing-chatgpts-ethical-readiness/





[1] The adoption of the Commission’s AI Act proposal follows the Ordinary Legislative Procedure, the standard decision-making procedure used in the European Union. Following the Commission’s proposal, the European Parliament and the Council of Ministers will either approve or amend the proposal; the text that the two co-legislators ultimately approve must be identical. On December 6, 2022, the Council was the first to adopt its amendments to the AI Act proposal (Council Approach), while the AI Act is still under consideration at the Parliament (Parliament Approach).
