AI and Data Protection: The CNIL develops an Action Plan

On May 16th, 2023, the French data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), published an action plan aimed at ensuring that Artificial Intelligence (AI) systems, and generative AI in particular (e.g. Midjourney and ChatGPT from the company OpenAI), respect people’s privacy. This action plan follows the CNIL’s first global approach to these new tools, published in 2017.

In this new publication, the CNIL first proposes a definition of generative AI: “Generative artificial intelligence is a system capable of creating text, images or other content (music, video, voice, etc.) based on an instruction from a human user”. This definition is in line with that of the European Parliament’s compromise text of May 11th, 2023, on the AI Act.

The CNIL states that current systems can generate original content from training data, and that their performance is now approaching that of humans thanks to the abundance of data used in their deep learning. Nevertheless, to obtain the desired results, these systems require the user to formulate his or her requests clearly. Expertise is therefore emerging around the way in which users phrase their requests, the so-called “prompt engineering”.
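To make the notion of prompt engineering concrete, the short Python sketch below sends the same question to a generative AI model twice, once as a vague request and once as a clearly formulated instruction, so the difference in output can be compared. It is only an illustrative sketch: the use of the OpenAI Python client and the model name "gpt-4o-mini" are assumptions made for the example, not details taken from the CNIL’s publication.

```python
# Minimal illustration of "prompt engineering": the same question asked vaguely
# and then with a clearly formulated instruction.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about the GDPR."
engineered_prompt = (
    "In three bullet points, summarise the obligations the GDPR places on a "
    "company that uses customer data to train a generative AI model, citing "
    "the relevant articles."
)

for prompt in (vague_prompt, engineered_prompt):
    # Send each prompt as a single user message and print the model's reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Prompt: {prompt}")
    print(response.choices[0].message.content)
```

The more precise second prompt constrains the format, scope and expected sources of the answer, which is the kind of user expertise the CNIL refers to.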

In an anticipatory approach, the CNIL and its AI department are working on the development of preventive regulation for these tools. The action plan sets out four objectives to ensure effective regulation that protects the privacy of European citizens:

Understand how AI systems function and their impact on individuals. Under this objective, the CNIL outlines the different ways in which data placed at risk by AI can be protected, for example the protection of data transmitted by users, or the fairness and transparency of the data used to train AI.

Provide a legal framework for the development of privacy-friendly AI. The CNIL intends to clarify the legal framework for the development of AI and, in particular, to explain how the GDPR applies to the training of AI systems.

Federate and support innovative actors in the AI ecosystem in France and Europe. The aim here is to promote actors involved in AI development who respect French and European standards, and to provide this support through dialogue with research teams and companies. It should be stressed that the CNIL has already launched a specific support program to this effect for providers of ‘augmented’ video surveillance, as the French government planned to use such systems to support security arrangements at the 2024 Olympic and Paralympic Games; however, these plans were abandoned. See the AI Chair’s article on this matter.

Audit and control AI systems with a view to protecting people. The Commission wants to ensure that individual rights and freedoms are respected. It therefore aims to develop means of auditing AI systems in order to verify that “augmented” video surveillance is used in a way that respects individuals, that the use of AI in the fight against fraud is regulated effectively, and that complaints made to the CNIL about AI systems (e.g., against OpenAI) are addressed satisfactorily. Finally, the CNIL plans to continue its advisory and analytical work on generative AI and augmented cameras for the rest of the year.

HT
