-
21/06/2024
On June 6, 2024, the European Center for Digital Rights (noyb) filed complaints with 11 European Data Protection Authorities (Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain) about Meta’s intention to change its privacy policy regarding, among other things, the use of personal data to train its current and future Artificial Intelligence (AI) technologies.
-
15/06/2024
On June 10th, 2024, the French Data Protection Authority (CNIL) opened a public consultation on a second series of seven practical information sheets seeking to provide legal certainty for developers of AI systems by anticipating how the EU AI Act and the GDPR will interact.
-
31/05/2024
This article delineates the AI incident notification rules in the AI Act, outlining its two-stage incident notification procedure. Key issues include the challenge of uniformly assessing the threshold for serious incidents, particularly for high-risk AI systems.
-
27/02/2024
The purpose of this article is to explore the existing data portability rights under EU law and to assess the potential gaps among the GDPR, the DMA and the Data Act in light of the development of autonomous AI agents.
-
21/05/2023
On May 16th, 2023, the French Data Protection Authority – the Commission Nationale de l’Informatique et des Libertés (CNIL) – published an action plan aimed at ensuring respect for the privacy of individuals in relation to Artificial Intelligence (AI) systems, and more specifically generative AI (e.g. Midjourney and ChatGPT from OpenAI). This action plan follows on from the CNIL’s first global approach to these new tools, adopted in 2017.
-
08/04/2023
The European Consumer Organisation (BEUC) is calling ‘for EU and national authorities to launch an investigation into ChatGPT and similar chatbots’, following the complaint filed on March 30th, 2023, on the other side of the Atlantic by the Center for Artificial Intelligence and Digital Policy (CAIDP) concerning GPT-4.
-
23/02/2023
Are you interested in the societal impacts and major legal issues posed by the development of Artificial Intelligence and new technologies, including the Metaverse? Do you have a PhD in legal studies (preferably digital law, intellectual property law or European/International/Human Rights law)? Are you ready to dive into issues concerning the protection of personal data and privacy, freedom of expression and other human rights in the era of AI? Do you have an open and curious mind?
-
07/02/2023
On December 16, 2022, Katia Bouslimani successfully defended her PhD thesis, entitled “Consent in the General Data Protection Regulation (GDPR)”, before a jury composed of Professors Brunessen Bertrand, Gloria González Fuster, Celia Zolynski, Peter Swire and Jean-Michel Bruguière, together with the thesis supervisors, Karine Bannelier and Theodore Christakis.
-
06/02/2023
Data is the fuel of AI systems. Anonymisation has been presented as a panacea for protecting personal data while enabling AI innovation. However, the growing efficiency of re-identification attacks on anonymised data raises a series of legal questions.
-
19/08/2022
In response to the proliferation of “intelligent” video devices in public spaces, the French Data Protection Authority (CNIL) launched a public consultation on its draft position on the conditions for deploying so-called “smart” cameras in public spaces. Following several months of consideration, and various contributions from public and private actors, the Commission published its opinion in July 2022.
-
08/07/2022
In June 2022, the Ada Lovelace Institute published an ‘Independent legal review of the governance of biometric data in England and Wales’, written by Matthew Ryder. The review aims to address the current legal uncertainty concerning the collection, use and processing of biometric data in England and Wales, and puts forward 10 recommendations to improve both the legal framework and the governance of biometrics.
-
23/05/2022
This is the first ever detailed analysis of the most widespread way in which Facial Recognition is used in public (and private) spaces: to authorise access to a place or to a service. The 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and the general public, who will find here an accessible way to understand all these issues.