This cross-cutting field of research analyses how well existing law (e.g., the GDPR) fits AI applications and what AI governance might look like in the future. Research focuses on issues such as data and privacy protection - among other human rights - transparency, the auditability of AI systems, accountability/liability and oversight/control, and the fight against bias and discrimination.


On September 28th, 2022, the European Commission released two proposals aimed at regulating civil liability in relation to AI-enabled systems, drawing on the considerations in the Commission’s White Paper on the use of such systems: a revised version of the Defective Product Liability Directive (PLD) and a Directive adapting non-contractual civil liability rules to Artificial Intelligence (AI Liability Directive). Combined with the proposal of April 21st, 2021, Laying Down Harmonized Rules On Artificial Intelligence (AI Act), these proposals will result in national liability frameworks being adapted to the digital age, the circular economy and global value chains.
This is the first ever detailed analysis of the most widespread way in which Facial Recognition is used in public (& private) spaces: to authorise access to a place or to a service. The 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and the general public, who will find here an accessible way to understand these issues.
The French DPA, CNIL, has stressed that “the current debate on facial recognition is sometimes distorted by a poor grasp of this technology and how it works”. This 2nd of 6 Reports in our MAPFRE series provides a path to understanding, with a classification table presenting, in the most accessible way, the different facial processing functionalities and applications used in public spaces.
How should the use of facial recognition in public spaces be regulated in Europe? This crucial debate has often been characterised by a lack of clarity and precision. Here is the first of 6 Reports from our major “MAPFRE” research project, a detailed independent study analysing the different ways in which FRT is being used and the related legal issues.


On November 8th, 2022, the Information Commissioner’s Office (the British DPA) published a document entitled ‘How to use AI and personal data appropriately and lawfully’, a guide for data controllers on using AI systems in accordance with the law and, in particular, with people’s fundamental rights. The publication also contains a ‘frequently asked questions’ section addressing certain specific issues that data controllers may have to deal with.
The Italian ‘Garante per la protezione dei dati personali’ (Italian data protection authority) published a press release on November 14th, 2022, in which it announced that it had opened two separate investigations into the use of ‘smart video systems’ by two Italian municipalities.
The ‘Commission Nationale de l’Informatique et des Libertés’ (CNIL – the French DPA) released its final decision on October 20th, 2022, sanctioning Clearview AI for its unlawful activity, which consisted of collecting images of millions of individuals from the open web without any legal basis under the GDPR for doing so.
On October 13th, 2022, the European Data Protection Supervisor (EDPS) published an Opinion entitled “Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law”. The independent supervisory authority welcomes the European Commission’s initiative to authorise negotiations on behalf of the EU regarding the Council of Europe’s (CoE) future Convention on Artificial Intelligence (AI).