On November 23rd, 2022, an article in Le Parisien, a French newspaper, revealed that the French Government had dropped its plan to deploy facial recognition to support security arrangements at the 2024 Paris Olympics. The question of whether to implement facial recognition systems during the Olympic Games is part of a broader debate that divides political leaders over whether AI-driven biometric systems should be used to monitor public places.
The use of facial recognition technologies for criminal investigation purposes has been in the spotlight for many years in France and in the European Union. In this article, accepted for publication in the European Review of Digital Administration & Law, T. Christakis & A. Lodie discuss a major decision issued last year by the French Conseil d’Etat.
AI-Regulation researchers propose a highly instructive comparative approach to the definitions of different categories of “data”, including “sensitive” and “biometric” data, as found in more than 20 international instruments. Karine Bannelier and Anaïs Trotry carried out a comparative analysis of all the relevant international instruments and compiled, in two Charts, the definitions appearing in them. You can download the Charts and read their first findings in this article.
On September 28th, 2022, the European Commission released two proposals aimed at regulating civil liability in relation to AI-enabled systems, drawing on the considerations in the Commission’s White Paper on the use of such systems: a revised version of the Defective Product Liability Directive (PLD) and a Directive adapting non-contractual civil liability rules to Artificial Intelligence (AI Liability Directive). Combined with the proposal of April 21st, 2021, Laying Down Harmonized Rules On Artificial Intelligence (AI Act), these proposals will adapt national liability frameworks to the digital age, the circular economy and global value chains.
This is the first ever detailed analysis of the most widespread way in which facial recognition is used in public (and private) spaces: to authorise access to a place or to a service. The 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and the general public, who will find here an accessible way to understand all these issues.
The French DPA, the CNIL, stressed that “the current debate on facial recognition is sometimes distorted by a poor grasp of this technology and how it works”. This 2nd of 6 Reports in our MAPFRE series provides a path to understanding, with a classification table that presents, in the most accessible way, the different facial-processing functionalities and applications used in public spaces.