On November 23rd, 2022, an article in Le Parisien, a French newspaper, revealed that the French Government had dropped its plan to deploy facial recognition to support security arrangements at the 2024 Paris Olympics. The debate over deploying facial recognition during the Olympic Games is part of a broader controversy dividing political leaders over whether AI-driven biometric systems should be used to monitor public places.
The use of facial recognition technologies for criminal investigation purposes has been under the spotlight for many years in France and in the European Union. In this article accepted for publication in the European Review of Digital Administration & Law, T. Christakis & A. Lodie discuss a major decision issued last year by the French Conseil d’Etat.
AI-Regulation researchers offer a highly instructive comparative analysis of how different categories of “data”, including “sensitive” and “biometric” data, are defined in more than 20 international instruments. Karine Bannelier and Anaïs Trotry examined all the relevant international instruments and compiled, in two Charts, the definitions appearing in them. You can download the Charts and read their first findings in this article.
On September 28th, 2022, the European Commission released two proposals aimed at regulating civil liability in relation to AI-enabled systems, drawing on the considerations set out in the Commission’s White Paper on the use of such systems: a revised version of the Product Liability Directive (PLD) and a Directive adapting non-contractual civil liability rules to Artificial Intelligence (the AI Liability Directive). Combined with the proposal of April 21st, 2021, Laying Down Harmonised Rules on Artificial Intelligence (AI Act), these texts will adapt national liability frameworks to the digital age, the circular economy and global value chains.
This is the first detailed analysis of the most widespread way in which facial recognition is used in public (and private) spaces: to authorise access to a place or a service. The 3rd Report in our #MAPFRE series should be of great interest to lawyers interested in data protection; AI ethics specialists; the private sector; data controllers; DPAs and the EDPB; policymakers; and the general public, who will find here an accessible way to understand these issues.
The French DPA, the CNIL, stressed that “the current debate on facial recognition is sometimes distorted by a poor grasp of this technology and how it works”. The 2nd of the 6 Reports in our MAPFRE series addresses this gap with a classification table presenting, in the most accessible way possible, the different facial processing functionalities and applications used in public spaces.
The purpose of the Convention is to ensure that during their lifecycle, AI systems fully comply with human rights, respect the functioning of democracy and observe the rule of law, regardless of whether these activities are undertaken by public or private actors. The design, development and application of AI systems used for purposes related to national defence are expressly excluded from the scope of this Convention. The negotiators seem to agree that such a Convention must be seen first and foremost as a broad framework which might be supplemented by further obligations in more specific fields.
On December 15th, 2022, the European Union adopted an interinstitutional declaration on digital rights and principles that will guide the EU’s ambition to be “digitally sovereign in an open and interconnected world, and to pursue digital policies that empower people and businesses to seize a human centred, sustainable and more prosperous digital future”.
On December 6th, 2022, EU Member States voted on a “general approach” to the upcoming Artificial Intelligence Act (AI Act). On the same day, 192 civil society organisations and individuals published an open letter calling on the EU to modify a number of aspects of the AI Act to protect migrants from the risks that AI systems may pose to their fundamental rights.
On November 8th, 2022, the Information Commissioner’s Office (British DPA) published a document entitled ‘How to use AI and personal data appropriately and lawfully’, which is a guide to how data controllers should use AI systems in accordance with the law and in particular with people’s fundamental rights. This publication also contains a ‘frequently asked questions’ section which addresses certain specific issues that data controllers may have to deal with.
The Italian ‘Garante per la protezione dei dati personali’ (Italian data protection authority) published a press release on November 14th, 2022, in which it announced that it had opened two separate investigations into the use of ‘smart video systems’ by two Italian municipalities.
The ‘Commission Nationale de l’Informatique et des Libertés’ (CNIL – the French DPA) released its final decision on October 20th, 2022, sanctioning Clearview AI for unlawfully collecting images of millions of individuals from the open web without any legal basis under the GDPR.
On October 13th, 2022, the European Data Protection Supervisor (EDPS) published an Opinion entitled “Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law”. This independent supervisory authority welcomes the initiative taken by the European Commission to authorise negotiations on behalf of the EU regarding the future Council of Europe’s (CoE) Convention on Artificial Intelligence (AI).