Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 1

By Theodore Christakis (project leader), Karine Bannelier, Claude Castelluccia, Daniel Le Métayer.

How should the use of facial recognition in public spaces in Europe be regulated? This crucial debate has often been characterised by a lack of clarity and precision. Here is the first of six reports from our “MAPFRE” research project, a detailed independent study analysing the different ways in which facial recognition technologies (FRT) are being used and the related legal issues. It presents the political landscape, dives into definitions of key terms and explains the project’s main objectives and methodological tools, which led to the selection – and detailed study – of 25 representative cases.

Regulating the use of facial recognition and face analysis in public spaces is undoubtedly one of the most pressing issues in the regulation of artificial intelligence in democratic societies today. There is an important debate going on worldwide about the “red lines” that regulators should establish in order to prevent people’s freedoms from being endangered as a result of the use of FRT. In Europe especially, where privacy, data protection and human rights lie at the very heart of the European integration project, this debate is more necessary and pressing than ever. The importance of this issue is reflected in the ongoing legislative work that has followed the European Commission’s introduction, in April 2021, of the draft AI Regulation, which includes several important proposals to regulate the use of facial recognition.

Curiously, though, the debate about these fundamental questions is taking place in the absence of a thorough assessment of how existing European law applies to these issues. Furthermore, the debate in Europe is also characterised by a high level of imprecision. Journalists, activists and politicians sometimes have a tendency to treat “facial recognition” as a single monolithic bloc, lumping its different functionalities and uses together. In contrast, in an important Opinion published in 2019, the French DPA, the CNIL, stressed the importance of clarity and precision in fostering the conditions necessary for an informed and useful debate. “Behind the catch-all term, there are multiple use cases,” said the CNIL, adding that “in this context, a use-by-use approach must be applied”.

This is precisely the main objective of the “MAPping the use of Facial Recognition in public spaces in Europe” (MAPFRE) project. Our intention is to offer a detailed independent study that separately presents and analyses the different categories of FRT use in publicly accessible places in the European Union and the UK. To this end, the project will publish a series of reports covering:

• the general context and objectives of the project as well as an analysis of the problem of definitions (Part 1);

• a detailed explanation of the different facial processing functionalities and applications in public spaces in Europe using a classification table, illustrations and charts (Part 2);

• a first ever detailed report on the use of facial recognition for authorisation purposes in public spaces in Europe (Part 3);

• a report which focuses on the important issue of the use of FRT in criminal investigations (Part 4);

• a deep dive into the equally important issue of large-scale face matching/identification (what the draft AI Regulation calls “real-time remote biometric identification”) (Part 5);

• and, finally, a report which discusses the use of “face analysis” in public spaces (which remains marginal in Europe but is likely to develop in the future) and which also presents the general perspectives and recommendations of the MAPFRE project (Part 6).

At the end of this project, we will also present an analysis of “25 selected cases” illustrating the different categories in our classification table, as well as briefer analyses of other cases.

The current “Report 1” presents the major positions in the debate surrounding the use of FRT, as well as the preliminary positions adopted within the Council of the EU and the European Parliament during the ongoing legislative process concerning the draft AI Regulation.

It then dives into the important issue of definitions. Our study shows that the existing definition of “biometric data” in the GDPR and the LED is problematic and confusing. This has led some actors to propose amending it in the draft AI Act. However, the consequences of such an amendment could be significant, as it is difficult to imagine how the definition of “biometric data” in the GDPR and the LED could differ from that in the AI Act. Other stakeholders, especially the Rapporteurs of the European Parliament, have instead proposed creating an entirely new category called “biometrics-based data”. While the Rapporteurs’ intentions are understandable, the creation of a new category so similar to the original one might create further confusion in this field.

Following this important discussion, we explain the scope of our study. We cover the use of both “facial recognition” and “face analysis” in public spaces (and we explain the difference between the two terms). Drawing on the draft EU AI Regulation, we also define how we use the term “public spaces”. Finally, with regard to the territorial scope of our study, we explain why we have decided to include cases that originate not only from EU Member States but also from the UK.

Finally, we explain the methodological tools that we have used. The first tool that we have developed is a “Classification Table”, which illustrates the uses of facial recognition/analysis in public spaces. This table, to be published in “Part 2” of our MAPFRE series, aims to present the different facial processing functionalities and applications used in public spaces in the most accurate and accessible way. The second methodological tool that we have developed is a detailed analytical framework which asks a number of key questions. The template for this analytical framework is presented in an annex to this paper. In summary, it involves three series of questions:

• a first series on the facts and technical details of the use case;

• a second series on human rights and the principles relating to the processing of personal data;

• and a third series which seeks to identify whether any additional guarantees were offered by the data controller, focusing on issues such as accountability and transparency, whether a Data Protection Impact Assessment (DPIA) was conducted, and whether there was an evaluation of the effectiveness of the system.

We have applied this analytical framework to analyse 25 interesting use cases in detail, covering the various functionalities and applications found in our classification table. Aside from these 25 “selected” case studies, which we will publish at the end of the project, we have also extensively analysed several other important cases of FRT deployment in public spaces in Europe.

We hope that our study will be useful not only to policy-makers, stakeholders, scholars and citizens interested in the issue of facial recognition/analysis, but also to anyone interested in how major human rights and data protection principles, such as the principle of lawfulness, the principles of necessity and proportionality or other principles relating to the processing of personal data, are interpreted. Indeed, during our research into how facial recognition systems are deployed in Europe, we found a treasure trove of information that includes documents produced by data controllers, legal challenges brought by civil society, positions of DPAs, judgments of national courts, articles published by scholars and journalists, and other material. We expect that all of this material will be of great interest not only in terms of the regulation of facial recognition, but also in terms of understanding how the GDPR, the LED and European human rights law apply to a number of important fields.

To download the full report, please click here.

The second report entitled “Mapping the Use of Facial Recognition in Public Spaces in Europe – Part 2: Classification” will be published on our website on May 17, 2022.
