On November 16th, 2021, the Future of Privacy Forum, in partnership with the Brussels Privacy Hub, organised the 2021 Brussels Privacy Symposium, entitled ‘The Age of AI Regulation: Global Strategic Directions’. The event gathered distinguished scholars, decision makers and AI professionals to discuss the EU’s progress on the AI Act, the different approaches to AI governance worldwide, and the lawfulness of the various ways in which AI is used.
At the conference, Professor Théodore Christakis, director of the AI Regulation Chair, was invited to participate in a panel entitled “Should certain uses of AI be banned?”. The symposium was one of the first opportunities to present the Chair’s current project, which aims to map the use of facial recognition in public places in Europe.
Following the symposium, Sebastião Barros Vale, Lee Matheson and Katerina Demetzou released a report on March 3rd, 2022, summarising the discussions held during the conference.
In his contribution to the report, Professor Christakis presented the broad outlines of this ongoing mapping project.
In particular, Professor Christakis referred to the French CNIL (Commission Nationale de l’Informatique et des Libertés) and called for clarification of the meaning of “Automated Facial Recognition” (AFR), arguing that a ‘use by use’ approach is needed to avoid confusion.
To achieve such a task, AI Regulation Chair researchers developed two main methodological tools:
- A classification of ‘Automated Facial Recognition’ (AFR) systems which identifies 5 technical functionalities and 12 applications of AFR.
- A questionnaire to be completed for each use case analysed.
Against this background, Professor Christakis underlined three issues:
Firstly, verification techniques, which involve only a one-to-one (1:1) comparison, do not seem to raise any insurmountable legal difficulties. For instance, the French CNIL did not rule out the use of the PARAFE system in airports.
Secondly, certain legal issues need to be clarified. In particular, Article 10 of the Law Enforcement Directive (LED) requires that a deployed system meet the ‘strict necessity’ criterion and that its deployment be authorised by a legal basis under domestic law. States deal with these requirements in different ways, hence the need for clarification on this specific point.
Finally, States and European Data Protection Authorities (DPAs) sometimes appear to interpret the exceptions to the GDPR’s prohibition on the processing of biometric data in different ways.
These various topics will be addressed in more detail in forthcoming reports produced by the AI Regulation research team.
To read the full report, click here.