The Chair announces the publication of an article by its research fellow, Dr. Theodoros Karathanasis, entitled “Biometric Data and Facial Recognition Technology in the EU: The Interplay Between Data Protection and Cybersecurity.”
Dr. Karathanasis’s publication follows his participation in the “Next Democratic Frontiers for Facial Recognition Technology” conference held in Florence on 29 September 2023. The event was co-organized by the STG’s Chair on Artificial Intelligence and Democracy together with the Dipartimento di Scienze Giuridiche of the University of Florence and the Centre for Cyber Law & Policy (CCLP).
Abstract: In the European Union (EU), where privacy, data protection and human rights are at the very heart of the European integration project, an important debate is under way about the “red lines” that regulators should set to prevent people’s freedoms from being endangered. One emblematic example lies in the regulation of the use of facial recognition technology (FRT). Drawing on the GDPR’s requirement that the controller and the processor implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk involved, the present article highlights the links between facial image data protection and the cybersecure deployment of FRTs. The results of this analysis point to a double-layered risk approach to the security of processed, stored and transmitted facial image biometric data in the EU, by means of the privacy and cybersecurity legal frameworks.
Summary: The EU debate on facial recognition technology (FRT) shows the need to define “red lines” to safeguard individual freedoms, as privacy, data protection and human rights are central to European integration. FRT encompasses systems ranging from simple face detection to more complex verification, identification, and categorisation of individuals. Its use is becoming more common in important areas such as banking (e.g., e-banking authentication), transport (e.g., ticketing), health (e.g., patient screening), and even elections (e.g., e-voting). The analysis suggests a two-pronged approach to the security of biometric facial image data, drawing upon both privacy and cybersecurity legal frameworks.
FRT’s significant technological capabilities raise concerns about the balance between personal data protection, mass surveillance, commercial interests and national security. The storage of facial recognition data, often in databases, is a particular worry because of its vulnerability to security breaches. Such breaches can lead to identity theft, stalking and harassment: hackers may access facial scans linked to other sensitive information, such as phone or banking details, compounding the impact of the breach. Beyond individual security, compromises of FRT systems can create conditions for mass surveillance and state overreach, chilling political participation and exacerbating social inequality. Security breaches are particularly likely during data transmission in online and mobile FRT, and during data storage for identification and authentication. It is therefore crucial to implement appropriate measures and ensure a risk-appropriate level of security.
In an effort to enhance the security of its digital infrastructure, the EU has recently introduced a series of updates to its cybersecurity legal framework. The large-scale processing of biometric data in law enforcement poses risks to democratic values, including civil liberties, privacy and human rights, given the potential for abuse and widespread surveillance. While the EU can regulate big data, national security and most law enforcement activities remain within the competence of individual Member States. The proposed Prüm II expansion, which could make approximately 60 million facial images accessible, significantly increases the type and amount of biometric data shared across borders, underlining the urgent need for high technical security standards and privacy safeguards.
Centralised biometric databases, such as those envisioned under Prüm II, are high-value targets for cyberattacks, risking identity theft or fraud if compromised. While Prüm II is a useful tool for the exchange of personal data, building on existing standards, the introduction of facial images calls for additional safeguards. Moreover, Member States often follow national rather than common European standards, which risks a fragmented approach to cybersecurity.
A continuing challenge in the interplay between the GDPR and NIS 2 is the lack of a clearly defined numerical threshold for “large-scale processing” of special categories of data. While the NIS 2 Directive shifted its criteria for covered entities from the number of users to factors such as the number of employees and annual turnover, the GDPR’s definition of “large-scale” remains qualitative, based on factors such as the number of data subjects, volume of data, duration, and geographical extent of processing. This ambiguity introduces compliance complexity, especially for entities deploying FRT, as it may be unclear whether a Data Protection Impact Assessment (DPIA) is required.
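The qualitative nature of these factors can be illustrated with a minimal sketch. The code below encodes the GDPR’s four qualitative criteria for “large-scale” processing as a simple screening heuristic; the numeric cut-offs are purely illustrative assumptions (the GDPR sets no such thresholds), and the function name and thresholds are hypothetical, not drawn from any regulatory text.

```python
from dataclasses import dataclass

@dataclass
class ProcessingProfile:
    data_subjects: int      # estimated number of individuals concerned
    data_volume_gb: float   # volume of biometric data processed
    duration_months: int    # duration/permanence of the processing
    regions_covered: int    # geographical extent of the processing

def likely_needs_dpia(p: ProcessingProfile) -> bool:
    """Screening heuristic: flag processing that plausibly counts as
    'large-scale' under the GDPR's qualitative factors.
    The numeric thresholds below are illustrative assumptions only."""
    factors_met = sum([
        p.data_subjects >= 10_000,
        p.data_volume_gb >= 100.0,
        p.duration_months >= 12,
        p.regions_covered >= 2,
    ])
    # Facial images are special-category data, so err on the side of
    # a DPIA when two or more qualitative factors point to large scale.
    return factors_met >= 2
```

Such a heuristic cannot replace a legal assessment, but it shows why the qualitative test is hard to operationalise: an entity must pick its own thresholds for each factor, and different choices yield different DPIA conclusions.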
There also appears to be a certain “hierarchy” between data privacy and cybersecurity administrative penalties. Where an infringement by an essential or important entity of the NIS 2 Directive’s cybersecurity risk-management or reporting obligations results in a personal data breach under the GDPR, the national Data Protection Authority (DPA) imposes the relevant sanction under the GDPR. Sanctions for personal data breaches will thus continue to be imposed in accordance with the GDPR, a welcome approach given the sensitivity of facial image data. It nevertheless raises questions about the separation of jurisdiction and the DPA’s capacity to conduct the cybersecurity assessments such breaches require.
Cite: Karathanasis, T. (2025). Biometric Data and Facial Recognition Technology in the EU. In: Menéndez González, N., Mobilio, G. (eds) Next Democratic Frontiers for Facial Recognition Technology (FRT). Law, Governance and Technology Series, vol 74. Springer, Cham. https://doi.org/10.1007/978-3-031-89794-8_6
These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the other members of the AI-Regulation Chair or any partner organizations.
This work has been partially supported by MIAI @ Grenoble Alpes, (ANR-19-P3IA-0003) and by the Interdisciplinary Project on Privacy (IPoP) commissioned by the Cybersecurity PEPR (ANR 22-PECY-0002 IPOP).