This cross-sectional field of research enables analysis of how appropriate existing law (e.g., the GDPR) is for AI applications and what AI governance might look like in the future. Research focuses on issues such as data and privacy protection, among other human rights; transparency; the auditability of AI systems; accountability/liability and oversight/control; and the fight against bias and discrimination.


A fierce debate rages in Brussels over which AI systems should be considered "High Risk", while the systems covered by Annex II of the EU AI Act have attracted less attention. Here is a guide (with infographics) to the classification of ALL "High Risk" systems in the AI Act, as well as the corresponding conformity assessment procedures.
Data is the fuel of AI systems. Anonymisation has been presented as a panacea that protects personal data while enabling AI innovation. However, the growing efficiency of re-identification attacks on anonymised data raises a series of legal questions.
On November 23rd, 2022, an article in Le Parisien, a French newspaper, revealed that the French Government had dropped its project to deploy facial recognition to support security arrangements at the 2024 Paris Olympics. The question of whether to implement facial recognition systems during the Olympic Games is in fact part of a broader controversy dividing political leaders on whether AI-driven biometric systems should be used to monitor public places.
The use of facial recognition technologies for criminal investigation purposes has been in the spotlight for many years in France and in the European Union. In this article, accepted for publication in the European Review of Digital Administration & Law, T. Christakis & A. Lodie discuss a major decision issued last year by the French Conseil d'Etat.


On September 7th, 2023, the Court of Justice of the European Union (CJEU) upheld the decision of the General Court according to which the public can partially access documentation on the EU's emotion recognition project (iBorderCtrl), documentation which discusses the general reliability, ethics and legality of such technology.
The use of Artificial Intelligence (AI) systems can undoubtedly bring societal benefits, enhance economic growth, support innovation, and reframe global competitiveness among businesses and governments. At the same time, it is commonly acknowledged that certain characteristics of AI systems are concerning, especially as regards safety, security and the protection of fundamental rights. These concerns have been acknowledged by EU institutions (such as the European Parliament), and the same threats have been recognised at the US level (White House fact sheet, September 12, 2023).
The adoption of the negotiating position by the EP at the Strasbourg plenary session of June 14th, 2023, was preceded the previous day by a debate on the report from the leading joint committees on the matter: the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE). Judging from the debates, the principal desire of the EP concerning the AI Act is to make the European Union (EU) a leader in both AI regulation and AI innovation, as well as a body that takes into consideration the open letters published since OpenAI first released its groundbreaking AI tool. The debate's keynotes therefore revolved around three principal themes.
On May 17th, 2023, the European Data Protection Board (EDPB) published its final report on the use of facial recognition technologies (FRTs) by Law Enforcement Authorities (LEAs). The report opposes mass surveillance: according to the EDPB, 'the use of facial recognition by law enforcement agencies must be necessary, limited, and proportionate'.