EU iBorderCtrl: When Commercial Interests Outweigh the Public Interest

On 7 September 2023, the Court of Justice of the European Union (CJEU) upheld the General Court's judgment granting the public only partial access to documentation on the EU's emotion recognition project (iBorderCtrl), documentation which discusses the general reliability, ethics and legality of such technology.

On 5 November 2018, Patrick Breyer (Verts/ALE), a member of the European Parliament, requested access to the documents relating to the authorisation of the iBorderCtrl project, as well as those drawn up in the course of that project, held by the European Commission.

Funded by the European Commission since 2016, the iBorderCtrl project involves the use of an AI-powered “lie detector”. Its aim is to automate the security process at the EU’s external borders by replacing border guards with artificial intelligence. To achieve this, the AI system asks the traveller a number of questions (name, country of origin, length of stay, reason for travel, luggage contents, etc.) while they are positioned in front of a camera, and determines the veracity of the answers from the traveller’s facial expressions. The system then delivers a score indicating whether the person is potentially dangerous. If so, a more in-depth check is carried out by a human border guard.
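Based only on the public descriptions above, the screening pipeline can be sketched roughly as follows. This is a minimal illustration, not the actual iBorderCtrl implementation (which has never been made public, a point at the heart of this dispute); every name, signature and threshold in it is hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch only: iBorderCtrl's real implementation is not public.
# All names, signatures and thresholds here are hypothetical.

@dataclass
class Answer:
    question: str        # e.g. "What is the reason for your travel?"
    frames: list[bytes]  # camera frames captured while the traveller answers

def deception_score(answer: Answer) -> float:
    """Hypothetical stand-in for the facial-expression classifier.

    A real system would run a trained model over the frames; this placeholder
    only fixes the interface: 0.0 = credible, 1.0 = deceptive.
    """
    return 0.0  # placeholder value

RISK_THRESHOLD = 0.5  # hypothetical cut-off

def screen_traveller(answers: list[Answer]) -> str:
    """Aggregate per-answer scores into a single screening decision."""
    if not answers:
        return "proceed"
    overall = sum(deception_score(a) for a in answers) / len(answers)
    # Per the project's public descriptions, travellers flagged as potentially
    # dangerous are referred to an in-depth check by a human border guard.
    return "refer_to_human_check" if overall >= RISK_THRESHOLD else "proceed"
```

Notably, the part of such a system that the litigation turns on is invisible in this sketch: how the scoring model is built and validated, which is precisely the information withheld on commercial-interest grounds.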

Tested in Hungary, Greece and Lithuania in 2018, the AI-powered “lie detector” drew strong criticism because of the serious risks its deployment poses to people’s fundamental rights. The European Research Executive Agency (REA), the EU agency in charge of the iBorderCtrl project, denied Patrick Breyer full access to documents such as D 1.1 (Ethics advisor’s first report) and D 8.5 (Periodic Progress Report 2), on the ground that disclosure would reportedly undermine the commercial interests of the consortium, including its intellectual property rights. Breyer filed a lawsuit on 15 March 2019 seeking “the release of classified documents on the ethical justifiability and legality of the technology”.

The General Court delivered the first judgment on 15 December 2021 (Case T-158/19), annulling the REA’s decision of 17 January 2019 (ARES (2019) 266593), first, in so far as the REA had failed to ‘decide on Mr Patrick Breyer’s application for access to the documents relating to authorisation of the iBorderCtrl project and, secondly, in so far as the REA refused to grant full access to document D 1.3, partial access to documents D 1.1, D 1.2, D 2.1, D 2.2 and D 2.3, and more extensive access to documents D 3.1, D 7.3 and D 7.8’, in so far as those documents contain information not covered by the exception for the protection of commercial interests (Article 4(2) of Regulation (EC) No 1049/2001).

Dissatisfied with this decision, MEP Patrick Breyer (Verts/ALE) filed an appeal on 25 February 2022 (Case C-135/22 P), claiming that ‘the public interest in disclosure outweighs private commercial interests’ and that transparency should apply from the beginning of the research phase.

In its subsequent decision of 7 September 2023, the CJEU confirmed the General Court’s judgment, ruling that “the public interest (…), which in reality concerned a possible future deployment under real conditions of systems based on techniques and technologies developed within the framework of that project, would be satisfied by the dissemination of the results (…)” (point 108). This holds even though the CJEU recognises that, although the participants in the iBorderCtrl project are obliged to respect people’s fundamental rights, this is no ground to assume that they will automatically fulfil that obligation (point 106).

This issue raises an interesting question about transparency in the development of AI systems during their research phase. The Commission’s proposal for an EU AI Act classifies as high-risk those AI systems intended to be used by competent public authorities as polygraphs or similar tools, or to detect the emotional state of a natural person. Article 52(2) of the AI Act proposal states that “users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto”, an approach that aligns with the position of the CJEU on the ‘dissemination of the results’. However, “this obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences”. It is therefore unclear whether persons exposed to an AI-powered ‘lie detector’, the purpose of which is to detect criminal behaviour, will be informed that the system is being used.

T. Karathanasis
