On December 6th, 2022, EU Member States voted on a “general approach” to the upcoming Artificial Intelligence Act (AI Act). On the same day, 192 civil society organisations and individuals published an open letter calling on the EU to modify a number of aspects of the AI Act to protect migrants from the risks that AI systems may pose to their fundamental rights.
The signatories of the letter regret that the original AI Act proposal does not “adequately address and prevent the harms stemming from the use of AI in the migration context”. In particular, the European Digital Rights (EDRi) association has warned about the growing number of AI systems being developed, tested and deployed at European borders. Such systems perform a range of tasks, including profiling individuals, biometric identification and migration management. The NGO further claims that “the proposal does not prohibit some of the sharpest and most harmful uses of AI in migration control, despite the significant power imbalance that these systems exacerbate”.
The open letter recommends that Member States and EU institutions modify several provisions of the AI Act, and offers concrete suggestions for doing so.
The organisations involved ask the EU to “prohibit unacceptable uses of AI systems in the context of migration”. The letter lists as unacceptable uses predictive analytics systems, automated risk assessment and profiling systems, emotion and biometric categorisation, and remote biometric identification, among others. For instance, emotion categorisation technology has been tested through an EU-funded project called iBorderCtrl, which involved the deployment of an ‘Automatic Deception Detection System’.
In addition, the organisations consider that, while Annex III already enumerates certain ‘high-risk’ systems used in migration and border control, additional systems used in the migration context deserve to be included on the list. They call on the EU to add biometric identification systems, AI systems for border monitoring and surveillance (e.g. scanning drones or thermal cameras), predictive analytics systems, and certain other systems.
Furthermore, Article 83 of the AI Act provides that its provisions “shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX”, i.e. IT systems already used in migration such as Eurodac, the Schengen Information System and the European Travel Information and Authorisation System (ETIAS). According to the signatories of the open letter, this exception should be removed so that “Art. 83 applies the same compliance rules for all high-risk systems and protects the fundamental rights of every person, regardless of their migration status”.
Finally, the signatories call on EU institutions and Member States to amend the AI Act so that transparency and redress measures are put in place, ensuring that migrants are informed when they are subject to AI systems and can seek redress when they suffer harm as a result of the deployment of such systems. To achieve these goals, the signatories propose to “include the obligation on users of high-risk AI systems to conduct and publish a fundamental rights impact assessment (FRIA) (…) Ensure a requirement for authorities to register the use of high-risk and all public uses of AI for migration, asylum and border management in the EU database (…) include rights and redress mechanisms to enable people and groups to understand, seek explanation, complain and achieve remedies when AI systems violate their rights”.
The deployment of AI for migration control is yet another concern that will certainly be debated in the European Parliament, alongside other contentious issues such as the scope of the ban on remote biometric identification systems.