The EU is developing an algorithmic tool designed to “predict migration flows” and “detect risks of tensions related to migration” as part of its security programme. In response, a group of civil society organisations and individuals has published a joint letter highlighting the risks this technology poses in terms of criminalising migration and undermining human rights.
The ‘EUMigraTool’ (EMT) is part of the European ‘ITFLOWS’ project, funded under the same EU research programme as the controversial ‘iBorderCtrl’ project. The EMT relies on AI, in particular the analysis of media content, web news and social media, to predict migration flows and to “detect risks of tensions related to migration.”
The project’s objectives include creating models and developing predictions of migration into the EU, providing policy solutions, and identifying risks of tension and conflict between migrants and EU citizens. The EMT is being developed by 14 consortium partners from EU Member States, including universities and associations. According to the consortium, the EMT will be used to identify migrants’ needs prior to their arrival and to help NGOs “understand the human effort and material resources that need to be allocated” to a particular territory beforehand.
Civil society organisations fear that the predictive tool may be misused, outlining four main concerns:
- The risk that the EMT may be repurposed beyond its original humanitarian-support aim, for instance for border management and security, leading to the criminalisation of migration.
- The societal risks the tool poses to fundamental rights such as the right to asylum and the right to non-discrimination.
- The view that the project “offers a techno-solutionist answer to migration responses without addressing the structurally oppressive dimension of EU migration policies”.
- The possibility that predictive analytics systems such as the EMT will be banned under the draft AI Act currently being negotiated.
Furthermore, the 18 civil society organisations and individuals are calling on the ITFLOWS Consortium to withdraw the EMT and to refrain from developing systems that might fall under future European AI regulation. They also urge the Consortium to “reflect on the extent to which [the] project legitimates the use of technological tools in the securitisation of migration and the criminalisation of movement” and to oppose the deployment of technological tools that can be used in ways that violate fundamental rights and international human rights law.
It is unclear what will happen to a project whose system may eventually be banned, or classified as too high-risk, once the future AI Act comes into force. It remains to be seen how the AI Act will treat predictive analytics systems.