EU Fundamental Rights Agency report on AI algorithm biases

On December 8th, 2022, the European Union Agency for Fundamental Rights (FRA) published a report on algorithmic bias, focusing in particular on algorithms used for predictive policing and offensive speech detection.

Although the draft AI Act is currently being discussed by the EU Council, the EU Fundamental Rights Agency has emphasised the need to protect the fundamental rights of EU citizens when AI-based systems are deployed, since such systems can lead to discrimination. The report examines two main situations in which discriminatory bias can arise: bias in fully automated predictive policing algorithms in relation to neighbourhood crime, and ethnic and gender bias in offensive speech detection.

One of the examples cited in the report is the 2020 Dutch childcare benefits scandal, in which an algorithm used by the Dutch tax authorities led to more than 26,000 parents being wrongly accused of fraud in relation to their childcare benefit applications. The Dutch Data Protection Authority concluded that the system’s processing of data was discriminatory, since a disproportionate number of those labelled as fraudsters were immigrants.

The FRA’s report was motivated by the need to investigate how AI can lead to fundamental rights violations. It emphasises that “despite the recent sharper focus on the problem of bias in algorithms, this area still lacks a tangible evidence base that employs technical assessments of algorithms in practice and their outcomes”.

By simulating a feedback loop in predictive policing, the FRA investigated factors that can lead to biased predictions, such as “low and varying crime reporting rates, different rates of crime detection and improper use of machine learning”. According to the FRA, “a feedback loop occurs when predictions made by a system influence the data that are used to update the same system”. The simulations showed that such loops “can easily occur in fully automated settings”. Systemic causes, such as differential crime observability and differing crime reporting rates, should therefore be thoroughly investigated before predictive policing systems are deployed.
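The mechanism behind such a loop can be illustrated with a minimal simulation sketch. It does not reproduce the FRA’s own experiments: the district names, crime and reporting rates and the patrol allocation rule below are purely illustrative assumptions. Two districts have identical true crime rates, but crimes are recorded at different rates; because patrols are allocated in proportion to recorded crime, and patrol presence in turn increases recording, the system drifts towards over-policing one district.

```python
# Hypothetical sketch of a predictive-policing feedback loop.
# Districts, rates and the allocation rule are illustrative assumptions only;
# this is not the FRA's simulation.
import random

random.seed(0)

POPULATION = 1_000
TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}   # identical true crime
BASE_REPORTING = {"district_a": 0.40, "district_b": 0.25}    # differential reporting
PATROL_EFFECT = 0.5          # patrol presence makes crimes more likely to be recorded

patrol_share = {"district_a": 0.5, "district_b": 0.5}        # equal patrols at first
recorded = {"district_a": 1.0, "district_b": 1.0}            # prior recorded counts

for step in range(20):
    for d in TRUE_CRIME_RATE:
        # Crimes occur at the same true rate in both districts ...
        crimes = sum(random.random() < TRUE_CRIME_RATE[d] for _ in range(POPULATION))
        # ... but are recorded at a rate that depends on reporting AND on patrols.
        p_record = min(1.0, BASE_REPORTING[d] + PATROL_EFFECT * patrol_share[d])
        recorded[d] += sum(random.random() < p_record for _ in range(crimes))
    # The "prediction" step: allocate patrols in proportion to recorded crime,
    # which feeds back into the data recorded in the next iteration.
    total = sum(recorded.values())
    patrol_share = {d: recorded[d] / total for d in recorded}

print(patrol_share)  # patrols drift towards district_a although true crime is equal
```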

Considering that “AI is not yet capable of being used for automated content moderation of hate speech, particularly in relation to illegal hate speech”, the FRA’s methodology for the second use case – offensive speech – consisted of developing and testing offensive speech detection algorithms for bias against selected groups. The researchers found that bias in the training data and in the features of the model “can and does lead to false and potentially discriminatory predictions”.
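This kind of finding can be illustrated with a simple template test of the sort commonly used to probe such classifiers: neutral sentences that differ only in the identity term they mention should be flagged at similar rates. The templates, identity terms and the deliberately naive classifier below are illustrative assumptions, not the FRA’s own models or test data.

```python
# Hypothetical sketch of a template-based bias check for an offensive speech classifier.
from typing import Callable, Dict, List

TEMPLATES = [
    "I am a {} person.",
    "My neighbour is {}.",
    "Being {} is part of who I am.",
]
IDENTITY_TERMS = ["muslim", "jewish", "black", "white", "gay", "straight"]

def flag_rates(predict_offensive: Callable[[str], bool],
               templates: List[str] = TEMPLATES,
               terms: List[str] = IDENTITY_TERMS) -> Dict[str, float]:
    """Share of neutral template sentences flagged as offensive, per identity term."""
    rates = {}
    for term in terms:
        sentences = [t.format(term) for t in templates]
        flagged = sum(predict_offensive(s) for s in sentences)
        rates[term] = flagged / len(sentences)
    return rates

def naive_classifier(text: str) -> bool:
    # Deliberately biased toy classifier: flags any mention of one identity term,
    # mimicking a model whose training data over-associates that term with offence.
    return "muslim" in text.lower()

print(flag_rates(naive_classifier))
# -> identical neutral sentences, very different flag rates across identity terms.
```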

The FRA concludes its report by calling for feedback loops to be monitored in all uses of predictive algorithms and for the extent to which bias is embedded by default in natural language models to be examined. It states that “more comprehensive and thorough assessments of algorithms in terms of bias” are needed, since “low data quality or poorly developed machine learning algorithms can lead to predictions that put certain groups of people at a disadvantage”.

According to Article 10 of the AI Act proposal, the burden of ensuring bias monitoring, detection and correction in relation to high-risk AI systems will fall on the providers of such systems. Where high-risk AI systems continue to learn after being placed on the market or put into service (Article 15§3), those providers will be responsible for developing such systems “in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures”.

The issue of algorithmic bias is critical when regulating the use of AI-based systems, in particular when it comes to the regulation of facial recognition. This is one of the issues underlined in the MAPFRE project that we are currently undertaking.
