FRA: Getting the Future Right – Artificial Intelligence and Fundamental Rights

On December 14, 2020, during its online conference entitled “Doing AI the European way: Protecting fundamental rights in an era of artificial intelligence”, the European Union Agency for Fundamental Rights (FRA) presented its new report: Getting the Future Right – Artificial Intelligence and Fundamental Rights.

“This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It discusses the potential implications for fundamental rights and shows whether and how those using AI are taking rights into account”. 

Michael O’Flaherty, FRA Director

The report contains six main sections: 

1. AI and fundamental rights – Why it is relevant for policymaking

The report first sets out the motivation for and purpose of examining this area, and then provides “a fundamental rights-based analysis of concrete ‘use cases’ – or case studies” of AI that might threaten such rights. It also illustrates “some of the ways that companies and the public sector in the EU are looking to use AI to support their work, and whether – and how – they are taking fundamental rights considerations into account”. Given that there is no commonly adopted definition of AI, the report explains its approach, noting that “FRA’s research did not apply a strict definition of AI on the use cases it presents” but instead sought to discuss “the use of AI based on concrete applications. These differ in terms of their complexity, level of automation, potential impact on individuals, and the scale of application”. Finally, the report situates AI and fundamental rights within the current EU policy framework. It lists the publications that have already addressed the subject and recalls that “only a rights-based approach guarantees a high level of protection against possible misuse of new technologies and wrongdoings using them” (Fundamental Rights Report 2019).

2. Putting fundamental rights in context – Selected use cases of AI in the EU

In this section, the report provides concrete examples of AI use in both public administration and the private sector. To do so, “FRA collected information on such cases from five EU Member States: Estonia, France, Finland, the Netherlands and Spain”, with emphasis on “the use of AI in the areas of social benefits; predictive policing; health services; and targeted advertising”. For each of these areas, the section “provides information on the current use of AI, as well as basic information on EU competence”, thus offering a “context for the fundamental rights analysis”.

3. Fundamental rights framework applicable to AI

In this section, the report “introduces the general fundamental rights framework in the EU that governs the use of AI, including selected secondary EU legislation and national law”. The report highlights the instruments used to protect fundamental rights and underlines that “none of the five EU Member States covered currently have horizontal AI-specific laws, although the countries are looking into the potential need for regulation”. The report then examines four AI use cases: social welfare (use case 1); predictive policing (use case 2); healthcare (use case 3); and targeted advertising (use case 4). Finally, the report sets out the requirements for justified interferences with fundamental rights and presents “the general steps that need to be followed to determine whether or not a Charter right can be limited”.

4. Impact of current use of AI on selected fundamental rights

In this section, the report analyses the impact of current AI use on fundamental rights by means of “a general overview of risks perceived by interviewees, and their general awareness of fundamental rights implications when using AI”. For instance, when asked about the general risks posed by the use of AI, private sector representatives “often mentioned inaccuracy as a risk of using AI”, whereas for interviewees from public administration “bias was most often highlighted as a risk associated with using AI”. The report then focuses on certain fundamental rights “that are particularly affected by AI”, such as human dignity, the right to privacy and data protection, equality and non-discrimination, access to justice, the right to social security and social assistance, consumer protection, and the right to good administration.

5. Fundamental rights impact assessment – A practical tool for protecting fundamental rights

In this section, the report analyses “how fundamental rights impact assessments (FRIA) could reduce the negative impacts that using AI can have on fundamental rights”. To begin with, it provides “a brief overview of the current discussion on the need for fundamental rights impact assessments” in the field of AI, underlining that “there is a need for flexible impact assessments that can adapt to different situations given that fundamental rights violations are always contextual”. The report then examines impact assessment and testing in practice, noting that current practice involves “mainly technical and data protection (impact) assessments. These rarely address potential impacts on other fundamental rights”. Finally, this section offers “suggestions on how to assess the fundamental rights impact when using AI and related technologies”.

6. Moving forward: Challenges and opportunities 

In the last section, the report offers a brief summary of the current situation underpinning its analysis, reminding us that AI “is not the panacea to all problems, and comes with various challenges” and that “using AI systems engages a wide range of fundamental rights”. Consequently, the report stresses that the “FRA will continue to look into the fundamental implications of AI by carrying out more focussed analysis of specific use cases” in order to “increase knowledge on what can potentially go wrong and consequently help mitigate and prevent fundamental rights violations”.

Photo credits: 

FRA report cover: HQUALITY/Adobe Stock 
