On December 17, 2020, the Ad Hoc Committee on Artificial Intelligence (CAHAI) of the Council of Europe (CoE) adopted its feasibility study on a legal framework for the design, development and application of AI, based on the Council of Europe’s standards.
On 11 September 2019, “the Committee of Ministers mandated an Ad hoc Committee on Artificial Intelligence (CAHAI) to examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on Council of Europe standards in the field of human rights, democracy and the rule of law”.
The feasibility study, which is based on standards set by the Council of Europe, was adopted after “in-depth discussions” during the CAHAI’s 3rd plenary meeting, held from 15 to 17 December 2020. During this event, “more than 200 members, participants and observers met online to discuss the feasibility study of a legal instrument on artificial intelligence”.
The study took into account “standards for the design, development and application of AI in the field of human rights, democracy and the rule of law, as well as existing relevant international – universal and regional – legal instruments.” Before setting out general principles intended to frame a potential future CoE legal instrument on AI, the Committee made a number of general observations on the “scope of application of a Council of Europe legal framework on AI”, ranging from the lack of a common definition of AI to its potentially substantial benefits, but also the threats it poses.
1. Lack of a Common Definition for Artificial Intelligence
First, the CoE underlined that “there is no single definition of AI accepted by the scientific community”. Second, even “the various international organisations that have worked on AI have also not found a consensus on the definition of AI.” Several of them have nonetheless tried to formulate a common, global definition, so far without producing an internationally agreed one; the High-Level Expert Group on AI mandated by the European Commission (EC), for instance, published a comprehensive document on the definition of AI in April 2019. However, “[a]s regards to the non-binding instruments that have been published on this topic by the Council of Europe so far, no uniform definition of AI has been used.” The CAHAI has reviewed most of the definitions and concepts used in the CoE’s previous work and in instruments that may be applicable to AI, underlining their limits and weaknesses, and found that “the term “AI” is used as a “blanket term” for various computer applications based on different techniques, which exhibit capabilities commonly and currently associated with human intelligence.”
“[W]hile the CAHAI members, participants and observers have also indicated different approaches on the (need for a) definition of AI resulting from different legal traditions and cultures, a consensus has been found on the need to approach AI systems in a technologically neutral (…) way, comprising all the various automated decision-making technologies that fall under this umbrella term, including their broader socio-technical context.”
According to the CAHAI, any definition of this ‘umbrella term’ should strike a balance between “a definition that may be too precise from a technical point of view and might thus be obsolete in the short term, and a definition that is too vague and thus leaves a wide margin of interpretation, potentially resulting in a non-uniform application of the legal framework”. In this sense, the feasibility study relies on “a broad definition (…) in order to ensure that the diversity of the challenges raised by the development and use of AI systems are adequately identified”.
2. AI Opportunities, Benefits and Risks
The feasibility study underlines that “AI systems can foster and strengthen human rights (…) and contribute to the effective application and enforcement of human rights standards”. For instance, the study highlights how AI systems may have a “potential impact on human health and healthcare systems”, improving medical diagnosis and treatment and enabling better prediction and monitoring of epidemics and chronic diseases. It is also noteworthy that AI, where “used responsibly, (…) can also enhance the rule of law and democracy” (§20), for instance by facilitating the detection of biased decisions.
However, it is underlined that despite “these benefits, the increasing use of AI systems in all areas of private and public life also carries significant challenges for human rights, democracy and the rule of law” (§21):
“AI-based tracking techniques can be used in a way which broadly affects ‘general’ privacy, identity and autonomy and which can make it possible to constantly watch, follow, identify and influence individuals (…). As a result, people might feel inclined to adapt their behaviour to a certain norm, which in turn also raises the issue of the balance of power between the state or private organisation using tracking and surveillance technologies on the one hand, and the tracked (group of) individuals on the other.”
3. CAHAI’s Legal Framework Viewpoint
To tackle and mitigate these risks, the CAHAI “recommend[s] that a future Council of Europe legal framework on AI should pursue a risk-based approach targeting the specific application context”. Furthermore, the CAHAI pointed out that:
- On the one hand, “AI applications that promote, strengthen and augment the protection of human rights, democracy and the rule of law, should be fostered”.
- On the other hand, “where based on a context-specific risk assessment it is found that an AI application can pose “significant” or unknown risks to human rights, democracy or the rule of law, and no appropriate mitigation measures exist within existing legal frameworks to adequately mitigate these risks, states should consider the introduction of additional regulatory measures or other restrictions for the exceptional and controlled use of the application and, where essential, a ban or moratorium (red lines)”.
The CAHAI explained in particular that “[b]uilding an international agreement on problematic AI uses and red lines can be essential to anticipate objections around competitive disadvantages and to create a clear and fair level playing field for AI developers and deployers”. Examples of applications “that might fall under red lines are remote biometric recognition systems – or other AI-enabled tracking applications”, as these could lead, for instance, to “mass surveillance or to social scoring”, as well as AI-enabled covert manipulation of individuals.
- Finally, “where a certain application of an AI system does not pose any risk to human rights, democracy or the rule of law – it should be exempted from any additional regulatory measures”.
4. The Council of Europe’s Work in the Field of AI to Date & Mapping of International Legal Instruments
The CAHAI’s feasibility study on a legal instrument for AI is based not only on “in-depth discussions” with a broad range of stakeholders but also on a mapping exercise analyzing all of the CoE’s previous AI-related work. It involved an in-depth study of the current international legal instruments and ethics guidelines applicable to AI, as well as an overview of the relevant national policies, instruments and strategies developed by CAHAI members.
Indeed, the CAHAI underlined that “[t]he significant impact of information technologies on human rights, democracy and the rule of law has [already] led the Council of Europe to develop relevant binding and non-binding mechanisms, which complement and reinforce one another”. The feasibility study recalls this previous work “along with the case law on new technologies of the European Court of Human Rights” (ECtHR).
The study also provides an overview of the “advantages, disadvantages and limitations” of the existing international legal instruments and ethical guidelines on AI. While acknowledging that these documents are “relevant in the context of AI Regulation”, “the CAHAI also support[ed] the conclusions drawn in the analysis that these instruments do not always provide adequate safeguards to the challenges raised by AI systems”. Accordingly, the study stresses that there is a “growing need for a more comprehensive and effective governance framework to address the new challenges and opportunities raised by AI [which] has been acknowledged by a number of intergovernmental actors”. As most of these instruments are non-binding recommendations, the CAHAI emphasized that the first notable legislative initiative is expected from the European Commission, whose “legislative proposal to tackle fundamental rights challenges related to ensuring trustworthy AI (…) is scheduled for publication in the first quarter of 2021.” In the meantime, the CAHAI suggested what the main elements of a legal framework for the design, development and application of AI should be.
5. Main Elements of a Legal Framework for the Design, Development and Application of AI
In Chapter 7, and in line with its mandate, the CAHAI recalled that “a legal framework on AI should ensure that the design, development, and application of this technology is based on Council of Europe standards on human rights, democracy and the rule of law”. Furthermore, following a risk-based approach, the legal framework for AI “should provide an enabling regulatory setting in which beneficial AI innovation can flourish, all the while addressing the risks set out in Chapter 3, and the substantive and procedural legal gaps identified in Chapter 5, to ensure both its relevance and effectiveness amidst existing instruments”. The main principles identified by the CAHAI and considered “essential” are the following:
- Human dignity
- Prevention of harm to human rights, democracy and the rule of law
- Human freedom and human autonomy
- Non-discrimination, gender equality, fairness and diversity
- Principle of transparency and explainability of AI systems
- Data protection and the right to privacy
- Accountability and responsibility
- Democracy
- Rule of law
For each of these principles, the study provides a short description of the “key substantive rights” at stake as well as the “key obligations” that flow from them. The feasibility study also addresses the question of the “role and responsibilities of member states and private actors in the development of applications complying with these requirements”, as well as the question of “liability for damage caused” by AI. Furthermore, the study outlines the options available to the CoE for creating a legal framework for the design, development and application of AI.
“In order to fill the gaps in legal protection identified in Chapter 5, a number of different options for a legal framework are available within the Council of Europe, including binding and non-binding legal instruments.”
The legal instrument could consist of a modernization of existing binding legal instruments as well as the adoption of a new binding instrument. Certain non-binding legal options, and other types of support for member states such as the identification of best practices, are also possible. However, according to the CAHAI’s conclusions, “[a]n appropriate legal framework will likely consist of a combination of binding and non-binding legal instruments, that complement each other”.
In its concluding comments, the CAHAI asserts that “[t]his study has confirmed that AI systems can provide major opportunities for individual and societal development as well as for human rights, democracy and the rule of law. At the same time, it also confirmed that AI systems can have a negative impact on several human rights protected by the ECHR and other Council of Europe instruments, as well as on democracy and the rule of law.”
“The study has noted that no international legal instrument specifically tailored to the challenges posed by AI exists, and that there are gaps in the current level of protection provided by existing international and national instruments. The study has identified the principles, rights and obligations which could become the main elements of a future legal framework for the design, development and application of AI, based on Council of Europe standards, which the CAHAI has been entrusted to develop.” (§176)
The adopted feasibility study “will be presented in 2021 to the Committee of Ministers of the Council of Europe”. In the meantime, the Ad Hoc Committee will continue to promote “an application of AI based on human rights, the rule of law and democracy”. To this end, on 20 January 2021, the Chair of the CAHAI, Gregor Strojin, will present the Committee’s conclusions at the upcoming online conference “Human Rights in the Era of AI – Europe as international Standard Setter for Artificial Intelligence”.
“The conference will address issues such as the impact of AI on human rights, democracy and the rule of law as well as the feasibility of a future legal framework for AI.”