The Council of Europe’s recommendation for a legal framework on AI

From November 30 to December 2, 2021, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) held its final plenary meeting. In this session, the recommendation on the “Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law” was adopted. The drafting of the recommendation followed a consultation launched by the Committee on March 30, 2021. The Council’s Member States, as well as the U.S., Canada, Japan, and Mexico, were involved in the initiative.

The proposal contains a series of recommendations for the creation of a legal framework on artificial intelligence. The CAHAI observed that “the application of artificial intelligence (AI) systems has the potential to promote human prosperity and individual and social well-being by enhancing progress and innovation”. It also underlined the risks that AI entails for human rights, democracy and the rule of law. To prevent or mitigate these risks, the Committee considered that “an appropriate legal framework on AI based on the Council of Europe’s standards on human rights, democracy and the rule of law, should take the form of a legally binding transversal instrument.” However, it noted that, in addition to this transversal instrument, other sectoral measures may be needed.

The draft recommendation addresses several issues that should be noted. Firstly, the Committee proposed imposing a full or partial moratorium or ban on AI systems that are deemed to present an “unacceptable risk”. To this end, the Committee highlighted certain AI uses that require particular attention, such as those that use biometrics to identify, categorise or infer characteristics or emotions of individuals, and AI systems used for social scoring.

Secondly, the Recommendation proposed that the future legal instrument focus on the “potential risks emanating from the development, design, and application of AI systems for the purposes of law enforcement, the administration of justice, and public administration.” However, the Committee noted that the legal instrument should “be limited to general prescriptions about the responsible use of AI systems in public administration.” The Recommendation underlines that issues related to specific administrative activities such as health care, education, and social benefits should be addressed through a sectoral instrument.

Thirdly, the Committee advanced the need for the future legal instrument to include certain safeguards, such as provisions on access to an effective remedy, the right to human review of decisions that are “taken or informed by an AI system”, “adequate and effective guarantees against arbitrary and abusive practices due to the application of an AI system in the public sector” and the “right to know that one is interacting with an AI system rather than with a human.” In connection with these safeguards, the inclusion of measures to protect whistle-blowers was also proposed.

Fourthly, the CAHAI highlighted the need for the future instrument to address how AI systems are shaping public opinion, and their potential chilling effects.

However, the Committee proposed that more specific issues, such as micro-targeting, election manipulation, profiling, and manipulation of content, be dealt with via a sectoral instrument. Moreover, the Recommendation proposes that the role of private entities such as online platforms should be considered, given their “growing concentration of economic power and data” which “could undermine democratic processes.”

Furthermore, other notable elements of a legally binding transversal instrument were proposed:

  •  The establishment of a methodology for risk classification of AI systems and impact assessments.
  •  The establishment of “regulatory sandboxes” to stimulate responsible innovation while ensuring compliance.
  •  The inclusion of provisions that ensure the necessary level of human oversight over AI systems and their possible effects.

With regard to the elements that the Committee proposed for inclusion in an additional legal instrument (which may or may not be legally binding), it suggested that an impact assessment be undertaken regarding human rights, democracy and the rule of law (HUDERIA).

Furthermore, it is worth mentioning that the document does not cover matters related to national defence, considering them outside the scope of a future CoE legal instrument. Finally, the draft proposes the creation of an instrument that would be open for ratification by non-member States of the Council of Europe.

Amongst the first to respond, several civil society organisations involved in the work of the CAHAI said in a joint statement that the recommendations “fall far short of what is needed in terms of human rights, democracy, and the rule of law.” Moreover, the European Center for Not-for-Profit Law (ECNL) highlighted certain problematic aspects of the recommendation, such as the exclusion of national security and military uses of AI. Francesca Fanucci stated in Politico that it is “particularly worrisome” that the recommendation allows Member States “to decide whether to include dual-use AI”.

The CAHAI’s Recommendation will be discussed by the CoE’s Committee of Ministers in February 2022, and negotiations are planned to start in May 2022. This could mean that the future instrument will be ratified by 2024.


Source: Ad hoc Committee on Artificial Intelligence (CAHAI), “Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law”, Council of Europe, December 3, 2021, CAHAI(2021)09.

