The Council of Europe is pushing forward with AI Regulation

Over the past few weeks, the Council of Europe (CoE), the Strasbourg-based organisation that promotes and protects human rights, democracy and the rule of law, has been raising awareness about the risks of using AI-enabled technologies and preparing a draft proposal to ensure adequate AI regulation.

Firstly, on March 17, 2021, the CoE’s Committee of Ministers published a new declaration on “the risks of computer-assisted or artificial-intelligence-enabled decision making in the field of the social safety net”.

In its declaration, the Committee of Ministers warns Member States that “social rights become affected by the use of decision-making systems deployed by public authorities and which rely on artificial intelligence (AI) or machine learning”. At the same time, the CoE acknowledges that “[c]omputer-assisted or AI-enabled decision-making systems can offer advantages”, while cautioning that significant risks and costs may arise without proper regulation.

“The unregulated development of such computer-assisted or automated decision-making systems, coupled with a lack of transparency and insufficient public scrutiny, and their incorporation into the administration of social services, pose risks.”

In light of these concerns, the declaration highlights that “it should be ensured that public applications are fair and that ethical values are applied for everyone without causing any disparity in respect of social cohesion.”

Indeed, “[t]hese systems can, if not developed and used in accordance with principles of transparency and legal certainty, amplify bias and increase risks.” The CoE fears that “[t]his may lead to higher negative impact for members of the community who are in a situation of vulnerability. Under such circumstances, they [such systems] can replicate entrenched discrimination patterns, including as regards women, and can affect people in low-skilled and poorly paid jobs”. Moreover, “[b]iased and/or erroneous automated decisions can bring about immediate destitution, extreme poverty or even homelessness and cause serious or irreparable harm to those concerned.”

The Committee of Ministers therefore draws the attention of Member States to the “possible risks to human rights, including social rights, that might follow from the use of computer-assisted or AI-enabled decision making by public authorities”, while also highlighting “the need to ensure that such technologies are developed and implemented in accordance with the principle of legal certainty, legality, data quality, non-discrimination and transparency”.

From a practical point of view, the declaration notes that one of the main concerns about these technologies is the “need for human oversight” in order “to mitigate and/or avoid errors in the management, attribution or revocation of entitlements, assistance and related benefits which could amplify disadvantages and/or disenfranchisement”.

The Committee emphasizes the “need for effective arrangements to protect vulnerable persons from serious or irreparable harm, including destitution, extreme poverty or homelessness, as a result of the implementation of computer-assisted or AI-enabled decisions in the area of social services”. It also argues for “effective responsibility and accountability processes for AI actors designing, developing, deploying or evaluating AI systems when legal norms are not respected, or any unjust harm occurs”.

Secondly, on March 30, 2021, the CoE Ad Hoc Committee on Artificial Intelligence (CAHAI) launched its consultation on the elements of a legal framework on AI following the publication of a feasibility study in December 2020. This consultation aims “to gather the views of a broad range of institutional representatives” to assist the CAHAI Legal Frameworks Group with “preparing the elements of a legal framework on the design, development and application of artificial intelligence (AI) based on Council of Europe’s standards on human rights, democracy and the rule of law.”

Some fear that too strict a regulation of AI by the CoE may hamper innovation and the EU’s ambition “to become a global leader in innovation in the data economy and its applications”. However, according to Gregor Strojin, Chair of the CAHAI, interviewed by Politico’s AI: Decoded newsletter, “AI regulators can actually learn a lot from past negotiations”, such as those in the fields of pharmaceuticals and bioethics, which illustrate “how regulation can help innovation and how the future CoE’s draft proposal would become an AI treaty with additional rules for specific sectors” without hampering EU innovation. According to Strojin, this legislative instrument is not yet finished, but “[t]he plan is to have a draft ready by the end of the year, after which the Committee of Ministers — national representatives to the Council of Europe — will debate it.”

MEB.
