Towards the Final Adoption of the Framework Convention on AI by the Council of Europe

One of the latest Conventions produced by the Council of Europe (CoE) is the ‘Draft Framework Convention on artificial intelligence, human rights, democracy, and the rule of law’.[i] In September 2019, at the 1353rd meeting of the Ministers’ Deputies, an Ad hoc Committee on Artificial Intelligence (CAHAI) was established; it concluded its work in December 2021 with the publication of a paper entitled ‘Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law’. CAHAI, among other things, suggested a risk classification methodology (‘low risk’, ‘high risk’ and ‘unacceptable risk’) for AI systems. The main issues discussed were the private actors involved (par. 11, 12, 14, 35, 37 and 56), certain basic principles and norms, and certain key definitions proposed for inclusion in the upcoming international legally binding treaty, such as “artificial intelligence system”, “lifecycle”, “AI provider”, “AI user”, “AI subject” and “unlawful harm”. The next step was the establishment of CAHAI’s successor, the Committee on Artificial Intelligence (CAI). The Framework Convention on AI, one of the legal options available to the Council of Europe, was drafted by the CAI; it has 36 articles and is often called the first international legally binding AI treaty (March 2024 edition).

Documents that influenced the negotiations about the Framework Convention on AI

During the negotiation process several international legal and policy instruments were considered, such as the Declaration of 2019 and the Recommendation CM/Rec(2020)1 by the Committee of Ministers of the CoE, the OECD AI Principles (2019), a number of Resolutions and Recommendations put forward by the Parliamentary Assembly of the CoE and the associated responses [Resolution 2341 (2020) and Recommendation 2181 (2021) on democratic governance, Resolution 2343 (2020) and Recommendation 2183 (2021) on discrimination, Resolution 2342 (2020) and Recommendation 2182 (2020) on justice by algorithm, Recommendation 2185 (2020) on health care, Resolution 2345 (2020) and Recommendation 2186 (2020) on labour markets, Resolution 2346 (2020) and Recommendation 2187 (2020) on the legal aspects of autonomous vehicles, Resolution 2344 (2020) and Recommendation 2184 (2020) on the brain-computer interface], UNESCO’s Recommendation on the Ethics of Artificial Intelligence of 2021, two documents published in 2023 by the Hiroshima AI Process launched by the G7 (the International Guiding Principles for Organizations Developing Advanced AI Systems and the International Code of Conduct for Organizations Developing Advanced AI Systems), and the Artificial Intelligence Act produced by the EU lawmakers (Commission, Council and European Parliament). Furthermore, certain elements within four international political documents produced in 2023 (the Reykjavík Declaration, the Bletchley Declaration, and the G7 Leaders’ Statements on the Hiroshima AI Process of 30 October 2023 and 6 December 2023) influenced the negotiations.

Progress made at the Committee on Artificial Intelligence (CAI) 

The CAI invited a variety of actors to the negotiations, which took place over 10 meetings (April 2022 – March 2024), such as the member states of the CoE, observer states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay), representatives from other Council of Europe bodies and sectors, representatives from other international and regional organisations working in the field of AI such as the European Union, the UN (in particular UNESCO), the OECD and the OSCE, research and academic institutions, and certain representatives from civil society and the private sector (companies and associations). 

The draft Framework Convention on AI was formally transmitted by the Committee of Ministers on 20 March 2024 to the Parliamentary Assembly (PA). The PA unanimously adopted it (with 66 votes in favour, 0 against and 0 abstentions) on 18 April 2024 (Opinion 303). Among the legal questions considered at the 1497th meeting of the Ministers’ Deputies in Strasbourg on 30 April 2024 were the ‘Draft Framework Convention on artificial intelligence and human rights, democracy and the rule of law’ and its accompanying ‘Draft Explanatory Report’ (working documents). The next step is for these documents to be transmitted to the Committee of Ministers for adoption at the 133rd Session of the Committee of Ministers in Strasbourg (16-17 May 2024).

The Framework Convention aims to ensure that activity that takes place within the ‘lifecycle’ of an AI system is fully in accordance with human rights, democracy, and the rule of law. This focus on the entire lifecycle of an AI system represents a comprehensive approach, as illustrated by how often the lifecycle is mentioned and taken into consideration in different contexts (the preamble, general provisions, scope, principles, etc.) throughout the Framework Convention (see also the definition by the OECD). The definition of an AI system is in line with the AI Act [article 3(1) of the AIA], the text of which has almost been completed, and with the OECD’s definition. The Framework Convention covers the activity of State parties, public authorities, and private actors acting on their behalf [article 3(1a)]. The term ‘public authority’ is defined as ‘’any entity of public law of any kind or any level (including supranational, state, regional, provincial, municipal, and independent public entity) and any private person when exercising prerogatives of official authority.’’ [Ministers’ Deputies, CM Document, CM(2024)52-addprov2, 18 April 2024]. The Framework Convention does not fully define the private sector or its obligations, and it is up to each State party whether the private sector should be included in the regulation of AI development, and which obligations should be imposed on it [article 3(1b)]. The Framework Convention includes an explicit obligation to comply with its provisions whenever public authorities delegate their responsibilities to private actors or direct them to act, e.g. by operating pursuant to a contract with a public authority or other private provision of public services, as well as public procurement and contracting, according to the Draft Explanatory Report.

Article 3(1b) stipulates that ‘’Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph (a) in a manner conforming with the object and purpose of the Convention.’’ Each Party should specify in a ‘declaration’ submitted to the Secretary General of the CoE (‘’at the time of signature, or when depositing an instrument of ratification, acceptance, approval, or accession’’) how it intends to meet this obligation. This declaration can be amended by following the same process described in article 3(1b). According to the Draft Explanatory Report, ‘’For Parties that have chosen not to apply the principles and the obligations of the Framework Convention in relation to activities of other private actors, the Drafters expect the approaches of those Parties to develop over time as their approaches to regulate the private sector evolve.’’

The Framework Convention on AI does not apply to activities carried out by AI systems that concern the protection of national security [article 3(2)], ‘’regardless of the type of entities carrying out the corresponding activities’’. However, the Parties remain obliged to ensure that such activities respect human rights, democratic processes and institutions, and are consistent with the applicable international law obligations. According to the Draft Explanatory Report, if ‘dual use’ AI systems are employed, which can be used for both peaceful and military purposes, the Parties are obliged to respect the obligations under article 3, ‘’insofar as these are intended to be used for other purposes not related to the protection of the Parties’ national security interests.’’ As is clarified in the Draft Explanatory Report, ‘’all regular law enforcement activities for the prevention, detection, investigation, and prosecution of crimes, including threats to public security, also remain within the scope of the Framework Convention if and insofar as the national security interests of the Parties are not at stake.’’ National defence also does not fall within the scope of the Framework Convention [article 3(4)], based on article 1d of the Statute of the Council of Europe, which excludes national defence from the scope of the Council of Europe.

Furthermore, article 3(2) also stipulates exceptions for research and development activities carried out under certain conditions using AI systems that have ‘’not yet (been) made available for use, unless testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law.’’ Nevertheless, according to the Draft Explanatory Report, such activities ‘’should in any case be carried out in accordance with applicable human rights and domestic law as well as recognised ethical and professional standards for scientific research.’’ However, AI systems ‘’that are made available for use as a result of such research and development activities would need in principle to comply with the Framework Convention, including in regard to their design and development.’’

Each Party’s general obligations are stipulated in article 4 of the Framework Convention on AI regarding the protection of human rights and in article 5 concerning the adoption or maintenance of measures aimed at ensuring that AI systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes. Chapter III (articles 6-13) contains general principles concerning activity that takes place within the lifecycle of AI systems. Article 6 specifies that the principles contained in Chapter III should be incorporated into domestic regulation of AI systems. The principles contained in Chapter III are described in general terms (a ‘high level of generality’). This approach offers flexibility to each Party on how to implement them in its domestic legal order and allows ‘’a very broad application to a diverse range of circumstances’’ (Draft Explanatory Report, par. 49).

These principles concern the adoption and maintenance of measures by each party and aim to respect human dignity and individual autonomy (article 7), ensure transparency and oversight based on specific contexts and risks (article 8) as well as accountability and responsibility (article 9), respect equality, including gender equality, and prohibit discrimination (article 10), respect privacy and the protection of personal data (article 11), promote reliability of AI systems and trust in their outputs (article 12), and foster innovation (article 13). 

Chapter IV contains two articles. Article 14 is entitled ‘Remedies’. It explicitly stipulates that each Party should adopt and maintain accessible and effective remedies that address violations of human rights committed as a result of activities carried out during the lifecycle of an AI system. To that end, sufficient information must be provided to the affected persons. The related information should be context-appropriate, sufficiently clear, meaningful, and effective, in order to give the persons concerned the ability to exercise their rights under the applicable procedures, including the right to lodge a complaint with the competent authorities. Article 15 refers to procedural guarantees, safeguards and rights, in accordance with the relevant international and domestic law, which are also applicable to the persons affected. Both articles include an exception, since they require that the violations committed by the AI systems have a ‘significant effect’ (article 14) or ‘significant impact’ (article 15) on human rights (Draft Explanatory Report, par. 95-104).

Article 16 stipulates that each Party should adopt or maintain measures aimed at identifying, assessing, preventing, and mitigating ex ante and as appropriate the relevant risks, according to the principles in Chapter III. Articles 17-22 contain a number of specific obligations that every Party must meet, such as prohibiting discrimination, taking into consideration the specific needs, rights and vulnerabilities of disabled people and children, and safeguarding human rights. Article 19 refers to provisions regarding public discussion and multi-stakeholder consultation. Article 20 stipulates that each Party should promote digital literacy and skills among all segments of the population, i.e. ‘’the ability to use, understand, and engage with digital, including artificial intelligence and other data-based technologies.’’ This provision aims to promote awareness and understanding, thereby assisting with the prevention and mitigation of risks or adverse impacts on human rights, democracy, or the rule of law.

Articles 23-26 concern details about follow-up and cooperation mechanisms, and include reporting and international cooperation requirements, and effective oversight mechanisms to oversee compliance with the Framework Convention’s obligations. The follow-up mechanism, the so-called ‘Conference of the Parties’, will be convened by the Secretary General of the Council of Europe as appropriate (‘whenever the latter finds it necessary’) and periodically, or at the request of the majority of the Parties, or if the Committee of Ministers of the Council of Europe requests it (article 23). The final Chapter VIII (articles 27-36) encompasses the final clauses, which include inter alia provisions that concern other agreements or treaties that the Parties concluded prior to the Framework Convention, including a specific provision that applies to members of the European Union [article 27(2)], amendments (article 28), details around dispute settlement (article 29), provisions concerning its signing and entry into force (article 30), its territorial application (article 32), and the right to denounce the Convention (article 35).


One of the main issues that provoked criticism of the Framework Convention on AI was that it was drafted ‘behind closed doors’. For instance, civil society organisations were excluded despite the guidelines for open civil participation in political decision making. Another source of criticism is that it will not fully cover the private sector, or not to the same extent as the public sector, and ‘’it introduces a system where each Party will be able to determine in a declaration how it intends to address the risks and impacts arising from the use of AI by private actors. This is far from ideal for legal certainty and predictability of the obligations. […]. It also goes against the principle that States have positive obligations to protect individuals against human rights abuses by private actors, in accordance with the case law of the European Court of Human Rights, the United Nations Guiding Principles on Business and Human Rights and relevant recommendations of the Committee of Ministers of the Council of Europe. Many AI systems are developed and deployed by private entities, and introducing a differentiated approach for the private sector creates a significant loophole.’’ [Parliamentary Assembly, Opinion 303 (2024), par. 7, 18 April 2024]. As mentioned in the Report by the Committee on Legal Affairs and Human Rights [Report, Doc. 15971, section 4, 16 April 2024], which echoed the criticism from civil society organisations, ‘’leaving the private sector out of the scope would result in ‘giving these companies a blank check rather than effectively protecting human rights, democracy and the rule of law’’’. The European Commission also tried to keep private companies within the scope of the Framework Convention on AI.

Furthermore, the European Data Protection Supervisor (EDPS) stated in its press release of 11 March 2024 that it is understandable why the Framework Convention ‘’will not be directly applicable (self-executing)’’ and that ‘’there is a need for certain flexibility to accommodate the specificities of national legal systems.’’ However, the EDPS ‘’is concerned that the very high level of generality of the legal provisions of the draft Framework Convention, together with their largely declarative nature, would inevitably lead to divergent application of the Convention, thus undermining legal certainty, and more generally its added value. In this regard, the EDPS recalls that one of the key objectives of the future binding legal instrument should be to ensure a common legal framework and a level playing field for AI actors in Europe and beyond.’’

[i] See the January 2023 AI Regulation post regarding the so-called “zero” draft of the Framework Convention. 
