OECD Revises Its AI Principles in the Era of Generative AI Proliferation

The OECD Recommendation on Artificial Intelligence (AI), published in 2019, established the first intergovernmental standard for AI. Although not legally binding, the principles it lays out were first adopted, on a proposal from the Digital Policy Committee, on May 22, 2019, during a ministerial-level Council meeting. More recently, on November 8, 2023, the OECD Council refined the definition of an “AI system” in order to enhance its specificity and applicability. On May 3, 2024, in response to evolving technologies in the fields of general-purpose AI and generative AI, the OECD Ministerial Council adopted further revisions to these guidelines. The revised principles are designed to guide policymakers in developing robust AI policies and frameworks that ensure interoperability across jurisdictions.

To update the Recommendation, the revision proposes a common understanding of key terms, such as “AI system”, “AI system lifecycle”, and “AI actors”. Its scope is detailed across two sections. The first outlines principles for the responsible stewardship of ‘trustworthy’ AI, featuring five principles applicable to all stakeholders. The second focuses on the role of national policies and international cooperation in fostering ‘trustworthy AI’, and applies to members of, and adherents to, the Recommendation in terms of how they implement national policies and approach international cooperation. The updated value-based principles now explicitly reference environmental sustainability, ‘reflecting its growing importance’ since the adoption of the 2019 Recommendation.

Additionally, some headings of the revised principles and recommendations have been expanded for clarity. The title of principle 1.2, for example, has changed from “Human-centred values and fairness” to “Respect for the rule of law, human rights and democratic values, including fairness and privacy”. The rationale behind this change is to encourage AI actors to respect human-centred values, including fairness and social justice, by, for example, facilitating human intervention where necessary. The change also responds to misinformation and disinformation, which have been amplified by generative AI being used outside its intended purpose, and seeks to safeguard information integrity.

The third principle enhances transparency by ensuring that the information AI actors provide is clear and meaningful. It affirms that makers of AI systems should act transparently and responsibly in disclosing how their systems work, so that people are able to challenge outcomes. The information provided by AI actors to those affected by AI systems should be appropriate to the context, which implies consistency with the state of the art (SOTA). In the context of generative AI, alignment with the state of the art means using the most advanced and recently developed ideas and methods available at any given time. A notable complementary tool is explainable AI, which helps achieve compliance with transparency requirements by making the logic and outputs of algorithms accessible and interpretable to non-experts.
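To make the idea of explainable AI concrete, the sketch below shows how an open-source attribution library can surface the factors behind a single model decision. It is purely illustrative and not part of the OECD text: it assumes the `shap` and `scikit-learn` packages are installed, and the model and bundled toy dataset are arbitrary stand-ins.

```python
# Minimal, hypothetical sketch of explainable AI in practice
# (illustrative only; assumes scikit-learn and shap are installed).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
import shap

# Train a simple model on a bundled toy dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# SHAP attributes each prediction to the input features, turning an
# opaque model output into a per-feature explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # explain one prediction

# Report the features that most influenced this single decision.
ranked = sorted(
    zip(data.feature_names, shap_values[0]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, contribution in ranked[:3]:
    print(f"{name}: {contribution:+.3f}")
```

Output of this kind, translated into plain language, is one way providers could give affected people the context-appropriate information the principle calls for.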

If AI systems pose a risk of causing undue harm or exhibit undesired behaviour, robust mechanisms and safeguards should be in place to ensure they can be overridden, repaired, and/or decommissioned safely through human intervention (principle 1.4: Robustness, security and safety). Furthermore, the discussions of traceability and risk management were moved from the principle covering the functioning of the AI system lifecycle (point 1.4) to the principle of “Accountability”, deemed more appropriate for these concepts. According to the Recommendation, AI actors should, according to their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis. These risks include harmful bias and risks to human rights, including safety, security, and privacy, as well as to labour and intellectual property rights. The final value-based principle, accountability, provides that AI actors ‘should’ be accountable for the proper functioning of AI systems, ensuring traceability and transparency of AI outputs.
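As a purely illustrative sketch of what a systematic, per-phase risk management record might look like in practice, the snippet below structures risks against lifecycle phases so they remain traceable over time. All phase names, risk categories, and identifiers are hypothetical choices for this example, not terms defined by the Recommendation.

```python
# Hypothetical per-phase risk log for an AI system lifecycle.
# Phase and category names are illustrative, not from the OECD text.
from dataclasses import dataclass, field
from datetime import date

LIFECYCLE_PHASES = ["design", "data collection", "training", "deployment", "monitoring"]
RISK_CATEGORIES = ["harmful bias", "safety", "security", "privacy",
                   "labour", "intellectual property"]

@dataclass
class RiskEntry:
    phase: str
    category: str
    description: str
    mitigation: str
    reviewed_on: date  # "ongoing basis": each risk carries a review date

@dataclass
class RiskRegister:
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

    def log(self, entry: RiskEntry) -> None:
        # Traceability: every risk is recorded against a lifecycle phase.
        assert entry.phase in LIFECYCLE_PHASES
        assert entry.category in RISK_CATEGORIES
        self.entries.append(entry)

    def risks_for(self, phase: str) -> list[RiskEntry]:
        return [e for e in self.entries if e.phase == phase]

register = RiskRegister("loan-scoring-model")
register.log(RiskEntry(
    phase="training",
    category="harmful bias",
    description="Historical approvals under-represent younger applicants.",
    mitigation="Re-weight training data; add fairness metrics to evaluation.",
    reviewed_on=date(2024, 6, 1),
))
print(len(register.risks_for("training")), "risk(s) logged for the training phase")
```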

The updated recommendations for policymakers, for their part, aim to foster an inclusive AI-enabling ecosystem (as opposed to what was previously called ‘fostering a digital ecosystem for AI’). This involves creating accessible AI ecosystems and promoting fair data sharing “as appropriate”, with an emphasis on the potential of data trusts. The Recommendation also aims to facilitate an ‘interoperable governance and policy environment’ that supports the deployment of trustworthy AI systems, for example by enabling the move from the research and development phase to testing and experimentation within controlled environments. Such a process enables jurisdictions to scale up their efforts and collaborate in order to enhance interoperable AI governance.

The focus of the text extends beyond the technological aspects of AI systems to significant shifts in the labour market. Previously, the Recommendation spoke of building human capacity for a labour market ‘transition’ (point 2.4); it now focuses on a comprehensive ‘transformation’, which involves equipping individuals with AI skills and supporting displaced workers. The distinction between the two concepts is telling. A ‘transition’ mainly concerns a change from one state to another, with an emphasis on sustainability, minimal disruption, and gradual implementation. However, whilst a transition may address specific issues, it is limited in scope and leaves the underlying structure intact. The shift from ‘transition’ to ‘transformation’ underscores a commitment to addressing broader, large-scale challenges and to ensuring long-term viability in the face of emerging risks.

The implementation of these principles has been supported globally, with several countries establishing national ethical frameworks that resonate with the updated OECD guidelines. In Europe, the European Union and the Council of Europe have taken action by adopting the EU AI Act – detailed in our Table of Contents (ToC) – and the Framework Convention on AI, respectively. On a broader scale, the United Nations is pushing for global AI governance through the adoption of a non-binding resolution.


Shavine Menaf

