This cross-sectional field of research examines how well existing law (e.g., the GDPR) fits AI applications and what AI governance might look like in the future. Research focuses on issues such as data and privacy protection, among other human rights, as well as transparency, the auditability of AI systems, accountability/liability, oversight/control, and the fight against bias and discrimination.


The European Commission’s April 2021 proposal for a Regulation establishing harmonized rules on AI across the EU is a major legal development, and the negotiations at the EU level promise to be particularly interesting and tough. Renaissance Numérique and AI-Regulation contribute to the debate.
The European Union’s proposed artificial intelligence (AI) regulation, released on April 21, is a direct challenge to Silicon Valley’s common view that law should leave emerging technology alone. The proposal sets out a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses, and lightly regulates less risky AI systems.
Artificial intelligence will be a major issue in the very near future, and Brussels has understood this. On October 20, the European Parliament adopted a series of three resolutions on how best to regulate artificial intelligence in order to boost innovation and confidence in the technology.
This study discusses the concept of “European Digital Sovereignty” at length, presenting the opportunities the concept opens up, but also its risks and pitfalls.


On January 4, 2022, China adopted the regulation “Provisions on the Management of Algorithmic Recommendations in Internet Information Services,” which will enter into force on March 1. It follows an initial draft presented by China’s Cyberspace Administration in August 2021 and seeks to regulate algorithms, especially those employed for ‘recommendation’ purposes, such as those used in search filters, social media, online stores, content services, or gig-work platforms.
A few weeks ago, the 120 States Parties to the Convention on Certain Conventional Weapons (CCW) met in Geneva for the Sixth Review Conference of the Convention. The discussion about the use of lethal autonomous weapons systems (LAWS) was at the top of the agenda.
From November 30 to December 2, 2021, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) held its final plenary meeting. At this session, it adopted the recommendation on the “Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.”
On November 24, 2021, during the 41st session of UNESCO’s General Conference, the “Recommendation on the Ethics of Artificial Intelligence” was adopted. Although some earlier initiatives were taken in the European context, this is the first global “standard-setting instrument” that seeks to regulate the use of AI in an ethical way. The project stems from a decision made by the General Conference at its 40th session in 2019.