AI GOVERNANCE AND REGULATION

This cross-cutting field of research examines how well existing law (e.g., the GDPR) applies to AI applications and what AI governance might look like in the future. Research focuses on issues such as data and privacy protection (among other human rights), transparency, the auditability of AI systems, accountability/liability and oversight/control, and the fight against bias and discrimination.

ARTICLES

22/02/2023
A fierce debate rages in Brussels over which AI systems should be considered “High Risk”, while the systems listed in Annex II of the EU AI Act have attracted less attention. Here is a guide (with infographics) to the classification of ALL “High Risk” systems in the AI Act, as well as the corresponding conformity assessment procedures.
06/02/2023
Data is the fuel of AI systems. Anonymisation has been presented as a panacea for protecting personal data while enabling AI innovation. However, the growing effectiveness of re-identification attacks on anonymised data raises a series of legal questions.
27/01/2023
On November 23rd, 2022, an article in Le Parisien, a French newspaper, revealed that the French Government had dropped its plan to deploy facial recognition to support security arrangements at the 2024 Paris Olympics. The debate over deploying facial recognition during the Olympic Games is part of a broader controversy dividing political leaders over whether AI-driven biometric systems should be used to monitor public spaces.
11/01/2023
The use of facial recognition technologies for criminal investigation purposes has been under the spotlight for many years in France and in the European Union. In this article, accepted for publication in the European Review of Digital Administration & Law, T. Christakis & A. Lodie discuss a major decision issued last year by the French Conseil d’Etat.

NEWS

06/03/2023
On February 21st, 2023, the United States Copyright Office (USCO) cancelled the registration certificate previously issued to Ms. Kristina Kashtanova for the comic book entitled “Zarya of the Dawn”, considering that the images in the comic book were not the product of human authorship, since they were generated by an AI system (Midjourney), and were therefore not copyrightable.
24/02/2023
Using innovative AI techniques, researchers at Berkeley analysed more than 2.5 million fully anonymised metaverse data recordings and found that individual users could be uniquely identified. The study stresses the need to raise security and privacy awareness in relation to these platforms.
24/02/2023
The legal and ethical risks posed by the use of ChatGPT, as well as the need to regulate the deployment of similar generative chatbots, are currently being debated across the world. The European Parliament is considering placing generative AI models such as ChatGPT in a “high risk” category in its upcoming compromise text on the AI Act (Parliament Approach), thereby intending to subject such tools to burdensome conformity assessment requirements.
23/02/2023
Are you interested in the societal impacts and major legal issues posed by the development of Artificial Intelligence and new technologies, including the Metaverse? Do you have a PhD in legal studies (preferably digital law, intellectual property law or European/International/Human Rights law)? Are you ready to dive into the issues concerning the protection of personal data and privacy, freedom of expression and other human rights in the era of AI? Do you have an open and curious mind?