-
21/05/2023
On May 16th, 2023, the French data protection authority – the Commission Nationale de l’Informatique et des Libertés (CNIL) – published an action plan aimed at ensuring that Artificial Intelligence (AI) systems, and generative AI in particular (e.g. Midjourney and OpenAI’s ChatGPT), respect people’s privacy. The action plan builds on the CNIL’s first global approach to these new tools, set out in 2017.
-
28/03/2023
On March 27th, 2023, the European Union Agency for Law Enforcement Cooperation (EUROPOL) released a report on the Impact of Large Language Models on Law Enforcement. Drawing on the outcomes of a series of workshops organised by the Europol Innovation Lab on how criminals can abuse LLMs such as ChatGPT, as well as how such models may assist investigators in their daily work, the report provided useful recommendations on enhancing law enforcement preparedness.
-
06/03/2023
On February 21st, 2023, the United States Copyright Office (USCO) cancelled the registration certificate previously issued to Ms. Kristina Kashtanova for the “comic book” entitled “Zarya of the Dawn”, considering that the images in the comic book were not the product of human authorship – having been generated by the AI system Midjourney – and were therefore not copyrightable.
-
24/02/2023
Using innovative AI techniques, researchers at UC Berkeley analysed more than 2.5 million fully anonymised metaverse data recordings and found that individual users could still be uniquely identified. The study stresses the need to enhance security and privacy awareness in relation to these platforms.
-
24/02/2023
The legal and ethical risks posed by the use of ChatGPT, as well as the need to regulate the deployment of similar generative chatbots, are currently being debated across the world. The European Parliament is considering placing generative AI models such as ChatGPT in a “high risk” category in its upcoming compromise text on the AI Act (Parliament approach), which would subject such tools to burdensome conformity assessment requirements.
-
23/02/2023
Are you interested in the societal impacts and major legal issues posed by the development of Artificial Intelligence and new technologies, including the Metaverse? Do you have a PhD in legal studies (preferably digital law, intellectual property law or European/International/Human Rights law)? Are you ready to dive into the issues concerning the protection of personal data and privacy, freedom of expression and other human rights in the era of AI? Do you have an open and curious mind?
-
22/02/2023
A fierce debate rages in Brussels over which AI systems should be considered “High Risk”, while the systems listed in Annex II of the EU AI Act have attracted less attention. Here is a guide (with infographics) to the classification of ALL “High Risk” systems under the AI Act, as well as the corresponding conformity assessment procedures.
-
16/02/2023
On February 2nd, 2023, the Italian Data Protection Authority (Garante per la Protezione dei Dati Personali) urgently ordered a temporary limitation “on the processing of personal data relating to users in the Italian territory as performed by Luka Inc., the US-based developer and operator of Replika, in its capacity as controller of the processing of personal data that is carried out via the said app”.
-
06/02/2023
On January 26th, 2023, the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce released the AI Risk Management Framework (AI RMF 1.0), developed in collaboration with the private and public sectors.
-
06/02/2023
Data is the fuel of AI systems. Anonymisation has been presented as a panacea for protecting personal data while enabling AI innovation. However, the growing effectiveness of re-identification attacks on anonymised data raises a series of legal questions.
-
19/01/2023
The purpose of the Convention is to ensure that, throughout their lifecycle, AI systems fully comply with human rights, respect the functioning of democracy and observe the rule of law, regardless of whether the relevant activities are undertaken by public or private actors. The design, development and application of AI systems used for purposes related to national defence are expressly excluded from the scope of the Convention. The negotiators appear to agree that the Convention must be seen first and foremost as a broad framework which might be supplemented by further obligations in more specific fields.
-
17/01/2023
On December 8th, 2022, the European Union Agency for Fundamental Rights (FRA) published a report on bias in algorithms, in particular when they are used for predictive policing and offensive speech detection.