The purpose of this article is to explore the existing data portability rights under EU law and to assess potential gaps among the GDPR, the DMA and the Data Act in light of the emergence of autonomous AI agents.
In anticipation of the AI Act’s publication in the Official Journal of the EU, the MIAI AI-Regulation Chair publishes an interactive Table of Contents (ToC) to help practitioners and the academic community navigate the lengthy and complex 252-page text, enabling users to click and jump directly to the different Titles, Chapters, and Articles.
This article delves into the EU’s groundbreaking rules for general-purpose AI (GPAI) models, as outlined in the AI Act politically agreed on December 8th. It scrutinizes key questions: whether this approach deviates from the original risk-based proposal, how to navigate the complexities of risk management in foundation models, and how to grapple with the uncertainties in benchmarking methods.
On November 8th, 2023, in the midst of the stalled inter-institutional negotiations between the Council of the EU and the European Parliament (EP) on the regulation of foundation models in relation to the future AI law, the Organisation for Economic Co-operation and Development (OECD) announced that it had updated its definition of AI systems.
On October 13th, 2023, the European Commission launched a stakeholder survey on the eleven draft guiding principles for Generative AI (GAI) and other advanced AI systems. This initiative comes a few days after the 8th annual meeting of the Internet Governance Forum, organised by the United Nations.
Prompted by intense criticism of Generative AI (GAI) from online publishers across the European Union (EU), the present article aims to highlight the much-debated copyright issue of data collection for Generative AI training. Three questions are therefore addressed: to what extent is scraping data for GAI training considered a copyright issue; how are data scraping and data mining regulated under EU law; and how does the future AI Act intend to deal with the use of training data.
On September 7th, 2023, the Court of Justice of the European Union (CJEU) upheld the decision of the General Court according to which the public can partially access documentation on the EU’s emotion recognition project (iBorderCtrl), which discusses the general reliability, ethics and legality of such technology.
Eight more American tech companies (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability) signed up to President Joe Biden’s voluntary commitments governing AI (the second round of voluntary commitments). In the meantime, a third trilogue on the EU AI Act proposal will take place on the other side of the Atlantic.
The adoption of the negotiating position by the European Parliament sets the stage for the trilogues between the EU institutions, while the European Commission is pushing for the AI Act to be finalised by the end of 2023. The European Parliament’s position on this legislative file reflects its members’ fundamental desire to make the EU a leader in AI regulation and innovation.
On May 17th, 2023, the European Data Protection Board (EDPB) published its final report on the use of facial recognition technologies (FRTs) by Law Enforcement Authorities (LEAs). This report opposes mass surveillance, and, according to the EDPB, ‘the use of facial recognition by law enforcement agencies must be necessary, limited, and proportionate’.
Between 21 and 23 April, the European Commission (Commission) held the closing session of the European citizens’ panels to debate and propose recommendations on virtual worlds in the EU. As a result, a panel of 150 citizens produced 23 recommendations on ‘fair and human-centric virtual worlds in the EU’.
The European Consumer Organisation (BEUC) is calling ‘for EU and national authorities to launch an investigation into ChatGPT and similar chatbots’, following the filing of a complaint on March 30th, 2023, on the other side of the Atlantic by the Center for Artificial Intelligence and Digital Policy (CAIDP) in relation to ChatGPT-4.