The AI-Regulation Chair is pleased to share a new paper titled ‘Fitting “Systemic Risks” into a Taxonomy in the GPAI Code of Practice: Will the Resulting Ambiguity be Exploited by GPAI Model Providers?’ by its research fellow, Dr. Theodoros Karathanasis.
Abstract:
The paper delves into the definition and implications of the concept of “systemic risk” under Article 3(65) of the EU AI Act, examining the ambiguities that arise from unclear wording and redundant terms such as “high-impact” and “significant impact,” each of which presents its own set of challenges. These challenges include potential confusion regarding the source of a risk and the extent of its impact on the EU’s internal market. A key aspect explored in the paper is the EU AI Act’s use of the Code of Practice (CoP) for General-Purpose AI (GPAI) models to establish a taxonomy of systemic risks. By investigating the CoP’s attempts to categorize these systemic risks using a specific taxonomy, the paper aims to contribute to the ongoing dialogue on this important subject. The central inquiry of the paper, as reflected in its title, is whether the inherent ambiguity in the definition of systemic risk, and in its integration into the GPAI CoP taxonomy, will be exploited by GPAI model providers. While acknowledging the potential usefulness of this taxonomy, the paper raises questions about its alignment with existing legislative and practical risk management procedures.
Brief:
The working document examines the critical task of defining “systemic risk” within the EU AI Act, emphasising the inherent difficulties arising from the Act’s phrasing. Article 3(65) provides a definition that Dr. Karathanasis describes as ambiguous, owing to its use of unclear and repetitive language such as “high-impact” and “significant impact”. This lack of precision is a key concern, as it could lead to differing understandings of what qualifies as a systemic risk, thereby creating obstacles to the consistent implementation and enforcement of the Act. The document highlights that two interpretations of systemic risk arise from this vague definition, each presenting its own challenges, particularly concerning the origin of the risk and the extent of its influence on the EU’s internal market.
Against this backdrop of definitional ambiguity, the document analyses the EU AI Act’s strategy of employing the General-Purpose AI Code of Practice (GPAI CoP) to create a structured categorisation of these systemic risks. This taxonomy is intended to offer a more tangible framework for understanding and managing the potential harms linked to general-purpose AI models. The document indicates that the taxonomy classifies potential threats into categories such as cyber offences, discrimination, and loss of control: broad domains in which the application of AI models could have extensive and damaging consequences.
Significantly, the working document details the methodological approach employed to evaluate the GPAI CoP taxonomy, assessing it against five factors: market impact, societal impact, dual impacts, propagation, and context-specificity. Market impact relates to the potential for widespread disruption and instability in the economic landscape resulting from the deployment or misuse of AI models. Societal impact examines the broader effects on communities, social structures, fundamental rights, and individual well-being. Dual impacts capture risks that simultaneously and significantly affect both market stability and societal welfare. Propagation assesses the potential for a risk originating in a specific application or sector to spread and cause harm across interconnected systems, underscoring the systemic nature of the threat. Context-specificity acknowledges that the significance and precise nature of a systemic risk can vary considerably depending on the particular application, the environment in which the AI model is deployed, and the broader socio-economic context.
The central concern raised by the document, as indicated in its title, is whether the lack of precision in defining systemic risk within the EU AI Act, particularly the ambiguous wording of Article 3(65), will be exacerbated or resolved by its integration into the GPAI CoP’s taxonomy. The author’s concern is that, rather than resolving the ambiguity, the taxonomy might in practice create opportunities for GPAI model providers to exploit these definitional weaknesses. The discussion of the need to align the GPAI CoP with both the overarching legislative requirements of the EU AI Act and established risk management procedures highlights a fear that discrepancies or residual ambiguities between the legal definition and the operational taxonomy could be used strategically by model providers to circumvent stricter interpretations or regulatory obligations. For example, a provider might argue that a particular risk, while significant, does not meet the threshold of “systemic” under the Act’s ambiguous definition, or does not fit neatly into the categories outlined in the GPAI CoP taxonomy. The working document concludes by emphasising the significant challenges and complications likely to arise from the ongoing lack of clarity surrounding systemic risk and its taxonomic representation, and by questioning whether this inherent ambiguity will be strategically leveraged by GPAI model providers to their advantage, potentially undermining the intended regulatory oversight.
Cite: Karathanasis, T. (2025) ‘Fitting “Systemic Risks” into a Taxonomy in the GPAI Code of Practice: Will the Resulting Ambiguity be Exploited by GPAI Model Providers?’, 28 Journal of Internet Law 6.
These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the other members of the AI-Regulation Chair or any partner organizations.
This work has been partially supported by MIAI @ Grenoble Alpes (ANR-19-P3IA-0003) and by the Interdisciplinary Project on Privacy (IPoP) commissioned by the Cybersecurity PEPR (ANR 22-PECY-0002 IPOP).