AI Act and DSA: Challenges for EU Innovation

The Chair is pleased to announce a new paper by its research fellow, Dr. Theodoros Karathanasis, entitled ‘The Regulatory Interplay Between the AI Act and the DSA: Challenges, Burden, and Rationalization for AI Innovation in the EU’.

Abstract:

This paper investigates the intricate challenge the European Union (EU) faces in balancing AI innovation with robust fundamental rights protection, specifically under the AI Act and the Digital Services Act (DSA). A key conceptual distinction is highlighted: the AI Act defines systemic risk for General-Purpose AI (GPAI) models based on their intrinsic capabilities and propagation across the value chain, while the DSA addresses platform-level systemic risks arising from operational environments. This analytical divergence reveals uncovered systemic risks inherent to GPAI models, such as intrinsic misalignment and the facilitation of harmful capabilities, which are not fully addressed by the DSA. A significant consequence identified is over-assessment, where regulatory ambiguity in both Acts necessitates exhaustive and costly risk evaluations, particularly stifling innovation for Small and Medium-sized Enterprises (SMEs). The paper advocates for strategic simplification, including refining risk taxonomy, streamlining compliance, providing clear and timely guidance, and fostering multi-stakeholder governance, to balance safety concerns with the imperative to maintain EU leadership in AI innovation.

Brief:

The EU is navigating a complex regulatory landscape for Artificial Intelligence, aiming to foster innovation by reducing regulatory burdens, especially for smaller entities, while simultaneously safeguarding fundamental rights, public safety, and democratic processes from the widespread negative impacts of advanced AI systems. The integration of AI systems into services covered by the DSA, particularly those of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), is especially pertinent.

The paper’s core analysis distinguishes how the AI Act and DSA approach “systemic risks”:

• The AI Act provides a precise and formal definition of systemic risk tied exclusively to GPAI models. This risk is linked to their high-impact capabilities and potential for “significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”. Criteria for designation include computational power (e.g., training compute exceeding 10^25 FLOPs; see the illustrative sketch after this list), dataset size, adaptability, and scalability. Obligations for providers of GPAI models with systemic risk include rigorous model evaluations, adversarial testing, continuous risk mitigation, and cybersecurity protection for the model itself.

• In contrast, the DSA focuses on ensuring a safe, predictable, and trustworthy online environment, establishing a risk-management framework for VLOPs and VLOSEs without formally defining systemic risk. It obliges these entities to assess potential systemic risks stemming from the design, functioning, and use of their services, including algorithmic systems such as recommender and advertising systems, as well as risks arising from misuse by users. The “systemic” nature here derives from these services’ broad reach: more than 45 million average monthly active recipients in the EU.
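To make the computational-power criterion concrete, the following minimal Python sketch (not drawn from the paper) estimates a model’s training compute using the common ~6 × parameters × training-tokens approximation for dense transformers, and compares the result against the 10^25 FLOP presumption threshold of Article 51(2) AI Act. The function names and model sizes are hypothetical illustrations.

```python
# Illustrative only: a rough check of a GPAI model's training compute against
# the AI Act presumption threshold (Article 51(2): > 10^25 FLOPs).
# The ~6 * parameters * tokens rule of thumb and the model figures below are
# assumptions for illustration, not taken from the paper.

AI_ACT_FLOP_THRESHOLD: float = 1e25  # Art. 51(2) AI Act presumption threshold


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def presumed_high_impact(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute exceeds the presumption threshold."""
    return estimate_training_flops(n_params, n_tokens) > AI_ACT_FLOP_THRESHOLD


# Hypothetical models: 70B and 400B parameters, each trained on 15 trillion tokens.
for n_params in (70e9, 400e9):
    flops = estimate_training_flops(n_params, 15e12)
    print(f"{n_params:.0e} params -> {flops:.1e} FLOPs, "
          f"presumed high-impact: {presumed_high_impact(n_params, 15e12)}")
```

Under this rough heuristic, only the larger hypothetical model would trigger the presumption, which is one reason the designation criteria also weigh dataset size, adaptability, and scalability rather than raw compute alone.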

This distinction highlights that AI Act risks originate from the inherent capabilities of the foundational AI model, even before specific applications are built, whereas DSA risks arise from the operational environment and user interactions within large online platforms. A crucial gap identified is that of uncovered systemic risks. While the DSA effectively manages risks from platform operations (e.g., how a recommender system promotes content), it may not comprehensively address risks intrinsic to the underlying AI model’s capabilities and development, independent of specific service contexts. The AI Act’s “unless clause” (Recital 118) is key here: if GPAI models embedded in VLOPs/VLOSEs introduce significant systemic risks not covered by the DSA, direct AI Act obligations on those models will apply, overriding any presumed DSA compliance.

The paper also exposes a significant challenge: over-assessment driven by regulatory ambiguity. Both the DSA’s broad and abstract risk categories (e.g., “civic discourse,” “electoral processes”) and the AI Act’s undefined terms and lack of specific thresholds for GPAI models contribute to legal uncertainty. This compels regulated entities, particularly GPAI model providers, to undertake exhaustive and costly risk evaluations to demonstrate compliance or rebut presumptions, rather than genuinely preventing harm. This burden is exacerbated by the inherent complexity of EU law, characterized by its volume, linguistic ambiguity, and extensive cross-references, making efficient understanding and compliance difficult, especially for SMEs.

To address these challenges, the European Commission’s commitment to simplifying its regulatory framework is seen by the author as a pivotal opportunity. The paper offers several recommendations to simplify the interplay between the AI Act and DSA, and optimize regulatory effectiveness:

• Refine systemic risk taxonomy and clarify proportionality principles: Develop more granular, sector-specific guidelines that quantify or categorize risk severity and probability, reducing the incentive for exhaustive, undifferentiated “over-assessment”.

• Streamline compliance pathways and standardize documentation: Integrate procedural obligations for risk management, monitoring, and documentation where possible, pursuing the “once only” principle to prevent redundant reporting.

• Prioritize proactive guidance and structured updates: Accelerate the issuance of clear guidelines, including examples of high-risk and non-high-risk AI system use cases, to address textual ambiguities and interpretive latitude.

• Nurture effective governance and multi-stakeholder collaboration: Strengthen inter-institutional commitment to better law-making, ensure thorough stakeholder engagement, and implement systematic ex-post reviews to capture cumulative regulatory impacts.

Cite: Karathanasis, Theodoros, The Regulatory Interplay Between the AI Act and the DSA: Challenges, Burden, and Rationalization for AI Innovation in the EU (July 05, 2025). Available at SSRN: https://ssrn.com/abstract=5340079 or http://dx.doi.org/10.2139/ssrn.5340079

These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the other members of the AI-Regulation Chair or any partner organizations.

This work has been partially supported by MIAI @ Grenoble Alpes (ANR-19-P3IA-0003) and by the Interdisciplinary Project on Privacy (IPoP) commissioned by the Cybersecurity PEPR (ANR-22-PECY-0002 IPOP).
