How should Europe enforce data protection when AI chatbots operate across borders? In a notable move, Berlin’s Data Protection Authority invoked Article 16 of the Digital Services Act (DSA) to ask Apple and Google to delist DeepSeek, the widely used Chinese AI chatbot, over alleged GDPR-breaching transfers of EU users’ data to China. The request repurposes a tool designed for illegal online content to tackle a complex AI data-flow issue, raising significant questions for AI governance.
Why this matters for AI: modern chatbots don’t just “answer questions.” They typically transmit prompts and metadata to remote servers, log interactions, may retain outputs, and sometimes reuse data for model improvement. For globally hosted systems, cross-border data flows are the norm, not the exception. That technical reality sits at the heart of the DeepSeek dispute and shows how AI deployment choices (where inference happens, what is logged, and how long it is kept) can trigger obligations under the GDPR and, now, the DSA.
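To make those deployment choices concrete, here is a minimal, purely illustrative Python sketch of the kind of data a chatbot client might transmit and a provider might retain. Every name, field, and value in it (ChatRequest, RetentionPolicy, the region and retention figures) is a hypothetical assumption for illustration, not DeepSeek’s actual architecture or any real provider’s API.

```python
# Hypothetical sketch: what leaves a user's device and what a provider might
# keep. All classes, fields, and values are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ChatRequest:
    """Data typically sent from the user's device to a remote inference server."""
    prompt: str                                            # the user's text, potentially personal data
    user_id: str                                           # account or device identifier
    client_metadata: dict = field(default_factory=dict)    # e.g. locale, app version, region


@dataclass
class RetentionPolicy:
    """Deployment choices that shape GDPR exposure."""
    inference_region: str       # where the model actually runs
    log_prompts: bool           # are raw prompts stored?
    retention_days: int         # how long interaction logs are kept
    reuse_for_training: bool    # are logs fed back into model improvement?


def handle(request: ChatRequest, policy: RetentionPolicy) -> str:
    """Server-side handling: the cross-border flow happens at ingestion,
    before any answer is generated."""
    if policy.log_prompts:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": request.user_id,
            "prompt": request.prompt,
            "meta": request.client_metadata,
            "expires_after_days": policy.retention_days,
            "training_eligible": policy.reuse_for_training,
        }
        print(f"stored in {policy.inference_region}: {record}")
    return "model answer would be generated here"


# Example: an EU user's prompt processed and retained outside the EU.
handle(
    ChatRequest(prompt="Hallo, wie ist das Wetter in Berlin?", user_id="eu-user-42",
                client_metadata={"locale": "de-DE", "app_version": "1.2.0"}),
    RetentionPolicy(inference_region="non-EU data centre", log_prompts=True,
                    retention_days=365, reuse_for_training=True),
)
```

Even this toy example shows why the legal characterisation matters: the data is collected directly from the EU user by a remote provider, which is exactly the scenario discussed below.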
In his new European Law Blog article, “From Transfers to Takedowns: Can Article 16 DSA Police GDPR Violations?”, Prof. Theodore Christakis argues that Berlin’s path is a risky shortcut:
- Shaky GDPR hook. The “illegal transfer” allegation under Chapter V looks fragile where non-EU controllers collect data directly from EU users—a scenario the EDPB treats as not a transfer at the collection stage. Other GDPR duties may still apply, but Chapter V is the wrong peg.
- Wrong DSA tool. Article 16 notices are non-binding and nudge platforms into arbitrating complex privacy disputes. If a binding platform action is truly needed, the DSA provides Article 9 orders—reasoned, specific, proportionate, and appealable.
A two-track framework for AI cases. The piece proposes a practical approach that keeps lanes clear while recognising AI’s cross-border reality:
- Track 1 (default): Where an AI provider has an EU presence or cooperates, fix issues via GDPR tools (investigation, corrective measures, fines), with courts as backstop.
- Track 2 (ultima ratio): Where a provider is non-established in the EU and non-cooperative, and GDPR avenues have failed, use a binding, narrowly tailored Article 9 DSA order—not a mere notice—to target specific distribution points (e.g., app listings).
This balanced path respects both regimes—GDPR for privacy enforcement, DSA for intermediary obligations—and avoids turning platforms into default privacy tribunals for AI. It also squarely addresses the AI dimension: large-scale chatbots entail continuous data flows that must be governed without collapsing distinct legal frameworks.
🔗 Read the full analysis on the European Law Blog: From Transfers to Takedowns: Can Article 16 DSA Police GDPR Violations?