New Study: “You Trust Your Chatbot With Everything. Should You?” 

Hundreds of millions of people confide their most intimate secrets to AI chatbots every day. The interface invites intimacy; the fine print reserves broad rights most users will never read. This first-of-its-kind study maps what really happens to your words across ChatGPT, Gemini, Claude, Grok, and DeepSeek. 

You Trust Your Chatbot With Everything. Should You? Part 1: How The Controller Uses Your Chat Data

Theodore Christakis, Chair Responsible AI – Legal, March 2026

Consumer chatbots have become the world’s most trusted strangers. Hundreds of millions of people now confide health symptoms, legal strategies, financial anxieties, and moments of acute emotional distress to systems that feel private but are governed by nothing resembling professional secrecy. This study offers the first comprehensive academic attempt to map the internal privacy boundary of consumer chatbot conversations: how providers handle the data users entrust to them, where the protections fall short of what the interface invites users to expect, and what constraint-based alternatives could look like.

Through a comparative policy-and-interface analysis of five major services (ChatGPT, Gemini, Claude, Grok, DeepSeek), including four detailed comparative tables, this Part 1 examines the internal boundary across four dimensions: 

  1. training and improvement use; 
  2. human review of conversations; 
  3. advertising and monetisation; and 
  4. operational sharing and ecosystem spillover.

The findings do not reveal a landscape of abuse. They reveal a landscape of structural opacity. Every major provider now trains on consumer chats by default. Every provider reserves human access to conversations. Advertising has entered the chat, with personalisation enabled by default. And “no sale” commitments, however genuine, do not disclose the full scope of who may access a conversation inside the provider’s own supply chain.

The study advances ten practical recommendations. At their centre sits Sealed Mode: a clearly labelled consumer pathway for high-stakes topics (starting with health and wellbeing) where the default architecture materially constrains downstream reuse and insider access, combining no training, no advertising, siloed personalisation, strict retention, minimised human review, and cryptographic hardening. The feasibility of this approach is no longer speculative: Apple’s Private Cloud Compute and Meta’s Private Processing for WhatsApp demonstrate that constraint-based privacy for cloud AI is already deployed at consumer scale. Sealed Mode shifts the privacy boundary from promise-based to constraint-based, because the most sensitive conversations deserve protections commensurate with the trust users place in them.
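
By way of illustration only, the sketch below expresses those Sealed Mode defaults as a machine-checkable policy object. It is a minimal, hypothetical construction for this page: every name in it (SealedModePolicy, is_constraint_based, the 30-day retention figure) is an assumption of this sketch, not an API of any provider or a specification taken from the paper.

    # Illustrative sketch only: the Sealed Mode defaults described in the study,
    # modelled as a policy that a client could verify rather than merely trust.
    # All names and values here are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SealedModePolicy:
        """Default constraints for a high-stakes (e.g. health) conversation."""
        train_on_content: bool = False   # no training or model-improvement use
        serve_ads: bool = False          # no advertising or ad personalisation
        personalisation: str = "siloed"  # memory stays inside the sealed silo
        retention_days: int = 30         # strict retention window (illustrative)
        human_review: str = "minimised"  # e.g. abuse escalation only
        enclave_attested: bool = True    # cryptographic hardening of the compute

    def is_constraint_based(policy: SealedModePolicy) -> bool:
        """Check that every Sealed Mode guarantee holds by construction."""
        return (
            not policy.train_on_content
            and not policy.serve_ads
            and policy.personalisation == "siloed"
            and policy.retention_days <= 30
            and policy.human_review == "minimised"
            and policy.enclave_attested
        )

    if __name__ == "__main__":
        assert is_constraint_based(SealedModePolicy())
        print("Sealed Mode defaults satisfy the constraint-based checklist.")

Expressing the defaults this way captures the promise-based versus constraint-based distinction at the heart of the proposal: a guarantee encoded in the system's configuration and attested compute can be checked, rather than merely asserted in a privacy policy.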

Part 2 (forthcoming) will examine the external boundary: civil discovery, government-compelled access, and how the retention choices documented here amplify breach exposure.

To download the paper, [click here].


To cite this article: T. Christakis, You Trust Your Chatbot With Everything. Should You? Part 1: How The Controller Uses Your Chat Data, AI Regulation Papers, 26-03-2, AI-Regulation.com, March 2026.

These statements are attributable only to the author and do not necessarily reflect the views of the other members of the AI-Regulation Chair or any partner organisations. 

This work has been partially supported by MIAI @ Grenoble Alpes (ANR-23-IACL-0006) and by the Interdisciplinary Project on Privacy (IPoP) of the Cybersecurity PEPR (ANR-22-PECY-0002 IPOP).

