Between January and March 2026, five major technology companies (OpenAI, Anthropic, Microsoft, Amazon, and Perplexity) launched health-specific AI products connecting medical records, wearables, and wellness apps to their chatbot platforms. Every one followed the same playbook, and every one is restricted to the United States. This article offers the first comparative academic analysis of what is arguably the most consequential AI development of 2026, one that has not yet received serious academic or regulatory scrutiny. Read the Executive Summary below and download the full study.
Context. This follow-up article builds on my March 2026 study “You Trust Your Chatbot With Everything. Should You? Part 1: How The Controller Uses Your Chat Data”, which proposed Sealed Mode: a confidentiality framework for sensitive chatbot conversations built into product architecture (no training, no advertising, siloed personalisation, strict retention, minimised human review, cryptographic hardening). The article analyses all five health AI products against Sealed Mode’s six components through a comparative table covering sixteen dimensions, and maps the broader data protection and governance questions this convergence creates.
Key Findings and Questions the Article Raises
1. The market has validated Sealed Mode’s core intuition. Five companies independently concluded that health conversations cannot be governed by the same defaults as ordinary chat. But every product has converged on health data integration hubs (connecting records, apps, wearables) rather than on protecting the conversational disclosures that hundreds of millions of users already make without connecting anything.
2. The European exclusion reveals a missed opportunity. Every product bundles privacy protections with data integration features. It is the integration dimension that triggers regulatory hurdles in Europe (GDPR Article 9, the Medical Device Regulation, the AI Act). Providers could have offered the privacy-protective dimension as a standalone feature globally. By bundling both, they excluded Europeans from both. The paradox: the users most protected by data protection law are the ones denied access to the most privacy-protective chatbot feature currently available.
3. None of the five products meets the full Sealed Mode standard. All five offer no-training commitments and data isolation, but the framework’s other protective elements are missing. The governance frameworks, moreover, remain under each provider’s unilateral control and have not been tested by time, commercial pressure, or independent scrutiny.
4. There is no free lunch. Several of these products are free or bundled with subscriptions, but the strategic logic is clear: Amazon channels users to One Medical and Amazon Pharmacy; OpenAI deepens engagement on a platform it is simultaneously monetising through advertising; Perplexity and Anthropic restrict health features to paying subscribers. Once a user connects years of medical records and wearable data to one platform, switching costs become prohibitive. The question is whether privacy commitments made during the trust-building phase of launch will survive the monetisation phase that follows.
5. The gold mine question. “Not used to train foundation models” does not mean “not used.” It does not preclude product analytics, quality improvement, or what Amazon calls training on “abstracted patterns.” These platforms could constitute the largest aggregation of health data in history. Because these consumer products are not HIPAA-covered entities, US federal law places no constraint on their secondary use of health data beyond each company’s voluntary commitments, and privacy policies can be modified at any time.
6. Private health hubs vs. the European Health Data Space. The EU has spent years building the EHDS, a public infrastructure for secondary use of health data with strict institutional gatekeeping. A private AI platform aggregating consumer health data on the basis of consent could derive comparable epidemiological and pharmaceutical insights without passing through that governance. Even under GDPR, this structural asymmetry would persist. It risks creating a two-track system where the most demanding governance requirements apply to public institutions and academic researchers, while the largest health data aggregations sit in private hands under less structured, consent-based frameworks.
7. Cybersecurity concentration risk. Five companies are centralising health data linked to personal identities, medical records, and years of behavioural patterns. A single breach could expose health data at a scale never seen before. Security architectures remain undisclosed; third-party intermediaries (b.well, HealthEx, Terra API) each add a point of vulnerability.
8. Structural tensions with core GDPR principles. The architectural choices behind these products raise questions under several foundational principles and requirements of EU data protection law, including data minimisation, storage limitation, purpose limitation, controller and processor complexity, data portability, and the protection of children’s data.
***
These questions are not reasons to block health AI products whose potential to help people better understand and manage their health is real. They are reasons to get the governance right before the gap between the pace of product launches and the pace of oversight becomes impossible to close.
The article also acknowledges that data integration in the health context is not driven solely by business logic; it can serve a genuine clinical function, since generic health guidance that ignores a user’s medical history, medications, or pre-existing conditions may be not merely unhelpful but actively harmful. This reinforces rather than undermines the case for strong governance: the more data these systems ingest, the more consequential their outputs become.
Policymakers and regulators in Europe should work proactively, and in dialogue with providers, to find constructive and protective solutions. One avenue worth exploring is the use of regulatory sandboxes, already provided for under the AI Act, to allow health AI products to operate in Europe under supervised conditions with strong, verifiable privacy safeguards, including the kind of architectural protections that Sealed Mode envisions. The current outcome, where Europeans are excluded entirely, serves neither innovation nor protection.
The real question is no longer whether differentiated privacy for sensitive chatbot conversations is conceivable. Five companies answered that in three months. The question is whether the privacy-protective core can be extracted from the integration products it is bundled with and offered as a standalone standard, available to every user, everywhere.
To download the full paper [click here].
To cite this article: T. Christakis, “The Health AI Agent Rush: Five Companies, Your Health Data, and the Governance Questions Nobody is Asking”, AI Regulation Papers, 26-03-3, AI-Regulation.com, March 2026.
These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the other members of the AI-Regulation Chair or any partner organisations.
This work has been partially supported by MIAI @ Grenoble Alpes (ANR-23-IACL-0006) and by the Interdisciplinary Project on Privacy (IPoP) of the Cybersecurity PEPR (ANR 22-PECY-0002 IPOP).
