From X-Rays to MRIs: AI Wants a Much Deeper Look at You

Since January, every major AI company has unveiled ways to connect your health data directly to their chatbots. The results are impressive, but the privacy implications are not well understood.

Nicole Nguyen's experiment in the Wall Street Journal this week ("I Uploaded My Blood Work to AI. Am I Oversharing?") was personal, and it scratched at something most coverage of AI and health data doesn't touch: the awkward, ill-defined boundary between a person, their private information, and whatever's really going on behind that screen. AI is providing us with deeply personal experiences, and it's easy to forget that as it becomes more attuned to us. If social media and cloud platforms had an X-ray of us, AI stands to generate MRI scans of who we are as people, not just how we behave online and in public.

Nguyen connected her medical records and Apple Watch data to Claude and Perplexity, asked real health questions, had a UCSF doctor grade the responses, and then — after getting genuinely useful results — decided to purge all of it. Her instinct wasn't "don't do this." It was more like: I can see why people do this, and the advice was mostly good, but I don't want my data caught up in a version of the technology we haven't met yet.

That's a reasonable place to land. But we think the broader conversation is missing a question that matters just as much as "should I share?" The question is: how much of my identity does the AI actually need?

---

The upside is legitimate

Let's start with what the article gets right, because it's important.

AI health tools are useful. A third of U.S. adults have already used AI for health advice, and 41% of them uploaded personal medical information to get it. That's not recklessness. It's a response to a healthcare system where your doctor doesn't call about "normal" results, where getting a specialist appointment takes weeks, and where nobody has a comprehensive view of your full health picture.

When Nguyen asked Claude and Perplexity to assess her cardiovascular risk, recommend supplements, and help with fatigue symptoms, both bots gave advice a doctor called "practical." They cross-referenced lab results with wearable data and flagged things worth investigating. That's a meaningful service, especially for people who don't have easy access to a clinician who'll spend 30 minutes walking through their numbers.

We're not here to tell you that using AI for health questions is a bad idea. It's often a good one.

---

But the connectors change the equation

There's an important difference between pasting a lab result into a chat window and connecting a platform directly to your entire medical record.

When you paste, you choose what to share. You can leave out your name. You can strip the header with your date of birth and insurance ID. You control the boundaries.

When you connect via something like HealthEx or b.well, you're granting access to a much larger dataset, and you're tying it to a verified identity. As Nguyen described, HealthEx required her birth date, phone number, face scan, and a driver's license just to set up the connection. That's not anonymized access. That's a persistent, identity-linked health profile living on someone else's infrastructure.

And these platforms, as the article notes, are not covered by HIPAA. They say they won't train on your health data. They say you can disconnect anytime. But "trust our current policy" is not the same as a legal or architectural guarantee, and policies change, companies get acquired, and servers get breached.

It's worth drawing a comparison to the vitamins and supplements industry. There are probably genuine health benefits in many of those products. But the industry is largely unregulated, the claims aren't independently verified, and consumers are asked to take companies at their word about what's inside the bottle. AI health platforms are in a similar position right now: likely beneficial, probably well-intentioned, but operating without the regulatory framework that would make "trust us" unnecessary. The difference is that if a supplement company mishandles your trust, you've wasted money. If an AI health platform mishandles your data, you can't get it back.

Nguyen herself landed on this: "AI is moving fast, and I don't want my data caught up in a version of the technology we haven't met yet." That's the right frame. The risk isn't today's privacy policy. It's the accumulation of sensitive data on platforms whose future you can't predict.

---

What does the AI actually need to help you?

This is the question we keep coming back to.

To interpret a lipid panel, an AI needs the numbers: your LDL, HDL, triglycerides, maybe your age and sex for context. It does not need to know your name. It doesn't need your insurance carrier, your home address, or the fact that it's linked to your specific medical record at a specific hospital.

To assess cardiovascular risk, it needs your heart-rate data, maybe your blood pressure history, your family history if you choose to share it. It does not need a face scan and a driver's license to deliver that analysis.

The useful part of the interaction is the medical data. The dangerous part is the identity layer wrapped around it.

Most of what makes AI health advice work (the pattern-matching, the cross-referencing of metrics, the translation of clinical jargon into plain English) doesn't require the AI to know who you are. It requires the AI to know what your numbers are. Those are very different things.

---

A spectrum of approaches

Not every health question carries the same risk. The right level of caution depends on what you're sharing, and there's a range of options between "connect everything" and "share nothing."

For general health questions ("what does a high TSH level mean," "what supplements support sleep"), you don't need to connect anything. Just ask. The AI can answer from its training data, and no personal information changes hands.

For interpreting your own results, consider stripping the identifying information first. Download your lab report, remove the header with your name and DOB, and paste the values into the chat. You get the same analysis with none of the identity exposure. This is what some of the more careful users are already doing: downloading from MyChart and sharing selectively rather than granting direct access.

If you do use the connectors, treat them like a temporary tool, not a permanent relationship. Connect when you have a specific question, get your answer, and disconnect. Don't leave a live pipeline to your medical records open indefinitely. And check whether the platform offers a zero-data-retention (ZDR) mode. Some API-level access tiers don't store your prompts or responses at all. If you're sharing sensitive health information, ZDR should be your default, not an afterthought.

Keep your health memory local. One of the subtler risks is that Claude and others store health conversations as part of a personalized memory system, and past conversations can influence future responses. That means an offhand question about homeopathic remedies could skew your future medical advice. Your health context is valuable. It should live on your machine, under your control, not accumulate on a platform where you can't fully see or manage what it's retained about you, or, even if you can, where you may not be the only one with access in certain scenarios.
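If you're comfortable with a little scripting, the manual-redaction step can be automated locally. Here's a minimal sketch in Python that strips common identity fields from a pasted lab report while leaving the clinical values intact. The field patterns and the sample report are illustrative assumptions, not a complete scrubber; real reports vary, so review the output before sharing it.

```python
import re

# Illustrative patterns for common identity fields in a lab report header.
# These are assumptions for the sketch; real reports may format fields differently.
PATTERNS = {
    "NAME":  re.compile(r"Patient(?: Name)?:\s*.+", re.IGNORECASE),
    "DOB":   re.compile(r"(?:DOB|Date of Birth):\s*\d{1,2}/\d{1,2}/\d{2,4}", re.IGNORECASE),
    "MRN":   re.compile(r"MRN:?\s*\d+", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(report: str) -> str:
    """Replace identity fields with placeholders, keeping clinical numbers intact."""
    for label, pattern in PATTERNS.items():
        report = pattern.sub(f"[{label} REMOVED]", report)
    return report

# Hypothetical sample report for demonstration.
raw = """Patient Name: Jane Doe
DOB: 04/12/1985
MRN: 1234567
LDL: 128 mg/dL
HDL: 52 mg/dL
Triglycerides: 140 mg/dL"""

print(scrub(raw))
```

The AI still sees the LDL, HDL, and triglyceride values it needs to give useful advice; the name, birth date, and record number never leave your machine.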

---

The missing layer

Right now, the consumer AI landscape gives you two options for health questions: share everything through a connector, or manually redact things yourself before pasting. The first is convenient but risky. The second is safe but tedious, and most people won't bother.

What's missing is the layer in between: tools that let you get personalized AI health insights without attaching your identity to the data that leaves your device. Tools that handle the scrubbing for you, stripping names, dates of birth, account numbers, and identifying details while preserving the clinical content that makes the AI's response useful.

That's why we're building Scrubbit. Not because AI health tools are bad (they're not), but because there should be a way to use them without building a permanent, identity-linked medical profile on a platform that might look very different in two years.

Your health data should be an asset you control, not a deposit you make on someone else's servers and hope for the best.

---

Scrubbit is in early access. We anonymize sensitive documents locally before they touch any AI, so the model gets the context it needs and your identity stays on your machine. Join the waitlist →