There's No Such Thing as Robot-Client Privilege (Yet). Here's What That Means for You.

If you've ever used ChatGPT, Claude, or Gemini to research a legal question like a divorce, an employment dispute, or a business disagreement, a recent court ruling says those conversations could be used against you. Here's how to keep using AI wisely.

This article is educational, not legal advice. If you're facing a legal situation, you need a lawyer, ideally before you start typing anything into an AI chatbot. What follows is a framework for thinking about risk, based on recent court rulings and publicly available provider policies. Your specific situation may be different.

---

In February 2026, a federal judge in Manhattan made a ruling that matters for anyone who has ever used an AI chatbot to think through a legal problem.

In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that 31 documents a criminal defendant generated through conversations with Anthropic's Claude AI were not protected by attorney-client privilege, and could be used against him in court. (O'Melveny analysis; Debevoise analysis)

What did Heppner do wrong? After being investigated for securities fraud, he used the free consumer version of Claude to research his legal situation. That part isn't unusual. Millions of people use AI chatbots to understand legal issues. But Heppner went further. He typed in information his lawyers had told him, asked Claude to help build defense strategies, and later shared those AI-generated documents with his attorneys. (Proskauer summary)

The government found these conversations during a search of his devices, and the court ruled them fully discoverable. On these facts (consumer AI, clear third-party terms, no attorney involvement), no privilege, no protection.

The reasoning rested on principles that extend well beyond Heppner's situation:

An AI chatbot is not your lawyer. "Because Claude is not an attorney," Judge Rakoff wrote, "that alone disposes of Heppner's claim of privilege." It doesn't matter how sophisticated the AI's response is. A conversation with a chatbot is not a conversation with counsel. (Jones Walker analysis)

You agreed to the company's terms. When you sign up for any consumer AI service, you accept a privacy policy that governs what the company can do with your data. In Heppner's case, Judge Rakoff pointed directly to Anthropic's consumer terms, which state that Anthropic collects user inputs and outputs and may disclose data in response to valid legal process, including subpoenas, warrants, or claimed emergencies involving imminent harm. That's standard language across most consumer software companies, not unique to AI. But the court treated it as evidence that Heppner had no reasonable expectation of confidentiality, and that's the part that matters for privilege.

Working alone isn't "work product." Because Heppner used AI on his own initiative, without his attorneys directing him to do so, the court found no basis for work-product protection either. (Goodwin analysis)

---

Why This Matters Beyond Heppner

Heppner was a criminal securities fraud case, and another court might reach a different result on different facts, as Warner showed the same week. But the underlying principles are not unique to securities law: an AI isn't counsel, consumer terms undermine confidentiality, and self-directed AI research isn't attorney work product. They can surface in any dispute where opposing counsel decides to look.

Going through a divorce and asking ChatGPT about custody strategies. In an employment dispute and describing your employer's behavior to Claude. Running a small business and pasting a contract into Gemini to understand your obligations. Dealing with a landlord-tenant conflict, an insurance claim, or a partnership disagreement. In any of these situations, those AI conversations are sitting on a corporation's servers, subject to subpoena, and potentially discoverable. How a specific judge in a specific jurisdiction treats them will depend on the facts, but the risk is real.

In January 2026, a judge in the New York Times v. OpenAI copyright case affirmed an order compelling OpenAI to produce approximately 20 million ChatGPT conversation logs, including deleted conversations and "temporary" chats that users believed had disappeared. (Bloomberg Law) Sam Altman, OpenAI's CEO, responded by calling for a new legal concept of "AI privilege," arguing that "talking to an AI should be like talking to a lawyer or a doctor." (Quartz) But that concept doesn't exist yet, and there's no guarantee it ever will.

Two courts, opposite results, same week. The same week as Heppner, a Michigan federal court in Warner v. Gilbarco reached the opposite conclusion, protecting a pro se plaintiff's AI-assisted work product. Judge Patti warned that treating any upload to an AI platform as a waiver "would nullify work-product protection in nearly every modern drafting environment." (National Law Review comparison) The fact that courts are reaching contradictory conclusions means you cannot assume your AI conversations are safe.

---

What's Actually Happening to Your Data

Most people assume that paying for a subscription buys them privacy. It doesn't.

A 2025 Stanford study found that all six major U.S. AI companies use consumer chat data for model training by default. Some providers now offer controls: Anthropic, for example, introduced a user-controlled training toggle in August 2025, and OpenAI lets you disable training in settings. But for most providers, training remains switched on by default, and many users never change their settings. (IAPP analysis)

What surprises most people is that paying $20 a month doesn't materially change this picture. ChatGPT Plus, Claude Pro, Gemini Advanced: the privacy terms at the paid consumer tier are functionally similar to free. You're paying for a smarter model and faster responses, not for a different privacy posture.

How long is your data kept?

- Google Gemini — up to 36 months, with routine human review
- OpenAI ChatGPT — indefinitely until you delete, then 30 more days
- Anthropic Claude — 30 days if training is off, up to 5 years if on

These are consumer-tier retention periods as of early 2026. Paid subscriptions ($20/mo) don't change them. Policies update frequently, so always check the current terms.

Deleting your conversations may not actually delete them. AI memory features, now standard in ChatGPT, Claude, and Gemini, create persistent data stores that survive conversation deletion. The AI remembers what you told it even after you delete the chat where you said it. And as the New York Times v. OpenAI case showed, courts can order preservation of conversations you thought you'd erased.

Your data can be disclosed in response to legal process. Every major provider's privacy policy allows disclosure of user data in response to valid legal process (subpoenas, warrants, court orders) or in emergencies involving imminent harm. This is standard language across SaaS companies, not a blanket data-sharing arrangement. But for privilege analysis, the point isn't whether the company will hand over your data. It's that the terms permit it, which is enough for a court to find you had no reasonable expectation of confidentiality.

The privacy upgrade doesn't happen until you move to business or enterprise tiers, and those typically require organizational accounts, not individual subscriptions. For a regular person using AI in their personal life, the consumer tier is what you've got.

Unless you know where else to look.

---

The AI Privacy Spectrum

Most people think there are two options: use a consumer chatbot, or don't use AI. In reality, there's a spectrum of privacy postures, and the options most people don't know about sit in the middle.

| Tier | What happens to your data | Examples |
|------|---------------------------|----------|
| Local AI | Nothing leaves your machine. No server, no retention, no subpoena target. | Ollama, LM Studio, Jan.ai |
| Zero-Retention Cloud | Processed and immediately forgotten. No logs, no storage. | Groq, Fireworks, Together AI |
| API (Short Retention) | No training. 7–30 day retention for abuse monitoring. | Anthropic API (7d), OpenAI API (30d) |
| Consumer Chatbot | Training by default. Weeks to years of retention. Human review possible. | chatgpt.com, claude.ai, gemini.google.com |

---

A Layered Approach to Using AI Wisely

The answer isn't to stop using AI. These tools are genuinely powerful for understanding your situation, researching your options, and preparing smarter questions for your attorney. The answer is to match how you use AI to the sensitivity of what you're asking.

Think of it like conversations in physical space. Asking a librarian to help you understand divorce law is fine. Discussing your specific case strategy at full volume in a crowded restaurant is not. And sharing what your lawyer told you with a stranger? That's how you lose privilege. Right now, most people are treating AI like a private conversation when it's actually closer to the crowded restaurant, and sometimes the stranger.

1. Change your habits — Free. Immediate. Ask hypotheticals, never type in what your lawyer told you.

2. Anonymize your inputs — Strip names, dates, and details before they reach the AI.

3. Use a better pipeline — API-tier or zero-retention services instead of consumer chatbots.

4. Go local for the sensitive stuff — Run AI on your own machine. Nothing leaves your computer.

Layer 1: Change Your Habits

These behavioral changes cost nothing, require no technology, and dramatically reduce your exposure.

Ask hypotheticals, not your facts. There's a world of difference between "What are common defenses in wrongful termination cases involving whistleblower retaliation?" and "I was fired by Acme Corp on March 3rd after I reported safety violations to my manager Dave Chen. What should I do?" The first is a library research question. The second is a detailed admission sitting on a corporate server. Train yourself to ask about categories of situations, not your specific one.

Never type in what your lawyer told you. This is what specifically destroyed Heppner. He took privileged communications from his attorneys, including their strategic advice and their analysis of the facts and law, and typed them into Claude. Once he did that, the privilege was gone. Your attorney's advice is protected only as long as you keep it between you and your attorney. A chatbot is not part of that circle.

If your lawyer tells you to use AI, get it in writing. The Heppner court left one door open: it suggested the analysis might differ if counsel had directed the AI use, with the chatbot functioning as an agent of the attorney under what's called a Kovel arrangement. If your lawyer wants you to use AI to research something, ask them to document that direction. This theory is grounded in how courts treat human agents and contractors, and multiple Heppner commentators have flagged it as a plausible argument, but it has not been tested with AI tools in court. It's a narrow path, not a settled safe harbor. (McDermott Will & Emery analysis)

Check your training and privacy settings. Most providers now offer toggles to prevent your conversations from being used to train AI models. Anthropic and OpenAI both offer this, though the defaults vary. Opting out of training doesn't stop your data from being stored or make it immune to subpoena, but it can shorten the retention window and limit who inside the company accesses it.

The simplest rule: Treat every AI conversation as if it were a permanent, discoverable record. Not every provider retains data the same way, but as a risk-management habit, this is the right default. Type accordingly.

---

Layer 2: Anonymize Your Inputs

If you need to discuss something case-related with AI (and there are good reasons you might, like understanding your options or preparing questions for your lawyer), you can strip out the identifying details first.

What you might type:

My employer ==Acme Corp== fired me on ==March 3rd, 2026== after I reported ==OSHA violations== to my manager ==Dave Chen==, and HR director ==Sarah Park== refused to investigate.

After anonymization:

My employer ORG_1 fired me on DATE_1 after I reported VIOLATION_TYPE_1 to my manager PERSON_1, and HR director PERSON_2 refused to investigate.

You still get useful, situation-relevant answers. But the conversation on the provider's servers no longer contains the names, dates, and details that would make it useful evidence in litigation.
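The core idea is simple enough to sketch in a few lines. Here's a minimal, illustrative Python version: you supply the sensitive strings and their categories, the text is scrubbed before it goes anywhere, and a token-to-original mapping stays on your machine so you can re-identify the AI's answer. (Real tools detect entities automatically with NER models; the hand-supplied `entities` dict here is a simplifying assumption.)

```python
import re

def anonymize(text, entities):
    """Replace known sensitive strings with numbered category tokens.

    `entities` maps each real value to a category like "PERSON" or "ORG".
    Returns the scrubbed text plus a local mapping (token -> original)
    that never leaves your machine.
    """
    counters = {}
    mapping = {}
    # Replace longer strings first so "Dave Chen" wins over a bare "Dave".
    for value, category in sorted(entities.items(), key=lambda kv: -len(kv[0])):
        counters[category] = counters.get(category, 0) + 1
        token = f"{category}_{counters[category]}"
        mapping[token] = value
        text = re.sub(re.escape(value), token, text)
    return text, mapping

def reidentify(text, mapping):
    """Swap tokens back to real values in the AI's response, locally."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = "My employer Acme Corp fired me on March 3rd, 2026 after I reported it to Dave Chen."
scrubbed, mapping = anonymize(prompt, {
    "Acme Corp": "ORG",
    "March 3rd, 2026": "DATE",
    "Dave Chen": "PERSON",
})
# scrubbed: "My employer ORG_1 fired me on DATE_1 after I reported it to PERSON_1."
```

Only `scrubbed` would be sent to the AI; `mapping` stays local, and `reidentify` restores the names in the reply.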

Tools like Scrubbit automate this process. They detect names, dates, organizations, and other personally identifiable information in your text and replace them with anonymous tokens before anything reaches the AI. This is faster and more reliable than doing it by hand, especially with longer documents.

What anonymization can do: Sever the link between a conversation and your specific case. Strip out the PII that makes a conversation damaging. Turn "evidence" into something much less useful to opposing counsel.

What anonymization can't do: Protect the strategic substance of what you're discussing. "Should PERSON_1 argue the transactions were authorized?" still reveals a line of legal thinking, even without real names. Anonymization doesn't create attorney-client privilege, and it doesn't make a conversation undiscoverable. It limits the damage if conversations are discovered, which is meaningfully different from preventing discovery altogether.

---

Layer 3: Use a Better Pipeline

There's a tier of AI access between "consumer chatbot" and "run your own server" that most people don't know exists, and it offers dramatically better privacy.

When you use chatgpt.com or claude.ai, you're on the consumer tier. Your data may be used to train models, may be retained for months or years, and may be reviewed by humans. But the same AI models are also available through developer APIs with fundamentally different terms: no training on your data, short retention periods (7 days for Anthropic's API, 30 days for OpenAI's), and no human review.
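The API tier isn't exotic: it's the same models behind a plain HTTP interface. Here's a minimal sketch of what such a request looks like, using the OpenAI-compatible format that services like OpenRouter accept. The endpoint, model name, and key are placeholders for illustration; the request is built but not sent, since sending requires a real account.

```python
import json

# Illustrative endpoint in the OpenAI-compatible style (OpenRouter shown
# as an example); model name and API key are placeholders, not advice.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, model="anthropic/claude-3.5-sonnet", api_key="YOUR_KEY"):
    """Assemble the headers and JSON body for an API-tier chat request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request(
    "What are common defenses in wrongful termination cases?"
)
```

The point of the sketch is how little separates the tiers: same model, same question, but under API terms the provider isn't training on the conversation and keeps it only briefly.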

Some providers go even further. Services like Groq, Fireworks AI, and Together AI process your requests and immediately discard them. Zero data retention by default. Your conversation is never stored, never logged, never available to be subpoenaed, because it simply doesn't exist after the response is generated.

The catch is that most of these services are designed for developers, not regular consumers. You can't just go to groq.com and start chatting the way you would at chatgpt.com. But a growing number of consumer-friendly tools, including Scrubbit, route your conversations through these API-tier services on your behalf. Scrubbit's built-in chat connects through OpenRouter to API-level models, meaning your conversations get API-tier privacy (no training, short or zero retention) without requiring you to set up anything technical. And because Scrubbit anonymizes your content before it ever reaches the API, even the short retention window holds tokens like PERSON_1 and ORG_2, not your actual names and details.

Two layers of protection. Anonymized content on a no-training, minimal-retention pipeline is a fundamentally different risk profile from typing raw personal details into a consumer chatbot.

---

Layer 4: Local AI for Your Most Sensitive Thinking

For conversations you truly don't want leaving your computer, local AI is the gold standard. Applications like Ollama, LM Studio, and Jan.ai let you download an AI model and run it entirely on your own machine. No internet connection required. No servers involved. No corporate privacy policy to worry about. Nothing to subpoena, because no third party ever had your data.

Scrubbit connects directly to Ollama, so you can combine anonymization with fully local AI: your content is anonymized, and it never leaves your computer.
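To give a sense of how simple the local path is: Ollama serves a small HTTP API on localhost, so a request never crosses the network boundary of your machine. The sketch below builds such a request; the model name is just an example, and you'd need Ollama installed with a model pulled (e.g. `ollama pull llama3.2`) to actually get a reply.

```python
import json
import urllib.request

# Ollama listens on your own machine only; nothing here reaches the internet.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_body(prompt, model="llama3.2"):
    """JSON body for Ollama's /api/generate endpoint (model name is an example)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt, model="llama3.2"):
    """Send a prompt to a locally running Ollama server and return its reply."""
    data = json.dumps(build_body(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

body = build_body("Summarize the elements of a wrongful termination claim.")
```

With Ollama running, `ask_local(...)` returns the model's answer, and the entire exchange lives and dies on your own hardware.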

The tradeoff is real: local models require a reasonably powerful computer, and they're noticeably less capable than frontier cloud models like GPT-4 or Claude. They're slower, less nuanced, and more prone to errors. But for organizing your thoughts, drafting questions for your attorney, or processing sensitive documents, they're perfectly serviceable, and the privacy is absolute.

You don't need local AI for everything. You need it for the small subset of questions where absolute privacy matters more than answer quality.

---

What the Law Hasn't Caught Up to Yet

We're in an awkward interim period. Hundreds of millions of people are using AI chatbots for sensitive personal matters, but the legal frameworks governing that use are still being written, often in contradictory ways by different courts in different jurisdictions.

The Heppner ruling says your AI conversations aren't privileged. The Warner ruling says AI-assisted work product may be protected. The NYT v. OpenAI preservation order says even deleted conversations can be compelled. Bar associations across the country are racing to issue guidance, with the ABA's 2025 AI Task Force and the New York City Bar's Formal Opinion 2025-6 representing early but not definitive efforts to set norms. Meanwhile, courts are beginning to require litigants to disclose their use of AI, and the International Bar Association has flagged the global implications of sharing privileged information with "digital strangers."

Sam Altman's proposed "AI privilege" is an interesting idea. But it doesn't exist today, and you shouldn't use AI as if it does. As Altman himself acknowledged in 2025, there is currently no legal confidentiality when using ChatGPT, even for deeply personal conversations. (Quartz analysis)

---

The Bottom Line

AI is an extraordinary tool for understanding your legal situation, researching your options, and walking into your lawyer's office with better questions. You should use it. But if you're involved in a legal dispute, or could end up in one, you should use it with the same care you'd bring to any conversation that might be overheard.

There's no robot-client privilege. Not yet, anyway. Until there is, the layered approach isn't complicated: ask hypotheticals for general research, anonymize when you need situation-specific answers, use zero-retention or API-tier services when the stakes are high, and go local for the truly sensitive stuff. Never type in what your lawyer told you.

And remember: this article exists because someone used AI instead of talking to their attorney. Let AI help you prepare for that conversation, not replace it.

---

Sources & Further Reading

Court Rulings

United States v. Heppner — Harvard Law Review (March 2026)

SDNY First-of-its-Kind Ruling: AI-Generated Documents Not Privileged — O'Melveny

SDNY Rules AI-Generated Documents Not Protected by Privilege — Debevoise

AI, Privilege, and the Heppner Ruling — Venable

Lessons from Heppner — McDermott Will & Emery

Warner v. Gilbarco — Justia (E.D. Mich. 2026)

Michigan Court Protects AI-Assisted Work Product — Proskauer

Same Week, Different Frameworks — National Law Review

OpenAI Must Turn Over 20M ChatGPT Logs — Bloomberg Law

Legal Analysis

AI Chatbots, Privilege, and Pitfalls — Goodwin

Your AI Conversations Are Not Privileged — Jones Walker

Loose AI Prompts Sink Ships — NYSBA

Can Your AI Chat History Be Used Against You? — Fisher Phillips

Digital Strangers in Litigation — International Bar Association

When AI Conversations Are a Privilege Bomb — Ward and Smith

Courts Grapple with Privilege Implications — Cleary Gottlieb

Bar Association Guidance

ABA First Ethics Guidance on AI (2024)

NYC Bar Formal Opinion 2025-6

AI and Attorney Ethics: 50-State Survey — Justia

AI Privacy & Data Retention

Be Careful What You Tell Your AI Chatbot — Stanford HAI

Privacy Gap in Consumer AI — IAPP

Anthropic's Policy on Government Requests

Consumer Terms Update — Anthropic (Aug. 2025)

Chat Retention Policies — OpenAI

Data Retention — Anthropic

Privacy Hub — Google Gemini

Sam Altman Wants "AI Privilege" — Quartz

No Legal Confidentiality for ChatGPT — TechCrunch

Zero Data Retention Providers

Your Data in GroqCloud — Groq

Data Handling — Fireworks AI

Privacy Policy — Together AI

Zero Data Retention — OpenRouter