Overview
On February 10, 2026, Judge Jed Rakoff of the US District Court for the Southern District of New York issued a significant bench ruling in US v. Heppner, which he memorialized by order on February 17, 2026. The court held that approximately 31 documents that the defendant, Bradley Heppner, generated using a public AI tool (Claude) and then shared with his attorneys were not protected by attorney-client privilege or the work-product doctrine and therefore must be disclosed to the government. Although Heppner did not use the tool at the direction of counsel, he input information learned from his counsel into his prompts.
The documents were seized pursuant to a search warrant and were therefore already in the government's possession. Soon after the search, the defense informed prosecutors that Heppner had run queries about the investigation through Claude and argued that the resulting documents and information were protected by attorney-client privilege and the work-product doctrine. The defense sought to shield the documents, claiming Heppner created them to consult with his lawyers. Heppner's lawyers nonetheless conceded they never directed Heppner to use the AI tool and played no part in the documents' creation.
On February 6, the government moved for a ruling on whether the AI-generated documents were protected by any applicable privilege. The motion argued that the documents satisfied the requirements of neither attorney-client privilege nor the work-product doctrine because 1) Heppner's exchanges with Claude failed each element of attorney-client privilege, 2) retroactively sharing documents with counsel does not render them privileged, and 3) Heppner created the documents on his own, without counsel's direction. Heppner did not file a response before the February 10 hearing, held four days later.
At that hearing, the court agreed with the government, finding that the very act of using a public AI platform destroyed any expectation of confidentiality. The court also decided that since the documents were created by the defendant on his own initiative and did not reflect his counsel's litigation strategy, they did not qualify as protected work product. Following the ruling, on February 17, Judge Rakoff entered an order further explaining his decision.
Why Public AI Use Isn't Privileged
Judge Rakoff's decision is one of the first major rulings on the topic, but it aligns with a growing body of ethics opinions (including NYC Bar Formal Op. 2025-6) across the United States warning against potential privilege waiver from using public AI. The consensus is clear: Public AI is not a client's confidante, nor an extension of a law firm; it is a third-party service provider.
This may seem like common sense: in general, privilege requires a confidential communication between an attorney and client in anticipation of litigation or in connection with legal advice.
Judge Rakoff's decision explained that no attorney-client privilege applied because the defendant's interactions with Claude failed "at least two, if not all three" core elements of attorney‑client privilege:
- Not a communication with an attorney. Claude is not a lawyer, and no attorney directed the defendant to use it.
- No confidentiality. Claude's privacy policy expressly states that user inputs and outputs may be collected, used for training, and disclosed to third parties—including government authorities. Because users voluntarily submit information to a third‑party platform with such terms, they cannot have a reasonable expectation of confidentiality.
- No intent to obtain legal advice from counsel. Even though the defendant later shared the AI outputs with his lawyers, that does not retroactively create privilege.
Judge Rakoff further explained that to the extent the defendant input information received from counsel into Claude, that waived any existing privilege, just as sharing confidential information with any other third party would.
The decision further held that the documents did not qualify as attorney work product because they 1) were not prepared by or at the direction of counsel, but entirely on the defendant's own initiative, and 2) did not reflect counsel's mental impressions or strategy at the time they were created.
Be Strategic
It is unclear whether other courts will follow the lead of Judge Rakoff's opinion when confronted with similar circumstances. Indeed, Judge Rakoff's opinion leaves some questions unanswered. First, it is difficult to reconcile his position that the defendant had no reasonable expectation of privacy in his interactions with Claude with the common practice of permitting privilege assertions over communications made through third-party email servers, many of which have disclaimers similar to Claude's.
Second, although the decision notes that the output documents do not receive privilege protection because the defendant's sharing of privileged information with Claude effected a waiver, it does not address the scope of that waiver: whether it covers the privileged communications themselves that were input into Claude, or extends to the entire subject matter of those communications.
Finally, although the court notes that the Claude output resulted from the defendant feeding Claude certain information he received from his lawyers, it does not discuss whether the output actually discloses the substance of that privileged material or is merely based on it without revealing any of it on its face. That distinction might not have mattered given Judge Rakoff's reliance on the notion that the defendant had no reasonable expectation of privacy in what he shared with Claude. It could make a difference, however, if another court adopted the contrary position on privacy expectations but nonetheless found the output non-privileged because it did not reveal privileged communications on its face, even though it was generated from privileged documents.
In an even more recent opinion, the US District Court for the Eastern District of Michigan reached the opposite conclusion in Warner v. Gilbarco, Inc. There, the court held that a litigant's use of a consumer AI tool did not waive work‑product protection. The court reasoned that the AI‑generated materials prepared by a pro se plaintiff in anticipation of litigation qualified as work product under Federal Rule of Civil Procedure 26(b)(3), even though they were created using a public AI platform. It further explained that work‑product protection is not lost unless the disclosure is made to an adversary or in a manner likely to reach one—conditions the court found were not met by the plaintiff's use of ChatGPT. While Heppner is a criminal case and Warner a civil case, the work-product doctrine applies similarly in both contexts, making the two holdings difficult to square.
While the legal landscape is still unsettled, given the potential risk of waiver and absence of work-product protection, clients and their agents should take care when using—or considering using—public AI tools in connection with litigation. We highlight some best practices below.
Deploy Proprietary/Enterprise Models
"Enterprise" AI models (those covered by a "zero retention" or "no training" agreement) keep user data confidential within the network. To protect against waiver, parties should use proprietary, walled-off tools in which data never leaves the encrypted environment.
Update Protective Orders & Confidentiality Agreements
Ensure your litigation documents specifically address AI. For protective orders and confidentiality agreements, incorporate clauses that define "authorized AI use." Parties should stipulate that the use of private, vetted, enterprise AI tools does not constitute a waiver.
Further, in line with AAA-ICDR Guidance on AI Tools, ensure that arbitration confidentiality agreements explicitly prohibit the use of public AI by arbitrators for drafting awards or summarizing evidence without party consent.
Ethical Obligations
Lawyers have an ethical duty of technological competence. Understanding the limits of emerging technology, and advising clients accordingly, is critical. Clients and agents should be warned to strip identifying facts and privileged advice from any prompt used in a non-vetted system.
Going on the Offensive: Discovery of Opposing AI
Counsel in litigation should also consider sending discovery requests regarding opposing parties' use of AI. The Heppner ruling provides a powerful roadmap for this type of discovery. Leverage this discovery opportunity by:
- Requesting the Prompts: Discovery requests should now specifically target "all prompts, inputs, and results generated by AI tools."
- Challenging Privilege Logs: Scrutinize privilege logs for AI-generated summaries. If the tool used was a public version, the privilege has likely been waived.
- Remembering Continuous Risk: The risk of waiver persists throughout the life of a case. A single employee "cleaning up" a memo using a public AI chatbot could waive privilege over the entire underlying subject matter.
- Preparing Your Own Team: If you serve discovery requests about AI use, expect requests in kind regarding your own. Safeguard against them by practicing good AI hygiene and avoiding public models in connection with active cases or confidential information.
- Broadening Litigation Holds: Because your client's AI use may be discoverable, litigation holds should include prompts, responses, and conversations with AI tools.