Summary
In the first ruling of its kind, a New York federal district court held that information entered into a publicly available AI platform was not protected by the attorney-client privilege or the work product doctrine. Specifically, the court found that standard AI privacy policies precluded any reasonable expectation of confidentiality. The decision in United States v. Heppner, along with subsequent federal court decisions, has broad implications for how organizations and individuals use consumer AI tools in the legal context, especially when companies later want to protect those communications and work product in litigation.
A federal court in New York recently issued what appears to be the first ruling in the country by a district court judge to directly address a question that has been quietly looming over the growing use of AI tools in the legal context: Can your conversations with a publicly available AI platform be protected by attorney-client privilege or the work product doctrine? According to Judge Jed S. Rakoff in the Southern District of New York, the answer is no. While the ruling is a bit more nuanced, the court’s reasoning nonetheless has broad implications for clients, in-house counsel, and businesses alike.
Background of Heppner
In United States v. Heppner, Bradley Heppner, a former corporate executive charged with securities fraud and related offenses, used an AI platform to prepare approximately 31 documents after he learned he was the target of a federal grand jury investigation. 25 Cr. 503, slip op. at 2–3 (S.D.N.Y. Feb. 17, 2026). The documents consisted of reports outlining potential defense strategies and legal arguments. He later argued those documents were protected from government review under the attorney-client privilege and the work product doctrine, claiming he created them in anticipation of sharing them with his lawyers. The court rejected both arguments Heppner proffered in his effort to keep his AI searches confidential, and its analysis is worth understanding in some depth.
Attorney-Client Privilege: Three Strikes
To qualify for attorney-client protection, a communication must be (1) between a client and an attorney, (2) kept confidential, and (3) made for the purpose of obtaining legal advice. The court found the AI documents failed on all three prongs:
1. Communications Between a Client and an Attorney. AI tools are not lawyers. That point alone, the court said, was enough to dispose of the privilege claim. No attorney-client relationship can exist with an AI platform, and recognized privileges depend on “a trusting human relationship” with a licensed professional who owes fiduciary duties.
2. Confidential Communications. When analyzing the second prong, the court held that the communications were not confidential. Critically, the platform’s privacy policy, which users agree to when using its public consumer services, discloses that the company collects both user inputs and AI outputs, may use them to train the model, and reserves the right to share data with third parties, including government regulators. The court emphasized that Heppner could have had no reasonable expectation of confidentiality given what he agreed to. This point is especially important: it isn’t just that Heppner spoke to a third party; it is that the terms he accepted affirmatively contemplated disclosure to authorities.
3. Communications Made for the Purpose of Obtaining Legal Advice. Although the question was close, the court found Heppner did not use AI to obtain legal advice. Critically, Heppner’s lawyers admitted they never directed him to use the AI platform. And the AI platform disclaims providing legal advice and recommends consulting a qualified attorney.
Work Product Doctrine: Same Result
The work product doctrine protects materials prepared by or at the direction of an attorney in anticipation of litigation.
Here, Heppner’s counsel conceded the documents were prepared entirely on Heppner’s own initiative, not at his counsel’s direction. And while the AI documents may have later influenced defense strategy, the court found they did not reflect counsel’s strategy at the time they were created. Without that connection to counsel’s mental processes, the doctrine did not apply.
A Split in Federal Court Rulings
The law here is not settled. In Warner v. Gilbarco, Inc., No. 2:24-cv-12333-GAD-APP (E.D. Mich. Feb. 10, 2026), coincidentally decided around the same time as Heppner, a federal Magistrate Judge held that a pro se plaintiff’s use of another consumer AI chatbot was protected work product, denying the defendants’ motion to compel plaintiff’s AI materials and rejecting the argument that using an AI chatbot waived work product protection. The court explained work product waiver requires disclosure “to an adversary or in a way likely to get in an adversary’s hand,” and generative AI programs are “tools, not persons, even if they may have administrators somewhere in the background.” As these divergent cases show, these situations warrant careful analysis and caution.
The Confidentiality Problem Is the Most Urgent Takeaway
Of the Heppner court’s three grounds for denying privilege, the confidentiality analysis has the widest potential reach. It applies regardless of whether a user has a lawyer involved at all. If you are using a publicly available AI platform and have agreed to its standard privacy policy, you may well have consented to the platform retaining your inputs and potentially sharing them. That means anything you type into a consumer AI tool (such as sensitive business strategy, internal deliberations, legal concerns, confidential facts, financial information) may not be as private as you assume.
As Ballard Spahr’s HR Law Watch blog recently noted, the Department of Labor’s (DOL) new “AI Literacy” Framework calls on workers and employers alike to responsibly use AI to protect confidential information, prevent unethical applications or other misuse, and comply with both organizational and legal rules. The DOL framework explicitly acknowledges that both AI inputs and outputs carry risk, and that safeguards should be observed. Heppner illustrates what happens when those safeguards are not in place. Given the framework’s call for responsible AI use by employees, employers should implement appropriate training for their personnel.
Another federal district court decision in March further emphasizes the need to consider confidentiality in use of AI technology. In Jeffries v. Harcros Chemicals, Inc., No. 25-2569-KHV-ADM, 2026 WL 820218 (D. Kan. Mar. 25, 2026), the court addressed a plaintiff’s proposal to use an open AI tool to assist with eDiscovery tasks, specifically, to process and review the large volume of documents produced in discovery. The court declined to permit the use of an open AI tool and instead ordered that only closed AI tools could be used, granting the defendants’ request for an amended protective order. The court’s reasoning is instructive for any company considering the use of AI in the context of litigation.
The court’s decision rested on two key concerns. First, submitting discovery materials to an open AI tool risked exposing massive amounts of data in violation of U.S. data privacy laws and the General Data Protection Regulation (GDPR), which set a high standard for protecting individuals’ information and require documented consent that the defendant’s employees, contractors, and correspondents had never given. Second, the court noted that the documents at issue came from entities in a critical infrastructure industry, making the risk of disclosure particularly acute.
The court also flagged a broader concern: Allowing parties to upload litigation documents to an open AI platform could discourage complete document productions, as parties might respond by under-producing or over-redacting to avoid exposure. The court’s solution was not to ban AI in discovery, but to draw a clear line between open and closed AI platforms, ultimately finding that limiting use to closed tools would better facilitate the discovery process.
A Note for Clients Anticipating or Currently Involved in Litigation
If your organization anticipates or is currently involved in litigation, recent court cases dealing with AI provide a clear warning: Do not use a publicly available consumer version of an AI platform to organize your thoughts, draft strategy documents, or work through legal questions on your own. Those communications are at significant risk of being found not privileged, and under the privacy policies you have already agreed to, they may not even be confidential.
Consult your attorney before using AI in any context related to pending or anticipated legal proceedings. Before deploying any AI tool in connection with litigation, work with legal counsel to assess whether:
- the platform is open or closed, and what that means for your data;
- your use implicates data privacy obligations, including GDPR;
- a protective order should be in place before AI tools enter the workflow; and
- the individuals whose data may be submitted to the platform have consented.
Looking Ahead
As Judge Rakoff noted in closing in Heppner, AI’s novelty does not exempt its use from longstanding legal principles. Courts will continue applying existing privilege frameworks to new technology, and the results may not favor those who assumed their AI conversations were private. The courts are still in the early days of grappling with these questions, but the result in Heppner makes clear that the answers matter now, and the consequences carry real legal risks.
Copyright © 2026 by Ballard Spahr LLP.
www.ballardspahr.com
(No claim to original U.S. government material.)
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the author and publisher.
This alert is a periodic publication of Ballard Spahr LLP and is intended to notify recipients of new developments in the law. It should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own attorney concerning your situation and specific legal questions you have.