Management consultants are pushing the promise of materially increased profits due to AI-created efficiencies. Businesses big and small across a wide range of industries, including commercial real estate, are simultaneously reducing their workforces and encouraging their remaining employees to harness AI for sensitive and menial tasks alike. And AI is becoming ubiquitous, especially with younger members of the workforce trying to prove their worth in the new AI age.
But what if the problems that inexperienced members of the workforce ask AI to solve were subject to court scrutiny? Would this not be the business equivalent of having one’s online search history revealed to the masses? Is this type of transparency good or dangerous?
However one chooses to answer these questions, the legal landscape surrounding AI use is beginning to take shape. On February 17, 2026, Judge Jed S. Rakoff of the Southern District of New York ruled that extremely sensitive and potentially incriminating open AI searches were protected by neither the attorney-client privilege nor the work product doctrine.[1]
The theory underpinning Judge Rakoff’s decision is not novel: a defendant — while aware he was under investigation and without the advice of counsel — could not protect his sensitive communications to a third party (an open AI platform called Claude) by later trying to cloak those communications in either the attorney-client privilege or the work product doctrine. But the application of old legal theories to new technologies has a history of posing great risks to early adopters. And, in the wake of this ruling, it is imperative that businesses act quickly to create AI acceptable use policies to protect against those risks. Failure to do so will lead to increased litigation risk, disclosure of trade secrets and other sensitive information, and embarrassment.
Background
In United States v. Heppner, during a search of the defendant’s home, the FBI discovered multiple documents memorializing communications between the defendant and the consumer generative AI platform Claude. The defendant’s communications with AI were made (i) to create possible strategies to defend against the government’s indictment; (ii) after the defendant learned that he was the subject of the government’s investigation; and (iii) without the prompting of counsel.[2]
Given the sensitive nature of the defendant’s AI searches regarding available legal strategies, the defendant’s counsel attempted to assert privilege over the defendant’s AI communications. The government, in turn, moved for a ruling that the defendant’s AI communications were not protected by either the attorney-client privilege or the work product doctrine.[3] In a landmark decision, the court agreed with the government and ruled that the defendant’s AI communications were not privileged.[4]
Attorney-Client Privilege
Although the court recognized that the defendant later sent his counsel his AI communications, the court held that the defendant’s communications with AI were not protected by the attorney-client privilege.[5] For the attorney-client privilege to apply to a communication, the communication must be (i) between a client and their attorney; (ii) intended to be, and actually be, kept confidential; and (iii) for the purpose of obtaining or providing legal advice.[6]
Here, the defendant’s communications with AI failed to satisfy each of these elements. First, Claude (or any generative AI program) is not an attorney. The court held that for the privilege to be invoked, “a trusting human relationship” must exist, and such relationship cannot exist between an AI user and the AI platform.[7]
Second, the court held that the defendant’s communications with Claude were not confidential — and that the defendant could not have expected them to be confidential — due to the AI platform’s policies of retaining the data generated in the normal course of business and using the user’s inputs and its outputs to train the AI.[8] The court also rejected the analogy to a client preparing notes to share with his attorney: here, the defendant prepared notes but shared them with a third party, Claude, before sharing them with his counsel.[9]
Finally, the defendant’s communications with AI could not be protected by the attorney-client privilege because he did not communicate with the AI for the purpose of obtaining legal advice.[10] While he may have communicated with Claude for the “express purpose of talking to counsel,” the defendant did not do so at the direction of counsel, and Claude itself states that it does not provide legal advice.[11]
Because the defendant’s AI communications failed to satisfy any element of the attorney-client privilege, the court held that the privilege could not be invoked and that the government could discover the defendant’s AI communications in a criminal case with the defendant’s liberty at stake.
Work Product Doctrine
The court also held that the work product doctrine did not apply to the defendant’s communications with AI. Whether or not they were prepared in anticipation of litigation, the defendant’s AI communications were not made “by or at the behest of counsel” and did not reflect the defendant’s counsel’s strategy.[12]
The court stated that while the work product doctrine can sometimes be extended to apply to documents created by non-lawyers, the court’s view is that the work product doctrine’s true purpose is to protect lawyers’ mental processes.[13] Here, the defendant “acted on his own” in creating the AI communications, which did not disclose his counsel’s strategy.[14]
The court therefore ruled that the defendant’s communications with Claude were not protected work product.
Impact on the Commercial Real Estate Industry
While its ruling examined the defendant’s specific communications with Claude, the court’s reasoning could be read to apply, at a minimum, to all communications with open AI systems made without the express involvement of counsel. Scarier still, at a maximum, the court’s reasoning could be read to apply to all communications with open AI systems — even those made in concert with counsel. That is because the court’s analysis of the second privilege element — that the communication be kept confidential — suggests that the element can never be met, given open AI platforms’ policies of retaining the data generated in the normal course of business and using the user’s inputs and its outputs to train the AI. AI users should therefore be cautious about when and how they use AI, especially regarding topics that could become the subject of litigation.
Enter the special servicing of commercial real estate in this economic and political climate. Depending on the reporting, approximately $1 trillion in commercial real estate debt is coming due in the next year.[15] While many remain optimistic that this “maturity wall” will be cleared, the fact remains that flesh-and-blood human beings will be the ones doing the clearing. That means pressure — pressure on everyone from entry-level analysts up to the top-level credit committees making decisions on large, complex, and voluminous resolutions. That pressure coincides with high turnover in an industry where hundreds of billions of dollars are at stake and litigation is frequent. In an effort to respond quickly to those pressures, people may turn to AI as a panacea, only to find that they have turned over copious amounts of proprietary information when later sued in connection with a failed workout. Acting quickly and decisively to enact protocols for the use of AI in this context remains essential.
While AI policies differ greatly, given this recent decision, it is best practice to avoid disclosing any confidential information in AI prompts. At a minimum, business leaders and their counsel must train employees on how best to prompt AI, provide sample acceptable prompts, and clarify the business’s expectations for what information can and cannot be shared with AI. It is also important to emphasize that while generative AI models can appear to be trusted advisors, AI systems cannot provide legal advice and — unless they are closed, proprietary systems — do not keep the information users provide confidential.
AI is not your attorney. The Southern District of New York affirmed this principle in United States v. Heppner. And AI itself discloses it when users attempt to solicit legal advice. So, think twice before sharing sensitive information with AI and remember that what you tell AI can and will be used against you.
[1] United States v. Heppner, No. 25 CR. 503 (JSR), 2026 WL 436479 at *1 (S.D.N.Y. Feb. 17, 2026).
[2] Id.
[3] Id. at *2.
[4] Id.
[5] Id. at *3.
[6] Id. at *2.
[7] Id.
[8] Id.
[9] Id.
[10] Id. at *3.
[11] Id.
[12] Id.
[13] Id. at *4.
[14] Id.
[15] Mark Heschmeyer, Why Commercial Property Pros Say a Looming $1.26 Trillion Debt Wall Can Be Scaled, CoStar (Sept. 24, 2025, 4:43 P.M.), https://www.costar.com/article/1122236114/why-commercial-property-pros-say-a-looming-1-26-trillion-debt-wall-can-be-scaled.
