
Publisher: Day Pitney Advisory
March 23, 2026

AI in Litigation: Promoting Innovation While Managing Risk

Artificial intelligence is transforming litigation practice. From reviewing and summarizing documents to generating chronologies, drafting discovery requests, preparing deposition outlines, and crafting legal arguments, AI tools facilitate tasks that would otherwise take attorneys many more hours or days. When used properly, these tools increase efficiency, reduce costs, enhance performance, and allow attorneys to focus on strategy, judgment, and advocacy.

Numerous court decisions addressing AI use have caused concern in the legal community, particularly regarding the reliability of AI-generated materials and, most recently, the implications for the attorney-client privilege and the work-product protection. But these rulings are fact-specific applications of long-standing legal principles, and the lesson they teach is not that legal professionals should retreat from using AI tools in litigation but that attorneys and those they supervise and represent must deploy them responsibly.

Recent Decisions: Context Matters

The U.S. Court of Appeals for the Fifth Circuit recently became one of the first federal appellate courts to discuss the use of AI tools in a public decision. In Fletcher v. Experian Information Solutions, Inc., No. 25-20086, 2026 WL 456842 (5th Cir. Feb. 18, 2026), the court addressed counsel's use of generative AI in drafting a court filing. The court sanctioned the plaintiff's counsel for failing to verify the accuracy of AI-generated content in her brief, which included quotations, citations, and assertions that were not supported by the case law. As many district courts and others had previously noted, the Fifth Circuit recognized that existing rules suffice to address such misconduct, even though it arises from misuse of a new technology. For example, Federal Rule of Civil Procedure 11 already requires that attorneys ensure that factual contentions and legal arguments have appropriate support. Likewise, applicable Rules of Professional Conduct preclude attorneys from asserting frivolous positions, making false statements of material fact or law to a tribunal, or engaging in dishonest conduct. And courts have always maintained the inherent authority to sanction abuses of the judicial process.

There are now hundreds of cases in state and federal courts in the U.S. in which counsel have submitted documents running afoul of these obligations because of the improper use of a generative AI tool. But we are aware of no case in which a court determined that the error lay in the use of the tool itself. Rather, the courts—with increasing frustration—have been identifying misuses resulting from insufficient understanding of the technology and calling on the bar to exercise appropriate care in using these tools.

Similarly, the ramifications of a well-publicized recent decision addressing how the use of AI tools in litigation may affect the attorney-client privilege and the work-product protection have frequently been overstated. In United States v. Heppner, No. 25-cr-503, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff held that a criminal defendant's interactions with the public version of a generative AI platform (Anthropic's Claude® platform) during a pending criminal investigation were not protected from discovery. This case is better understood as being dependent on its specific facts and circumstances than as a shot across the bow with respect to all use of generative AI tools in litigation.

The court's analysis of the privilege issue rested on three traditional legal principles:

  • An attorney-client relationship requires an attorney. The defendant's interactions with the AI platform were not communications between the defendant and his attorney; to the contrary, his attorneys were unaware of the interactions until presented with the outputs after the fact. 
  • The attorney-client privilege requires an expectation of confidentiality. The defendant used the public version of the AI platform. Anthropic's written privacy policy for that platform expressly states that the platform collects data on user inputs and platform outputs and uses such data to train its model and that Anthropic reserves the right to disclose such data to third parties, including governmental authorities.
  • The privilege protects communications made for the purpose of obtaining legal advice. The defendant's interactions with the AI tool were not made for the purpose of obtaining legal advice. The AI platform is not an attorney, and Anthropic expressly disclaims that it renders legal advice. Moreover, the defendant did not communicate with the AI platform at the direction of his counsel.

Each of these three issues likely would have been resolved differently if it had been the defense attorney using an AI tool with appropriate contractual provisions protecting an expectation of confidentiality. Indeed, Heppner might have been resolved differently but for the defendant's concession that his activities were conducted entirely independently of his counsel and without their knowledge, let alone direction. 

Judge Rakoff's conclusions with respect to the work-product doctrine might similarly have been avoided if counsel had been involved. Heppner held that the defendant's communications were not protected by the work-product doctrine because they were not prepared by or at the direction of counsel and did not reflect counsel's strategy. Heppner was a criminal case that concerned documents the government seized pursuant to a search warrant, which caused Judge Rakoff to conclude that the protections of Federal Rule of Criminal Procedure 16(b)(2)(A) did not apply. Civil litigation is different. Under Federal Rule of Civil Procedure 26(b)(3), "[o]rdinarily, a party may not discover documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative (including the other party's attorney, consultant, surety, indemnitor, insurer, or agent)." (Emphasis added.) This rule thus expressly protects party work product and is not limited to the work product of attorneys. While Judge Rakoff concluded that materials created on the defendant's own initiative, without attorney involvement, were not protected—in part because the policy impetus for the work-product doctrine is the protection of attorney mental impressions—he did not address the civil rules. In any event, Heppner did not hold that using AI always destroys traditional work-product protections. Rather, it held that under the particular facts of that criminal case—using a public AI platform without confidentiality assurances or attorney involvement—such protections did not apply. 

In a decision that was less well publicized, Magistrate Judge Anthony Patti reached a different conclusion the week before in Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026). Judge Patti denied a defendant's request for documents and information concerning the plaintiff's use of third-party AI tools in connection with that civil case. The court concluded that such information was protected by the work-product doctrine because it was prepared by a party in anticipation of litigation. Significantly, the court rejected the argument that using an AI platform (OpenAI's ChatGPT) waived the work-product protection. The court reasoned that waiver of the protection requires either disclosure to an adversary or conduct that substantially increases the likelihood that an adversary would obtain the information and that generative AI programs are tools, not persons or adversaries. 

None of these recent decisions reflects hostility toward all use of AI tools in litigation, and together, they counsel care and caution rather than avoidance. The divergent outcomes in Heppner and Warner are more fairly understood to reflect their different fact patterns, or the variation inherent in any body of jurisprudence, than a departure from the application of traditional legal principles.

The Lesson Learned: Using AI Responsibly Reduces Risk

The lesson to be learned from these and similar cases is not to abandon or retreat from using AI in litigation. The power of these tools is too great, and their future potential is exponentially greater. Rather, attorneys should employ these tools responsibly to minimize risks while preserving applicable protections. When integrating AI tools into their practice, litigators should take the following practical steps:

  1. Use confidential and secure AI platforms. Select enterprise-grade AI platforms with contractual commitments of confidentiality. The platforms should not use workspace data to train their models, should offer end-to-end encryption and strong data security protections, and should restrict third-party data sharing. Access to both inputs and AI-generated outputs should be limited to those individuals within the attorney-client relationship. Preserving the confidentiality of AI interactions strengthens any basis for asserting the attorney-client privilege or the work-product protection. 
  2. Carefully review and verify all AI-generated outputs. Every AI-generated output should be checked for accuracy and freedom from bias before it is used. Counsel remain responsible for the truthfulness and accuracy of all representations they make, especially in court filings.
  3. Consider counseling clients regarding their use of AI tools for litigation. Because, as Heppner illustrates, clients may run additional risks when they use AI tools, especially independently, litigators should consider providing guidance to their clients. Clients' use of AI tools, too, is better protected when undertaken at counsel's direction, or at least in close coordination with counsel, and for the express purpose of litigation. Further, litigators may wish to counsel their clients to use paid tools with contractual confidentiality assurances.

Conclusion: Concern Is Not a Reason for Retreat

Calls to restrict AI use in litigation are misguided. So, too, are edicts requiring attorneys to disclose their use of AI tools in their work. Such demands overlook an important point: fundamental rules and ethical duties already govern the practice of law. AI tools, used responsibly, may improve the practice of law—just as online research databases, electronic discovery software, and remote meeting platforms, all of which raised their own concerns, have done before. Litigators who integrate AI tools thoughtfully into their practice will deliver more efficient and effective representation; those who shun these tools will fall behind, and those who use them without sufficient care will fare worse. Recent court decisions do not suggest that using AI tools is incompatible with existing legal principles or ethical duties. Rather, they underscore that these principles and duties require attorneys to exercise care—and those who do so can harness the promise of AI while minimizing its risks.

Authors

Jonathan B. Tropp
Partner
New Haven, CT | (203) 977-7337

Jeffrey P. Mueller
Partner
New Haven, CT | (860) 275-0164
New York, NY | (212) 297-5800
