Legal Alert

Standing Orders: Courts’ Nascent Governance of AI Practices

by Charley F. Brown, Neal Walters, Casey G. Watkins, and Erin Blasberg
February 20, 2024


With AI use on the rise, judges across the country are updating their standing orders to address how the technology may (or may not) be used in their courtrooms. As rules regarding AI use make their way into more standing orders, attorneys and litigants must familiarize themselves with the local rules and any applicable standing orders of the courts before which they appear, and check those rules frequently for updates.

As artificial intelligence (AI) continues to reshape more facets of everyday life—including litigation—the judiciary has taken notice. Following several incidents last year where lawyers faced admonishment for improperly using generative AI in court filings, judges across the country have issued standing orders addressing AI. This article offers a snapshot of how judges have addressed AI usage to date, highlighting the practical implications for litigators.

Courts are not uniform in their stance on AI: the standing orders issued to date vary widely with respect to applicability, requirements, and repercussions.

Scope of Standing Orders

While standing orders addressing the use of generative AI seem to dominate, some orders apply to AI generally. For instance, the orders of Judge Michael Baylson of the Eastern District of Pennsylvania and Magistrate Judge Jeffrey Cole of the Northern District of Illinois refer generally to “using AI.” Meanwhile, Northern District of California Judge Araceli Martínez-Olguín’s standing order explicitly addresses the use of “AI-generated content.” While rare, complete prohibitions on AI use are not unheard of. Judge Michael J. Newman of the Southern District of Ohio prohibits any AI usage across the board, with an exception for “information gathered from legal search engines.”

AI for Drafting Versus Research

Orders also vary as to whether they apply to AI used for drafting filings, research, or any AI usage at all. Judge Cole obligates disclosure of whether a litigant uses any “AI tool” for “research and/or drafting.” On the other hand, orders may only apply to the use of generative AI for drafting as seen in U.S. Court of International Trade Judge Stephen Alexander Vaden’s order, which applies to “submission[s] . . . contain[ing] text drafted with the assistance of a generative artificial intelligence program.”

Disclosures and Certifications

Some orders require disclosure of AI usage, others certification, some both, and some nothing at all. For filings drafted using generative AI, Judge Vaden calls for an accompanying disclosure identifying which AI tool was used and where in the filing it was used. For these filings, Judge Vaden also requires a certification that no confidential or proprietary information was disclosed to an unauthorized party through the use of AI tools. On the other hand, judges like Judge Iain D. Johnston of the Northern District of Illinois may simply remind those before the court of their duties under Federal Rules of Civil Procedure 11(b) and 26(g), without imposing additional requirements such as disclosure or certification.

Persons Subject to Orders

Further, while some orders apply to everyone appearing before the court, others limit their applicability to specific groups. Several judges, including Judge Donald W. Molloy of the District of Montana, limit their orders to counsel appearing pro hac vice. The Eastern District of Missouri’s district-wide prohibition on generative AI usage applies only to pro se litigants.

To date, only the Eastern District of Texas has addressed the use of generative AI in its local rules, which prohibit the use of generative AI for pro se litigants, and obligate counsel to review and verify any AI-generated content. On January 4, 2024, the United States Court of Appeals for the Fifth Circuit closed the comment period for a new rule that would require counsel to certify either the nonuse of generative AI in drafting or that anything drafted by generative AI has been reviewed by a human. If implemented, a “material misrepresentation” under the rule could lead to the document being stricken or even sanctions.

Penalties for Non-Compliance

Non-compliance with AI usage rules could lead to serious consequences. Judge Martínez-Olguín’s order provides for sanctions for failure to comply with the certification requirement. Violators of Judge Newman’s AI rules could face not only economic sanctions, but also stricken pleadings, contempt, or dismissal of the suit in its entirety.

Risks and Considerations

AI tools have the potential to provide efficiencies and benefits to clients and attorneys on a large scale, but they also present risks. Inputting information into AI tools can raise privacy, confidentiality, and attorney-client privilege concerns. Some ambiguity also surrounds what exactly is subject to disclosure and/or certification requirements. New AI-enabled offerings from legal research engines use AI to generate case summaries and search results. Unless specifically addressed in the standing order, it is unclear whether the use of these search engines is subject to any AI rules.

As rules regarding AI use make their way into more standing orders, it is crucial for litigators to adapt and act proactively. Indeed, as Comment 8 to Model Rule 1.1 notes, “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Litigators must regularly review and familiarize themselves with the local rules and any applicable standing orders to ensure they are up to date with current requirements and expectations.

Careful attention must be paid to any disclosure and certification requirements, whether that involves disclosing the use of AI tools for drafting or certifying that no confidential or proprietary information has been compromised. The consequences of non-compliance, ranging from sanctions to dismissal, underscore the importance of adhering to these requirements. It also remains critical for litigators using AI tools to discharge their ethical obligation to provide competent representation by exercising reasonable diligence to ensure the accuracy and reliability of the work product those tools generate.

Ultimately, change is hard. While the authors did not use generative AI to prepare this Alert, it is inevitable that the responsible use of AI will be a meaningful component of legal practice in the not-too-distant future.

Copyright © 2024 by Ballard Spahr LLP.
(No claim to original U.S. government material.)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the author and publisher.

This alert is a periodic publication of Ballard Spahr LLP and is intended to notify recipients of new developments in the law. It should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own attorney concerning your situation and specific legal questions you have.