Litigation drafting with AI: verification that survives standing orders
How to build an AI-assisted drafting workflow your associates use, your partners trust, and federal-district standing orders accept, without becoming the next Mata v. Avianca story.
A litigation associate at a 28-attorney firm spends 20 hours on a motion to dismiss in a complex commercial case. A national firm produces the same motion in 5 hours. The associate at the midsize firm is not less capable. The national firm has built an AI-assisted drafting workflow that the midsize firm has not, along with the verification architecture that lets the workflow survive a federal-district standing order, an ABA Formal Opinion 512 review, and a Rule 5.3 supervision audit.
The midsize firm’s litigation practice group leader has watched this gap widen over 24 months. Every conversation with the managing partner about closing it ends in the same place: we cannot afford to be the next sanctions story. Mata v. Avianca ($5,000 Rule 11 sanction, S.D.N.Y., June 22, 2023) is the case every managing partner remembers, and the K&L Gates matter and the other follow-on sanctions stories reinforce the fear. The firm has not built the workflow because building it badly is worse than not building it at all.
This piece walks through what the verification architecture for AI-assisted litigation drafting actually requires: the workflow stages, the supervision discipline, the standing-order compliance protocol, and the engagement-letter language that holds the system together.
The problem in litigation-PG vocabulary
The litigation practice group leader sees three pressures converge:
- Associate productivity. The firm’s associates produce motions, briefs, and discovery responses at a pace set by partner availability for review, not by the work itself. A junior associate’s first-draft brief takes 18–25 hours; the same brief at a national firm with AI tooling takes 5–8 hours plus equivalent partner review.
- Client AI pressure. General counsel at the firm’s institutional clients are asking — sometimes politely, sometimes pointedly — what the firm is doing on AI. The conversation is moving from “are you using AI” to “what does your AI workflow look like and how is it supervised.”
- Sanctions exposure. A growing number of federal districts have at least one judge with a standing order on AI use, typically requiring disclosure when AI was used in drafting and sometimes prohibiting AI-assisted citations without independent verification. The disclosure is not the problem; the unverified citation is.
The litigation PGL needs a workflow that compresses associate drafting time, satisfies the client conversation, and survives both ABA Opinion 512 review and the most stringent federal-district standing order in the firm’s geography.
Why this is harder than it looks
Three failure modes account for most of the firms that have abandoned AI-assisted drafting after a pilot:
- The associate is given the tool without the verification discipline. The associate uses Harvey or CoCounsel to generate a draft, the draft includes a citation that turns out to be a hallucination, the verifier (often the same associate) does not know how to spot the failure mode, and the citation lands in the partner’s review draft. Sometimes the partner catches it. Sometimes the partner does not. The first time the latter happens in a filing, the firm’s pilot ends.
- The partner reviews the AI-generated draft as if it were a human first draft. The partner’s review pattern was calibrated to associate drafts: the partner trusts certain things (the structure, the framing, the citation count) and challenges others (the strategic argument, the tactical sequencing). AI-generated drafts invert this. The structure can be well-formed while the citations are unsound. A partner reviewing an AI draft with their associate-draft review pattern misses the AI-specific failure modes.
- The standing-order compliance protocol does not exist. The firm deploys the tool, an associate uses it on a matter, the assigned judge has a standing order requiring AI disclosure, and the disclosure is missed because the firm has no process to surface the standing order at matter intake. The Rule 3.3 candor exposure is real, even though the underlying work product was sound.
The mechanism
A defensible AI-assisted litigation drafting workflow has seven stages. Every stage has a named role and a documented output.
The two stages that distinguish a defensible workflow from a fragile one are stages 3 (independent cite-check) and 7 (standing-order check).
Stage 3 — independent cite-check. Every citation in the AI-generated draft is verified against the primary source by the associate. “Verified” means the associate has read the cited case, confirmed the holding, and confirmed the cite supports the proposition for which it is cited. The cite-check is documented in the matter file with a one-line confirmation per cite. This is the discipline that closes the Mata exposure.
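To make the Stage 3 documentation concrete, here is a minimal sketch of what a structured per-cite confirmation log could look like, assuming the firm keeps the matter-file record as a simple spreadsheet-style file. The field names, the CSV format, and the sample entry (including the placeholder citation) are illustrative, not a prescribed form.

```python
from dataclasses import dataclass, asdict, fields
from datetime import date
import csv

@dataclass
class CiteCheck:
    """One line per citation, recording the Stage 3 confirmation."""
    matter_id: str            # firm matter number
    citation: str             # citation exactly as it appears in the draft
    proposition: str          # proposition the cite is offered to support
    source_read: bool         # associate read the cited case in full
    holding_confirmed: bool   # holding supports the proposition as cited
    checked_by: str           # verifying associate
    checked_on: str           # date of the check (ISO format)
    note: str = ""            # one-line confirmation or flag for the partner

def log_cite_checks(path: str, checks: list[CiteCheck]) -> None:
    """Append Stage 3 confirmations to the matter's cite-check log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(CiteCheck)])
        if f.tell() == 0:          # new log file: write the header row once
            writer.writeheader()
        writer.writerows(asdict(c) for c in checks)

# Illustrative entry with a placeholder citation.
log_cite_checks("matter_0147_cite_checks.csv", [
    CiteCheck(
        matter_id="2025-0147",
        citation="Example v. Example Corp., 000 F.3d 000 (0th Cir. 2020)",  # placeholder, not a real case
        proposition="Standard for dismissal under Rule 12(b)(6).",
        source_read=True,
        holding_confirmed=True,
        checked_by="A. Associate",
        checked_on=date.today().isoformat(),
        note="Read in full; pin cite confirmed.",
    ),
])
```

A per-cite log along these lines is what turns "the associate checked the citations" into a documented output that the responsible partner, and if necessary a court, can actually see.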
Stage 7 — standing-order check. Before filing, the associate or the responsible partner confirms (a) the assigned judge’s standing order on AI use, (b) the firm’s disclosure obligations under that order, (c) the engagement-letter consent in place for the matter, and (d) any client-imposed AI restrictions. Disclosure is included in the filing where required. The check is documented.
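Here is a comparable sketch of the Stage 7 check as a pre-filing gate, assuming the firm wants the four confirmations recorded before anything AI-assisted goes out the door. The structure mirrors items (a) through (d) above; the names and the hard stop are illustrative choices, not a required design.

```python
from dataclasses import dataclass

@dataclass
class PreFilingAICheck:
    """Stage 7 confirmations, recorded before filing."""
    standing_order_reviewed: bool          # (a) assigned judge's AI standing order
    disclosure_obligation_confirmed: bool  # (b) disclosure duties under that order
    engagement_consent_on_file: bool       # (c) engagement-letter AI consent for the matter
    client_restrictions_cleared: bool      # (d) client-imposed AI restrictions checked
    disclosure_included_where_required: bool
    documented_by: str

def confirm_ready_to_file(check: PreFilingAICheck) -> None:
    """Raise if any element of the Stage 7 check is unconfirmed."""
    missing = [name for name, value in vars(check).items()
               if isinstance(value, bool) and not value]
    if missing:
        raise RuntimeError("Stage 7 check incomplete: " + ", ".join(missing))

confirm_ready_to_file(PreFilingAICheck(
    standing_order_reviewed=True,
    disclosure_obligation_confirmed=True,
    engagement_consent_on_file=True,
    client_restrictions_cleared=True,
    disclosure_included_where_required=True,
    documented_by="Responsible partner",
))  # silent when complete; stops the filing workflow otherwise
```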
Evidence
The vendor landscape:
- Harvey AI — deployed at 28% of Am Law 100 by 2024 (named: A&O Shearman, Macfarlanes). Midsize-firm pricing tier in 2025 is roughly $50K–$150K per year depending on attorney count and module mix.
- Casetext / Thomson Reuters CoCounsel — named deployments at Fisher Phillips, DLA Piper, Eversheds Sutherland, Bowman and Brooke, Orrick. Midsize-firm tier accessible.
- Lexis+ AI with Protégé — increasingly competitive at midsize-firm scale.
- Paxton, Alexi (litigation-specific), Briefpoint (discovery-specific) — credible alternatives for narrower workflows.
Vendor ROI claims have wide variance — CoCounsel reports roughly 30–80% time savings on research and 15–40% on drafting depending on matter type and verification depth. Field observation: midsize-firm litigation practices that have deployed verification-disciplined workflows report 35–55% drafting-time reduction at sustained quality. Practices that deployed without the verification discipline report either much larger time savings (often paired with quality erosion) or pilot abandonment within 6 months.
What to do next
The decision is not whether to build this workflow. The decision is whether to build it now, with verification discipline, or to wait for a competitor to take more of the firm’s institutional clients first.
- Week 1–2: Workflow design. Map the firm's actual litigation drafting workflow. Identify the stages where AI fits. Design the verification discipline per stage. Draft the supervision protocol per Rule 5.3.
- Week 3–4: Standing-order infrastructure. Build the standing-order tracking process. Integrate with matter intake. Draft the disclosure language pack adaptable to the typical standing orders in the firm's geography (see the sketch after this list).
- Week 5–6: Engagement-letter integration. Integrate the AI consent language into the firm's engagement-letter pack. Run the active-matter amendment cycle for matters in flight.
- Week 7–8: Pilot deployment with associate training. Pilot on 3–5 matters. Train the associates on the verification discipline. Document the substantive-changes pattern. Run the first standing-order check on a real filing.
- Week 9–12: Practice-wide rollout. Roll the workflow across the litigation practice. Build the audit cadence. Produce the malpractice-carrier governance summary.
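For the Week 3–4 standing-order infrastructure, one way to picture the intake integration is a firm-maintained table of judges' AI standing orders that the matter-intake step queries, as in the sketch below. The judge, district, dates, and order terms are invented for illustration; a real process would source each order from the district's own published materials and keep it on a review cadence.

```python
from dataclasses import dataclass

@dataclass
class StandingOrder:
    judge: str
    district: str
    requires_ai_disclosure: bool
    requires_independent_cite_verification: bool
    last_reviewed: str    # date the firm last confirmed the order's current text

# Illustrative, firm-maintained entries; not real orders.
STANDING_ORDERS: dict[tuple[str, str], StandingOrder] = {
    ("Doe, J.", "N.D. Ill."): StandingOrder(
        judge="Doe, J.",
        district="N.D. Ill.",
        requires_ai_disclosure=True,
        requires_independent_cite_verification=True,
        last_reviewed="2025-01-15",
    ),
}

def intake_standing_order_note(judge: str, district: str) -> str:
    """Return the note that matter intake should attach to the new matter."""
    order = STANDING_ORDERS.get((judge, district))
    if order is None:
        return ("No AI standing order on file for this judge; confirm against "
                "the district's published orders before the first filing.")
    flags = []
    if order.requires_ai_disclosure:
        flags.append("AI-use disclosure required in filings")
    if order.requires_independent_cite_verification:
        flags.append("independent citation verification must be certified")
    return (f"AI standing order on file (last reviewed {order.last_reviewed}): "
            + "; ".join(flags))

print(intake_standing_order_note("Doe, J.", "N.D. Ill."))
```

The point of the sketch is the trigger, not the tooling: the standing-order question gets asked at matter intake, before the first filing, rather than discovered after it.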