
ABA Formal Opinion 512 in practice: the midsize-firm playbook

What Opinion 512 actually requires of a 5–50 attorney firm, and how to translate it into engagement-letter language, supervision discipline, and a malpractice-carrier-ready governance summary your ethics partner can sign without hedging.

The ethics partner at a 28-attorney firm reads ABA Formal Opinion 512 on the morning it drops, July 29, 2024. Two things are clear within ten minutes. First, the Opinion is not anti-AI. It is a structured map of how the existing Model Rules apply to AI use, and the map is workable. Second, every standard engagement letter the firm has signed in the last five years probably does not satisfy the Opinion’s specific-consent standard for AI use on client confidences. The boilerplate “we may use technology to assist in our representation” language the firm has been relying on is explicitly insufficient.

She walks down the hall to the managing partner’s office. The conversation lasts three minutes. The managing partner asks: what do we need to do? The ethics partner answers: I need a week. It is now twenty months later and the engagement-letter language pack still does not exist. Two associates use Harvey through a personal subscription. One paralegal is using ChatGPT to summarize depositions. The malpractice carrier has added an AI-governance question to the renewal questionnaire. The managing partner has not yet found a way to walk into the partners meeting with a complete answer.

This is the situation almost every 5–50 attorney firm is sitting in right now. What follows: what Opinion 512 actually requires of a firm at that scale, what the operative state-bar opinions add, what a defensible practice looks like in 2026, and what to do in the next 60 days.

The problem in ethics-partner vocabulary

The ethics partner at a midsize firm is being asked the same three things, in roughly this order:

  1. Are we Opinion 512 compliant? — usually from the managing partner, in advance of a partners meeting where the firm is being asked to approve an AI tool deployment.
  2. Will the malpractice carrier renew us at the same premium? — usually from the COO, in advance of the next renewal cycle.
  3. Are we one bad citation away from being the next Mata v. Avianca story? — usually from the litigation practice group leader, in the wake of every fresh sanctions story that runs in Above the Law.

Each question is a different version of the same underlying issue: associates and paralegals are using AI tools on client work, the firm has no consolidated practice policy, the engagement letters in active matters do not reflect the AI use, and the supervision discipline that the Model Rules require is operating informally — which means it does not exist in any way the firm could demonstrate.

The risk is not that the AI is bad. The risk is that if the firm faced a sanctions hearing, a malpractice claim, or a state-bar inquiry, the firm could not produce the artifacts that demonstrate it had operated within the Opinion’s framework — the policy, the engagement-letter language, the supervision documentation, the training records.

What Opinion 512 actually says

The Opinion runs to 19 pages. The substance is six Model Rules and a small number of operative requirements per rule.

The six Model Rules and what each requires:

Rule 1.1 (Competence), including Comment 8 on technology competence. The lawyer must understand the AI tool’s capabilities and limitations as relevant to the matter. This is not a requirement to be a machine learning expert. It is a requirement to know what the tool does, where it fails, and when to rely on it versus when to verify independently. The duty applies to every lawyer using the tool (partner, associate, of counsel), not just the deploying partner.

Rule 1.6 (Confidentiality). The lawyer’s duty under Rule 1.6(c) to make “reasonable efforts” to prevent unauthorized disclosure applies to AI tool use. Inputs to the tool that include client confidences require the tool’s data-handling practices to support confidentiality — meaning the lawyer must know whether the tool trains on inputs, whether inputs are stored, whether they are accessible to the vendor or to other tenants. The Opinion explicitly notes that some tools are appropriate for confidential inputs and some are not, and the lawyer must know which is which.

Rule 1.4 (Communication). The Opinion requires lawyers to consider whether to inform clients about AI use in their matter. The Opinion does not require disclosure in every case, but it requires the lawyer to consider it — and notes that disclosure is often the right answer, particularly when the AI use is material to the work product or to the fee.

Rules 3.1 (Meritorious Claims) and 3.3 (Candor Toward the Tribunal). The lawyer must verify AI-generated content before filing or asserting it. AI hallucinations — fabricated case citations, misstated holdings, invented facts — that reach a tribunal violate Rule 3.3. The Mata v. Avianca sanction (S.D.N.Y., June 2023) is the on-point precedent and the case every managing partner remembers.

Rules 5.1 (Responsibilities of Partners) and 5.3 (Responsibilities Regarding Nonlawyer Assistance). This is the most consequential interpretive move in the Opinion. The Opinion treats AI as a nonlawyer assistant for Rule 5.3 purposes. This means partners have direct supervisory obligations for AI use by the firm — comparable to supervisory obligations for paralegals, secretaries, contract reviewers, and outside services. The supervision must be reasonable and effective, must include training, and must produce documentation sufficient to demonstrate the supervision occurred.

Rule 1.5 (Reasonable Fees). A lawyer billing a client for time spent using AI must do so consistent with the fee agreement. The Opinion permits the lawyer to bill for the time spent reviewing AI output, learning the tool, and integrating the tool into the matter — but rejects billing for time the AI saved (e.g., billing for the four hours an AI tool replaced). State bar opinions extend this: the California State Bar Practical Guidance (November 2023) is most explicit that a lawyer may not bill for AI-saved time but may bill for time spent reviewing AI-assisted work product.

The Opinion also addresses engagement-letter consent directly, and this is the part most firms misread.

The engagement-letter language pack

Opinion 512 explicitly rejects boilerplate consent. The operative standard: boilerplate consent included in engagement letters will not be adequate to satisfy a lawyer’s obligations under Rule 1.6 when the lawyer uses GAI tools that involve the disclosure of client confidences.

What this means in practice: the engagement letter must be specific about (a) which AI tools the firm intends to use in the matter, (b) what types of work the AI will be used for, and (c) the data-handling practices that protect client confidences. The Opinion is silent on the exact language but clear on the standard: a generic “we may use technology” clause does not meet the bar.

A defensible engagement-letter language pack for a midsize firm has three components:

  1. The base AI consent clause

    Identifies the firm's AI tools by name (or by class with the names accessible on request), identifies the categories of work the tools will be used for (first-draft preparation, document review, deposition summaries, contract clause comparisons, etc.), describes the data-handling architecture (tenant isolation, no training on inputs, retention policies), and obtains the client's specific consent.

  2. The carve-out clause

    Names the specific AI uses that require *separate* written consent before deployment in the matter — typically uses involving particularly sensitive categories of client information (privileged communications, trade secrets, medical records, etc.). The carve-out is specific to the firm's practice areas and the client types the firm represents.

  3. The disclosure-when-material clause

    Establishes the firm's commitment to disclose specific AI uses that materially shape the work product or the fee. Aligns with Rule 1.4 communication duties and the Opinion's framing that disclosure is often the right answer when AI is material.

The engagement-letter pack is reviewed and approved by the firm’s ethics partner, deployed across all new matters, and integrated into the firm’s matter-intake workflow. For matters in flight when the pack is deployed, the firm conducts a one-time engagement-letter amendment cycle — a short, signed addendum that brings the existing matter under the new framework.

The state-bar overlay

Opinion 512 supplies the national baseline. Several state bars have issued opinions that layer on top of it. A midsize firm needs to know which apply in its jurisdictions.

State / Bar | Opinion | Date | Most consequential addition
--- | --- | --- | ---
Florida | Florida Bar Ethics Opinion 24-1 | January 2024 | Bars input of confidential information to AI tools that train on inputs. Aggressive interpretation of Rule 1.6.
California | State Bar Practical Guidance | November 2023 | Most explicit fee guidance: may not bill AI-saved time, may bill review time. Influences fee analysis nationwide.
DC | DC Bar Opinion 388 | April 2024 | Frames AI as analogous to outsourcing under DC Rule 5.3. Heavy supervision emphasis.
New York | NYSBA Task Force on AI Report | April 2024 | Recommends client disclosure as default in matters where AI is used. Most disclosure-forward of any state guidance.
Pennsylvania / Philadelphia | Joint Formal Opinion 2024-200 | Fall 2024 | Practical playbook with policy templates. Useful even for non-PA firms as a model.

A midsize firm with multi-state practice must build the policy and the engagement-letter pack to satisfy the *most* restrictive jurisdiction in which the firm practices. Florida 24-1 is the typical binding constraint.

The state opinions reinforce, refine, and occasionally tighten Opinion 512. They do not displace it. A firm that has built its practice around Opinion 512 and reviewed it against the applicable state opinions has a posture that satisfies the operative regulatory environment in 2026.

What a defensible practice looks like

A defensible midsize-firm AI practice is built around three artifacts. Not a hire. Not a platform. Three artifacts and the disciplines that maintain them.

  1. The AI practice policy

    One document, 6–12 pages. Names the firm's approved AI tools, the categories of work they may be used for, the carve-out categories requiring separate consent, the supervision protocol, the training requirement for attorneys and staff, the engagement-letter standard, and the incident-reporting cadence. Signed by the managing partner and the ethics partner. Reviewed annually. Distributed to every attorney and staff member.

  2. The engagement-letter language pack

    Three clauses (base AI consent, carve-out, disclosure-when-material) plus the matter-amendment template for in-flight matters. Integrated into the firm's matter-intake workflow. Reviewed when the firm adopts a new AI tool or expands the categories of work.

  3. The supervision and training protocol

    Documents (a) the responsible-attorney review chain for each AI use, (b) the training every attorney and staff member completes before being authorized to use the tools, (c) the periodic re-training cadence (annual minimum), and (d) the incident-reporting protocol for any AI use that produces a substantive error. Maintained as records the firm could produce on request.

Three artifacts. The discipline is in the maintenance — the policy stays current, the engagement letters reflect the policy, the supervision and training records are real. None of this requires a Chief AI Officer or a six-figure governance platform. It requires a managing partner and an ethics partner running the discipline together.

Field evidence from 2024–2026

The evidence base in 2024–2026 is substantive. Three observations from named events, regulator publications, and engagement field data:

The sanctions environment is real but bounded. Mata v. Avianca (Judge Castel, S.D.N.Y., June 22, 2023) imposed a $5,000 Rule 11 sanction for AI-fabricated case citations; the K&L Gates and Park Avenue Bank matters in 2023–2024 reinforced the lesson. Standing orders requiring AI disclosure, or prohibiting AI use outright, have proliferated across the federal districts. The pattern is consistent: the sanctioned conduct is unverified AI output reaching a tribunal, not AI use itself. A verification discipline closes the exposure.

Adoption is uneven and accelerating. ILTA’s 2024 Technology Survey reports 42% of firms cite litigation support as a top AI use case and 73% expect generative AI to be the top legal-research use case within 18 months. Harvey reported deployment at 28% of Am Law 100 by 2024. Casetext CoCounsel reports named deployments at Fisher Phillips, DLA Piper, Eversheds Sutherland, Bowman and Brooke, and Orrick. The midsize tier is 12–18 months behind Am Law deployment and accelerating.

The malpractice-carrier conversation is shifting. Lawyers Mutual and other carriers have begun adding AI-governance questions to renewal questionnaires in 2024–2025. The questions are typically high-level (does the firm have an AI policy; is there training; is there supervision documentation) — but the trend is toward more specific questions and toward premium adjustments tied to governance posture.

What most firms get wrong

Five failure modes produce the worst outcomes observed in the field. Avoiding them matters more than choosing the right vendor.

  1. Deploying tools before the engagement-letter pack exists. Every matter the tool touches is operating outside the Opinion 512 specific-consent framework. The exposure is not theoretical: a single client complaint or sanctions motion converts the gap into a live ethics issue.

  2. Treating AI as separate from the firm’s existing supervision discipline. The Rule 5.3 supervision protocol the firm uses for paralegals and contract reviewers is the foundation. AI use should plug into the same discipline, not run in parallel.

  3. Vendor-supplied engagement-letter language accepted without review. Vendor templates are starting points. They are written for a generic firm and rarely satisfy the specific-consent standard for a midsize firm with specific practice areas and client types.

  4. Treating the Opinion’s “consider disclosure” as “you must always disclose.” The Opinion requires consideration, not blanket disclosure. The firm that discloses AI use in every matter regardless of materiality creates client confusion and undermines its own positioning. The discipline is to consider, document the consideration, and disclose when material.

  5. Treating the malpractice-carrier conversation as a year-end task. The carrier conversation is most productive when the firm enters renewal with the governance summary already prepared. Carriers that receive governance documentation up front generally accept it without question; carriers that have to chase it more often surface premium adjustments.

What to do in the next 60 days

A 60-day sequence that produces a defensible practice without overreach.

  1. Days 1–7: AI inventory

    Surface every AI tool in use across the firm, including individual subscriptions, ChatGPT and Claude on personal accounts used for client work, and any vendor product that has added AI features. Interview practice group leaders. Build the inventory in a single document.

  2. Days 8–14: Practice policy draft

    Anchored in Opinion 512 and applicable state-bar opinions. 6–12 pages. Names approved tools, categories of work, carve-outs, supervision protocol, training requirement, engagement-letter standard, incident reporting.

  3. Days 15–21: Engagement-letter language pack

    Three clauses (base AI consent, carve-out, disclosure-when-material) plus the matter-amendment template. Reviewed by the ethics partner. Integrated with the matter-intake workflow.

  4. Days 22–28: Supervision and training protocol

    Document the responsible-attorney review chain per AI tool. Build the training cycle (initial plus annual). Build the incident-reporting protocol. Build the audit-readiness records.

  5. Days 29–35: Partner approval and rollout

    Present the policy, the engagement-letter pack, and the supervision protocol to the partnership. Ethics partner is the presenter. Vote and sign-off. Rollout begins.

  6. Days 36–45: Active-matter amendment cycle

    For matters in flight, send the engagement-letter amendment to clients. Track signatures. The active-matter amendment closes the consent gap on existing matters and demonstrates the firm has operationalized the new framework.

  7. Days 46–60: Malpractice-carrier governance summary and training cycle

    Prepare the governance summary the carrier requests at renewal — the policy, the engagement-letter standard, the supervision protocol, the training records. Run the first round of attorney training. The summary is calendar-ready 60 days before renewal.

A firm that completes this 60-day sequence has converted the most exposed governance gap in midsize legal practice into a calendar-ready posture. The cost is internal time plus optional outside review of the engagement-letter pack and the supervision protocol. The outcome is a posture the ethics partner can defend, an engagement-letter standard the firm operates against, a training regimen that satisfies Rule 5.3, and a malpractice-carrier governance summary that produces shorter renewal cycles.

What this engagement looks like

For most midsize firms the choice is whether to run this 60-day sequence with an outside reviewer participating in the engagement-letter pack and the supervision protocol, or to run it entirely internally and bring in a reviewer only if the carrier or a state bar surfaces a question.

The argument for the outside reviewer: the engagement-letter pack and the supervision protocol are the artifacts most often produced in a form that does not reflect the firm’s actual practice — vendor templates copied without thought, supervision protocols that satisfy the policy on paper but not in practice. An outside reviewer who knows the Opinion, the state-bar overlay, and the practice patterns of midsize firms produces artifacts that survive scrutiny. The cost is bounded ($25K–$60K for a typical four-to-eight-week diagnostic). The output is the policy, the engagement-letter pack, the supervision protocol, and the malpractice-carrier governance summary — assets the firm owns and maintains after the engagement ends.

The argument for running it internally: an ethics partner with the time to commit, and a willingness to bring in an external party for a one-time review of the engagement-letter pack, can produce a defensible posture without a full consulting engagement. Many firms at the 10–25 attorney scale do this successfully.