What ABA Opinion 512 actually requires of a midsize firm
The six Model Rules Opinion 512 addresses, rule by rule, each translated into an operational test an ethics partner at a 20–30 attorney firm can actually apply.
The ethics partner at a 24-attorney firm is reading ABA Opinion 512 for the third time because the managing partner has asked her to sign a one-page governance summary the malpractice carrier requested at renewal. The Opinion is 19 pages and covers six Model Rules, and the summary the managing partner drafted collapses all of that into a paragraph that says the firm “complies with applicable ethics guidance on AI use.” She cannot sign it — not because it is wrong, but because it does not describe anything she could defend if the carrier came back with a follow-up question. This piece walks through the six Model Rules ABA Opinion 512 addresses, the specific midsize-firm requirements each imposes, and the operational test the ethics partner can run for each one.
The problem in ethics-partner vocabulary
Opinion 512's requirements for a midsize firm are not one requirement. They are six, layered, with a different operational test per Rule. Most of what gets written about Opinion 512 treats it as a single compliance question — are we 512-compliant? — and that framing does not survive contact with a real carrier renewal, a real sanctions hearing, or a real state-bar inquiry. The carrier will not ask whether the firm is 512-compliant. The carrier will ask whether the firm has a supervision protocol under Rule 5.3, whether the engagement letters satisfy specific consent under Rule 1.6, whether the litigation practice verifies AI output before filing under Rule 3.3, and whether the fee arrangements handle AI-saved time under Rule 1.5.
The Opinion is a structured map. The ethics partner’s job is to translate the map into six operational tests the firm can run and document. What follows is the rule-by-rule translation. For each Rule: what it requires, the specific application at a 5–50 attorney firm, the most common failure mode, and the operational test the ethics partner applies.
Rule 1.1 (Competence) and Comment 8
What the Rule requires. A lawyer must provide competent representation. Comment 8 extends competence to the benefits and risks of relevant technology. Opinion 512 applies this to AI: the lawyer must understand the tool’s capabilities and limitations as relevant to the matter.
The midsize-firm application. Competence is not a requirement to understand machine learning architecture. It is a requirement that the partner deploying Harvey in a litigation practice or Spellbook in a transactional practice can pass this test: she can describe, in plain English, what the tool does, what it does not do, where it fails, and which categories of work require human judgment regardless of what the tool produces. The test applies to every attorney who touches the tool — the associate running the first draft, the partner reviewing it, the paralegal summarizing a deposition with it. Training the associates and leaving the partner untrained fails the Rule because the partner is the supervisor.
The most common failure mode. The firm buys the tool, runs a 45-minute vendor demo, and treats deployment as training. Three months later an associate cites a hallucinated case because no one taught the associate what hallucination looks like in this specific tool on this specific task. Comment 8 competence is what the associate lacked, and the supervising partner lacked it too.
The operational test. Every attorney authorized to use an AI tool has completed training specific to that tool and signed an acknowledgment the firm retains. The training covers the tool’s known failure modes, the verification discipline, and the categories of work the tool is not used for. The ethics partner can produce the training records on request.
Rule 1.6 (Confidentiality)
What the Rule requires. A lawyer must not reveal information relating to the representation without informed consent, and under Rule 1.6(c) must make reasonable efforts to prevent unauthorized disclosure. Opinion 512 applies this to AI: inputs to the tool that contain client confidences require the tool’s data-handling practices to support confidentiality.
The midsize-firm application. Two moves are required. First, the firm must know — for every AI tool in use — whether the tool trains on inputs, whether inputs are stored, whether they are accessible to the vendor or to other tenants, and what the retention policy is. Second, the firm must satisfy the specific-consent standard in the engagement letter. Opinion 512 explicitly rejects boilerplate: consent recited generically in an engagement letter will not be adequate when AI tools involve disclosure of client confidences. This is the Rule 1.6 requirement under Opinion 512 that catches most firms flat-footed.
The most common failure mode. Two associates use ChatGPT on personal accounts to summarize depositions. The engagement letters in those matters say “we may use technology to assist in our representation.” The consent is legally insufficient under Opinion 512 and the data-handling is outside the firm’s governance. The inputs are in a consumer tool that may train on them, and the firm cannot demonstrate otherwise.
The operational test. For every active AI tool, the firm holds a vendor data-handling attestation in writing (tenant isolation, no training on inputs, retention policy). For every active matter where AI is used on client confidences, the engagement letter names the tool, the categories of work, and obtains specific consent. The ethics partner can point to the clause that satisfies the standard.
Rule 1.4 (Communication)
What the Rule requires. A lawyer must keep the client reasonably informed and explain matters to the extent reasonably necessary for the client to make informed decisions. Opinion 512 applies this to AI: the lawyer must consider whether to inform the client about AI use in the matter.
The midsize-firm application. Opinion 512 does not require disclosure in every matter. It requires the lawyer to consider disclosure and notes that disclosure is often the right answer when AI use is material to work product or to the fee. The NYSBA Task Force Report (April 2024) is the most disclosure-forward state guidance and recommends disclosure as the default in AI-assisted matters. The operational question at a midsize firm is how the responsible partner documents the consideration and the decision.
The most common failure mode. The firm treats the Opinion as requiring universal disclosure and drafts a letter that goes to every client announcing AI use on their matter. The letters generate 40 client calls in a week, several clients ask questions the firm is not ready to answer, and two clients ask the firm to stop using AI on their matter — and the firm has no process for what stopping means. The opposite failure: the firm decides disclosure is never required and never documents the consideration. Both fail the Rule.
The operational test. For every matter where AI is material to the work product or the fee, the matter file contains a short note from the responsible partner recording (a) that disclosure was considered, (b) the decision, and (c) the basis. For matters where disclosure was made, the client communication is in the file. The ethics partner can sample matters at random and find the record.
Rules 3.1 and 3.3 (Meritorious Claims and Candor Toward the Tribunal)
What the Rules require. Rule 3.1 prohibits asserting claims without a basis in law and fact. Rule 3.3 requires candor toward the tribunal and prohibits knowingly making false statements of law or fact. Opinion 512 applies these to AI: the lawyer must verify AI-generated content before filing or asserting it.
The midsize-firm application. This is the Rule most directly tied to the sanctions environment. Mata v. Avianca (S.D.N.Y., June 22, 2023, Case No. 22-cv-1461) is the on-point precedent: a $5,000 Rule 11 sanction for fabricated ChatGPT case citations. Post-Mata federal-district standing orders requiring AI disclosure or prohibition have proliferated. The midsize-firm requirement is a verification architecture that sits between the AI output and the filing — cite-check, source-verify, partner review, disclosure assessment — with documentation sufficient to demonstrate verification occurred.
The most common failure mode. The associate runs a research assignment through the AI tool, the tool returns a draft memo with six citations, the associate skims the citations, the partner reviews the memo assuming the citations were verified, the memo becomes part of a brief, and a fabricated citation reaches the tribunal. The failure is not that AI produced the hallucination. The failure is that no one in the chain owned verification.
The operational test. Every AI-assisted filing has a documented cite-check and source-verification record in the matter file. The responsible attorney’s sign-off is explicit. For any federal district with a standing order, the firm’s compliance with the order is in the file before the filing. The ethics partner can produce the verification record for any filing on request.
Rules 5.1 and 5.3 (Supervision)
What the Rules require. Rule 5.1 imposes supervisory responsibilities on partners for other lawyers in the firm. Rule 5.3 imposes supervisory responsibilities for nonlawyer assistants. Opinion 512’s most consequential interpretive move: AI is treated as a nonlawyer assistant under Rule 5.3. Partners have direct supervisory obligations for the firm’s AI use, comparable to supervisory obligations for paralegals, secretaries, and outside services.
The midsize-firm application. Opinion 512's Rule 5.3 framing changes the supervision question from “do we check AI output” to “do we supervise AI as we supervise a paralegal.” Supervision means training before authorization, defined scope of permitted use, a responsible-attorney review chain, incident reporting when something goes wrong, and documentation sufficient to demonstrate the supervision occurred. Rule 5.1 layers on: the firm’s partners are responsible for ensuring the firm has measures giving reasonable assurance that all lawyers conform to the Rules, which includes the Rule 5.3 AI supervision regime.
The most common failure mode. The firm’s existing supervision discipline for paralegals is strong and documented. The AI deployment runs in parallel. IT picks the tool, the vendor provides training, the associates use it, and the firm’s Rule 5.3 discipline never touches it. A supervision question from a carrier or a state bar would find the paralegal supervision records in order and the AI supervision records nonexistent.
The supervision regime breaks into five elements:

- Identify the responsible attorney per AI tool. For each active AI tool, name the partner who is the responsible attorney under Rule 5.3. This is the partner who would answer a carrier's supervision question.
- Define the authorized-use scope. Name the categories of work the tool is authorized for and the categories carved out. Name the attorneys and staff authorized to use it.
- Require training before authorization. Every attorney and staff member completes training specific to the tool before being authorized. The firm retains the acknowledgment.
- Build the review chain. Every AI-assisted work product has a responsible-attorney review before use, filing, or transmission. The review is documented in the matter file.
- Operate the incident-reporting protocol. Any AI use that produces a substantive error is reported and logged. The log is the demonstration that supervision is live, not paper.
The operational test. The firm can produce, for any AI tool in use, the responsible attorney, the authorized-use scope, the training records, the review-chain documentation, and the incident log. The records exist because the supervision is real, not because the documentation was assembled before a carrier renewal.
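The per-tool records the operational test asks for can live in one structure. A minimal sketch in Python — the class and field names are hypothetical illustrations, not anything Opinion 512 or any practice-management product prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class ToolSupervisionRecord:
    """Hypothetical per-tool Rule 5.3 supervision record."""
    tool_name: str
    responsible_attorney: str               # the partner who answers a carrier's question
    authorized_work: list                   # categories of work the tool may touch
    carved_out_work: list                   # categories explicitly excluded
    trained_users: dict = field(default_factory=dict)   # user -> acknowledgment date
    incident_log: list = field(default_factory=list)    # dated substantive errors

    def may_use(self, user: str, category: str) -> bool:
        """A user may use the tool only if trained and the work is in scope."""
        return (
            user in self.trained_users
            and category in self.authorized_work
            and category not in self.carved_out_work
        )

record = ToolSupervisionRecord(
    tool_name="Harvey",
    responsible_attorney="Partner A",
    authorized_work=["legal research", "first drafts"],
    carved_out_work=["court filings"],
    trained_users={"Associate B": "2024-09-01"},
)
```

The point of the sketch is the shape, not the tooling: if the firm cannot populate every field for a tool, the corresponding element of the supervision regime does not yet exist.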
Rule 1.5 (Reasonable Fees)
What the Rule requires. A lawyer’s fees must be reasonable and must be communicated to the client. Opinion 512 applies this to AI billing: the lawyer may bill for time spent reviewing AI output, learning the tool, and integrating it into the matter, but not for time the AI saved (the hours the tool replaced).
The midsize-firm application. The California State Bar Practical Guidance (November 2023) is the most explicit fee-billing authority and informs Rule 1.5 application nationwide: a lawyer may not charge for AI-saved time but may charge for time spent reviewing AI-assisted work product. For a midsize firm, this translates to a time-entry discipline that distinguishes AI-assisted work from AI-replaced work, and to fee arrangements (hourly, flat-fee, AFA) that handle the distinction consistently.
The most common failure mode. The associate saves four hours on a research assignment using AI, bills the client for six hours as if the research had taken the pre-AI duration, and records the entry without indicating AI was used. The bill is outside Rule 1.5 and outside California’s guidance. The second failure mode is the mirror: the associate uses AI extensively, writes off the saved time, and the firm loses realization on work that should have been billed at review-time rates.
The operational test. Time entries on AI-assisted matters distinguish AI-review time from AI-replaced time. Fee arrangements reflect the distinction. A sampling of bills would survive a client inquiry or a state-bar review. The ethics partner can pull a sample and confirm the discipline holds.
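The billing arithmetic reduces to one rule: review time is billable, replaced time is not. A minimal sketch using the numbers from the failure mode above (the function and its fields are illustrative, not drawn from any billing system):

```python
def billable_hours(review_hours: float, replaced_hours: float) -> float:
    """Under Opinion 512's Rule 1.5 reading, time spent reviewing
    AI-assisted work product is billable; time the tool replaced is
    written off, never billed."""
    if review_hours < 0 or replaced_hours < 0:
        raise ValueError("hours must be non-negative")
    return review_hours

# The failure mode above: research that once took six hours now takes
# two hours of review. The associate may bill the two, not the six.
hours = billable_hours(review_hours=2.0, replaced_hours=4.0)
```

The design choice worth noting is that `replaced_hours` is an argument at all: recording the replaced time, even though it is never billed, is what lets the ethics partner's sampling confirm the discipline holds.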
What a defensible posture looks like
A midsize firm that has run the six operational tests above has a posture that produces three artifacts: a practice policy anchored to the six Rules, an engagement-letter pack that satisfies Rule 1.6’s specific-consent standard, and a supervision protocol that operationalizes Rule 5.3. These are the same three artifacts the pillar page described. Here each one is defined by the rule-by-rule test it must pass. A governance summary the malpractice carrier asks for at renewal is the one-page synthesis of those artifacts. The ethics partner who can sign the summary is the one who has confirmed each of the six tests produces an answer the firm could defend.
What to do next
The work of translating Opinion 512 into six operational tests, building the artifacts that pass each test, and producing the carrier-ready governance summary is a 60-day effort for a firm with the ethics partner’s commitment. The alternative is a four-to-eight-week diagnostic with outside review on the engagement-letter pack and the supervision protocol. The 60-day internal sequence is in the L1 pillar page. The diagnostic version is in phase-7/one_pager_law_ethics_diagnostic.md, with the companion phase-7/aba_opinion_512_alignment.md brief the ethics partner reviews first.