
Engagement-letter language for AI use: samples with commentary


Five working clauses the ethics partner can adapt this week, with commentary on the regulatory standard each one satisfies.

The ethics partner at a 24-attorney firm has been waiting fourteen months for someone to finalize the firm’s AI engagement-letter language. In those fourteen months, associates have input client confidences into three AI tools under engagement letters that say “we may use technology to assist in our representation” — language that ABA Formal Opinion 512 (July 2024) explicitly identifies as insufficient for specific consent under Rule 1.6. This piece delivers five working engagement-letter clauses — base AI consent, carve-out, disclosure-when-material, active-matter amendment, and malpractice-carrier attestation — with commentary on why each satisfies Opinion 512’s specific-consent standard, what Florida Bar 24-1 adds, and what the California State Bar Practical Guidance requires on fees.

The problem in ethics-partner vocabulary

The AI engagement letter language problem at most midsize firms is a deployment problem, not a drafting problem. The standard engagement letter template predates Opinion 512. The boilerplate technology clause has been in place for years, unchanged because no single partner owns the update. The managing partner knows the update is needed. The ethics partner knows what the update requires. The update has not happened.

This is the gap Opinion 512 exposes when something goes wrong. An associate uses Harvey to draft a brief. The engagement letter says “we may use technology to assist in our representation.” The client later raises a question. The firm cannot produce the specific-consent documentation Opinion 512 requires because the engagement letter never obtained it.

The exposure is on every active matter where AI touches client confidences without specific consent in the record. Closing it requires language the firm can deploy — not a policy discussion, not a partners meeting, but actual clause text reviewed and approved by the ethics partner.

Why boilerplate consent fails the specific-consent test

Opinion 512’s language is direct: boilerplate consent included in engagement letters will not be adequate for Rule 1.6 when AI tools involve disclosure of client confidences. Specific consent must cover five elements that a generic technology clause cannot supply: the specific AI tools (or tool classes) the firm uses; the categories of work the AI performs; the data-handling architecture, including whether inputs train the model and how they are stored; the categories of work carved out from AI use absent separate consent; and the client’s specific, signed consent to all of the above.

The most common drafting error is treating disclosure as equivalent to consent. Disclosure is a Rule 1.4 question — whether and when the client must be informed about AI use. Consent is a Rule 1.6 question — whether the client has authorized the disclosure of confidences to an AI tool. Opinion 512 requires both, and each is satisfied by different language. The five clauses below address both, separately.

The second common error is accepting the vendor’s template without review. Vendor-supplied engagement-letter language is drafted for a generic firm. It does not reflect the firm’s actual tool inventory, the practice-area-specific carve-outs the ethics partner would require, or the client types the firm represents. The ethics partner’s review of vendor templates before deployment is not optional under Opinion 512 — it is the discipline the specific-consent standard demands.

The five clauses

Each clause below is a starting point. The ethics partner adapts the bracketed fields to the firm’s actual tool inventory, practice areas, and client types before deployment.

Clause 1: Base AI consent

The foundation of the language pack. Every matter where AI tools process client confidences needs this clause in the signed engagement letter.

This clause satisfies Opinion 512’s specific-consent standard by naming the tools, the categories of work, and the data-handling architecture, then obtaining signed consent. The tenant-isolation and no-training-on-inputs language also satisfies Florida Bar 24-1, which bars input of confidential information to AI tools that train on inputs — a stricter standard than Opinion 512’s. A multi-state firm with Florida exposure builds Clause 1 to the Florida standard by default.

Clause 2: Carve-out

Not every AI use belongs under the base consent. Categories involving highly sensitive client information — trade secrets, privileged strategy communications, medical records in health-care matters — warrant a separate consent event before AI processing begins.

The carve-out clause closes the consent gap that opens when AI use expands beyond what the base clause covers. Opinion 512 requires specific consent — not a blanket authorization for all AI use in the matter. A client who has consented to AI-assisted brief drafting has not consented to AI processing of trade-secret documents. Florida Bar 24-1 reinforces this point: any disclosure of confidential information to an AI tool requires the client’s informed consent, and the carve-out structure makes the scope of that consent explicit and enforceable.

Clause 3: Disclosure-when-material

Opinion 512 requires lawyers to consider whether to inform clients about AI use when it is material to the work product or the fee. This clause converts that consideration into a documented commitment, defending against Rule 1.4 exposure from post-hoc disclosure questions.

Clause 3 converts the Rule 1.4 consideration from an informal partner judgment into a documented commitment. It also builds the billing discipline the California State Bar Practical Guidance requires. A firm that has committed to notifying clients when AI use materially reduces time billed has, by definition, implemented the mechanism for distinguishing AI-saved time from AI-review time in its matter files. That distinction is the California guidance’s central requirement.

Clause 4: Active-matter amendment

For matters in flight when the language pack deploys, a signed addendum brings existing engagement letters under the new framework. The active-matter amendment cycle is the most immediate step in closing the consent gap — and the one most firms defer indefinitely.

Opinion 512 does not grandfather existing matters. An engagement letter signed six months ago that says “we may use technology” does not retroactively satisfy the specific-consent standard for AI use that began after the letter was signed. The active-matter amendment is the mechanism for closing that gap. Most firms can complete the cycle in two to three weeks: identify active matters where AI has been or will be used on client confidences, send the amendment with a cover note from the responsible partner, and track signatures. The ethics partner who has run this cycle has documented the firm’s operationalization of the language pack.
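The tracking step of the amendment cycle lends itself to a simple script. The sketch below is illustrative only: the CSV layout (columns `matter_id`, `ai_used`, `amendment_signed`) and the file name are hypothetical stand-ins for whatever export the firm’s actual practice management system produces.

```python
"""Minimal sketch of amendment-cycle tracking: flag active matters
where AI touches client confidences but no signed amendment is on
file. Column names and file path are hypothetical assumptions."""
import csv


def matters_awaiting_signature(path):
    """Return matter IDs with AI use on confidences and no signed amendment."""
    pending = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["ai_used"] == "yes" and row["amendment_signed"] != "yes":
                pending.append(row["matter_id"])
    return pending
```

Run weekly against a fresh export, and the ethics partner has a dated record of the shrinking signature gap — the documentation trail a carrier or bar inquiry would ask for.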

Clause 5: Malpractice-carrier attestation

The malpractice-carrier renewal question shifted in 2024–2026. Carriers now ask whether the firm’s engagement letters themselves satisfy the specific-consent standard, not just whether an AI policy exists. This clause is the signed firm attestation that goes to the carrier — not client-facing language.

The carrier attestation is the output of the language pack, not the starting point. A firm that has built and deployed Clauses 1 through 4 can sign this attestation accurately. The attestation references both Florida Bar 24-1 and the California State Bar Practical Guidance by name — which is the citation the carrier’s underwriters expect to see in 2026. Signing it before the underlying language pack is in place creates a governance record that contradicts the firm’s actual practice, which is the failure mode carriers identify at claims time.

What the state opinions require

Opinion 512 supplies the national framework: it interprets the ABA Model Rules, which state bars adapt rather than follow as binding law. Two state authorities tighten the requirements for firms with exposure in those jurisdictions, and both are already addressed in the five clauses.

A firm that builds the language pack to Florida’s standard — tenant isolation and no training on inputs — satisfies Opinion 512, Florida 24-1, and the other state-bar guidance issued through 2026. One language pack, not one per jurisdiction.

What to do next

A 24-attorney firm can build and deploy the language pack in three weeks. The sequence is straightforward.

Week 1: The ethics partner drafts Clauses 1 and 2 against the firm’s actual AI tool inventory. Every tool in active use is named. Every category of work AI is used for is named. Carve-outs reflect the firm’s practice areas. The ethics partner confirms the data-handling language in Clause 1 against the vendor’s written attestation — the tenant-isolation and no-training-on-inputs claims must be verifiable before the engagement letter asserts them to a client.

Week 2: Clauses 3 and 5 are drafted and reviewed with the managing partner. The active-matter amendment (Clause 4) is prepared from the practice management system’s matter list. The ethics partner confirms the amendment identifies every active matter where AI has been or will be used on client confidences. That list is often longer than the managing partner expects.

Week 3: The language pack is presented at the partners meeting and approved. New matters begin under the updated template. The amendment cycle starts for active matters — target is signatures on all active-matter amendments within 10 business days. The carrier attestation is signed when the amendment cycle closes and all five conditions in Clause 5 are factually accurate.

The firm that completes this three-week sequence has protected itself against the most immediate Rule 1.6 exposure in its practice. The engagement letters are accurate. The amendments are signed. The carrier attestation is defensible. Every document a carrier renewal or a bar inquiry would request exists in the file.

For most firms, outside review of the language pack — specifically the Clause 1 data-handling language and the Clause 2 carve-out definitions — is the step that converts a first draft into a deployable artifact. That review typically takes two to four hours of an outside reviewer’s time and covers whether the language satisfies Opinion 512, Florida 24-1, and the California billing standard. The Ethics-Alignment Diagnostic includes that review as part of the engagement.