For law firms · Ethics-alignment diagnostic

Your associates are already using AI. No one has done the homework.

Five deliverables. The ethics partner reviews the first three as part of the engagement, not after. Four weeks at most firms, six to eight for multi-jurisdictional practices. Fixed fee. No hourly overruns.

The reality the ethics partner already suspects

At least three attorneys used ChatGPT or Claude this week. No one checked whether the tool trains on input.

They drafted research memos, summarized depositions, outlined motions. None of them reviewed Rule 1.6(c)'s reasonable-efforts standard. None of them obtained informed consent from the client whose confidences they entered.

They are not being reckless. They are being productive. The tools work. The problem is that no one has built the governance around them.

The malpractice carrier is asking about AI at renewal. The ethics partner has read ABA Formal Opinion 512 and knows the firm is exposed. The managing partner knows something needs to happen but cannot build a practice policy from scratch, map six state-bar opinions, draft engagement-letter consent language, and train 30 attorneys — while also running the firm.

This page is written for the ethics partner. If you are the managing partner who forwarded this link, the next section is what your ethics partner needs to see.

What the ethics partner sees

A defensible AI practice posture the ethics partner reviews, revises, and approves as part of the engagement.

Not after. Not in response to a system already deployed. The ethics partner is inside the process from week one.

ABA Formal Opinion 512 alignment. The diagnostic maps your firm's AI use against all six of Opinion 512's operative requirements:

  • Competence (Rule 1.1, Comment 8). Your attorneys understand the tools they use — how they work, where they fail, what they cannot do.
  • Confidentiality (Rule 1.6). Every tool is tenant-isolated. Not self-learning. Not trained on client data. The architecture satisfies 1.6(c)'s reasonable-efforts standard.
  • Communication (Rule 1.4). Engagement-letter language satisfying Opinion 512's informed-consent standard. Not boilerplate. Opinion 512 specifically rejects boilerplate consent as inadequate for AI use on client confidences.
  • Candor (Rules 3.1 and 3.3). Verification architecture in every litigation workflow. No AI output reaches a court without a documented review chain.
  • Supervision (Rules 5.1 and 5.3). Opinion 512 treats AI as nonlawyer assistance under Rule 5.3. A documented supervision chain with a responsible attorney at each stage.
  • Fees (Rule 1.5). How AI-assisted work product is billed, consistent with the reasonable-fees standard.

Rule 1.6(c) posture documentation. Tenant isolation, access controls, data retention, and the reasonable-efforts analysis for every AI tool the firm uses or plans to use. This is the document that answers the carrier's questions at renewal.

Rule 5.3 supervision architecture. Which attorney reviews AI-assisted work product. At what stage. With what documentation. Specific to your practice areas and your staffing.

Jurisdiction-specific state-bar mapping. Florida Bar Ethics Opinion 24-1, California State Bar Practical Guidance, DC Bar Opinion 388, NYSBA Task Force on AI Report, Pennsylvania Bar Formal Opinion 2024-200, and others applicable to your jurisdictions. The policy reflects the most restrictive standard across your footprint.

What the engagement produces

Five deliverables. The ethics partner reviews the first three.

01

A firm AI-use audit

A candid assessment of how AI is already being used — including tools associates adopted without approval. Most firms discover more unsanctioned use than they expected. The audit establishes the factual basis for the policy.

02

A practice policy

Calibrated to your firm's actual tools, practice areas, and engagement types. Not a 40-page framework for a 500-attorney firm. A policy your ethics partner can read in 20 minutes and approve with documented revisions.

03

Engagement-letter language

Consent and disclosure language satisfying Opinion 512's informed-consent standard, reflecting your firm's actual AI workflow. Your attorneys can insert it into client-facing documents the week the engagement concludes.

04

A carrier-ready governance summary

Policy, supervision architecture, training, data handling — in a concise document designed for the person at your carrier who reviews your firm's risk posture at renewal. Having it ready before the carrier asks is the strongest response available.

05

Training

Partners, associates, and paralegals are trained on the tools and the ethical guardrails as part of the engagement. They understand what Opinion 512 requires of them personally and how the consent language works in practice.

How the engagement works

Four weeks at most firms. Six to eight for multi-jurisdictional practices. Fixed fee. No hourly overruns.

01

Audit

Week one

We interview attorneys across practice groups and seniority levels. We document what tools are in use and where the gaps are between current practice and the firm's obligations under Opinion 512. The ethics partner receives audit findings at the end of week one.

02

Policy and architecture

Weeks two and three

We draft the practice policy, the supervision framework, and the engagement-letter language. The ethics partner reviews drafts during this period — not after delivery. Their input shapes the policy.

03

Training and delivery

Week four

Attorneys are trained in small groups. The carrier-ready governance summary is finalized. The managing partner receives the partner-meeting memo.

For larger firms or multi-jurisdictional practices, the engagement runs six to eight weeks. The scope is defined before the work begins. The fee is fixed.

What the ethics partner should know about the architecture

Tenant-isolated. Not self-learning. Not trained on client data.

Every system Zusman Partners builds or recommends is tenant-isolated. Your firm's data resides in a dedicated environment. It is not shared with other clients. It is not used to train the underlying model. It is not accessible to anyone outside your firm. The tools are not self-learning. They do not retain client data between sessions.

This is the architecture that satisfies Opinion 512's framework and eliminates the confidentiality exposure that consumer AI tools create.

The ethics partner does not evaluate a system cold. The diagnostic produces a policy the ethics partner builds alongside us. The deliverable is their approval, documented — not a technology decision imposed on them.

The next step

Book a 90-minute working session.

If your firm is using AI without governance — or if the ethics partner has been asked to evaluate something and needs the homework done first — the diagnostic is where that work begins.

You describe the situation. We describe how we would approach it. You leave with a written assessment within 48 hours.

The ABA Formal Opinion 512 alignment brief, the Rule 1.6(c) posture document, and the champion deck for the partners meeting are available from the resources page — no form required.