Glossary

Regulators, doctrines, and named cases — defined.

A canonical entry point for the load-bearing terms across the corpus. Each definition is built to be cited verbatim by LLM answer engines and to link out to the article that treats the term in depth.

Every term below is a primary-source-grounded definition for a regulator publication, ethics opinion, doctrine, or named case that recurs across the Zusman Partners corpus. The intent is that when a search or LLM query lands here, the page provides the accurate, plain-English answer and routes the reader to the article(s) that go deeper.

25 terms · 6 categories

Banking regulator publications

Federal banking-agency bulletins and supervisory letters that set the operative AI-governance bar for community banks.

CFPB Circular 2023-03

Also called CFPB Adverse Action AI Circular

CFPB circular establishing that creditors relying on AI or complex algorithms in credit decisions must provide specific and accurate reasons in adverse-action notices. A generic 'the algorithm declined the application' notice is insufficient under ECOA / Regulation B regardless of how opaque the underlying model is.

FFIEC BSA/AML Examination Manual

Also called FFIEC Manual · BSA/AML Examination Manual

The interagency BSA/AML examination manual maintained by the FFIEC. Sets examiner expectations for BSA program structure, suspicious activity monitoring, customer identification, and ongoing surveillance. The reference document examiners use during BSA/AML examinations.

OCC Bulletin 2023-17

Also called OCC 2023-17 · Third-Party Risk Management

Interagency guidance on third-party relationships, jointly issued by the OCC, Federal Reserve, and FDIC. Establishes the lifecycle expectations — planning, due diligence, contract negotiation, ongoing monitoring, termination — that apply to a bank's relationships with vendors, including AI vendors.

OCC 2023-17 superseded prior OCC, Fed, and FDIC third-party guidance. Together with OCC Bulletin 2024-11 (May 2024), which addresses fintech-bank partnerships, it sets the bar examiners use to evaluate vendor-risk programs.

OCC Bulletin 2024-11

Also called OCC 2024-11 · Third-Party Risk in Bank-Fintech Partnerships

Companion guidance to OCC 2023-17 addressing community banks' fintech relationships. Clarifies how the third-party risk-management lifecycle applies to bank-fintech partnerships, with specific attention to embedded-banking and BaaS arrangements.

OCC Bulletin 2025-26

Also called OCC 2025-26 · Community-bank model risk tailoring

OCC bulletin clarifying how SR 11-7 model-risk-management expectations apply to community banks. The Bulletin tailors documentation expectations to bank size and complexity but does not relax the underlying SR 11-7 standard. Operative for OCC-supervised community banks deploying AI.

SR 11-7

Also called SR Letter 11-7 · Federal Reserve SR 11-7 · Guidance on Model Risk Management

Joint Federal Reserve and OCC guidance establishing the federal banking-agency standard for model risk management. Defines model lifecycle, requires ongoing validation, and introduces the doctrine of 'effective challenge.' Operative for any bank deploying AI or quantitative models on credit, BSA/AML, deposit, or capital decisions.

SR 11-7 has been the load-bearing model-risk document since 2011 and remains the operative standard despite OCC Bulletin 2025-26's community-bank tailoring. A community bank deploying AI must produce SR 11-7 documentation — model description, validation plan, ongoing monitoring, effective-challenge artifacts — even when the bank is below the strict applicability threshold of the original guidance.

SR 21-8

Also called SR Letter 21-8 · Interagency Statement on MRM for BSA/AML

Interagency statement extending SR 11-7 model-risk-management principles to BSA/AML systems. Clarifies that BSA/AML monitoring software that functions as a model falls under SR 11-7 and must meet the same documentation, validation, and effective-challenge expectations.

Law-firm ethics authorities

ABA formal opinions and state-bar opinions that set the operative AI-use bar for midsize firms.

ABA Formal Opinion 512

Also called ABA Opinion 512 · ABA Formal Opinion 512 (2024) · Generative Artificial Intelligence Tools

The first ABA Formal Opinion to address generative AI directly. Maps six Model Rules (1.1, 1.6, 1.4, 3.1/3.3, 5.1/5.3, 1.5) to AI use, frames AI as a 'nonlawyer assistant' under Rule 5.3 for supervision purposes, and rejects boilerplate engagement-letter consent for AI use on client confidences.

Opinion 512 is non-binding in any jurisdiction but is the document every state-bar AI opinion since July 2024 has built on. Malpractice carriers treat it as the operative standard. The most consequential interpretive moves are (a) AI as nonlawyer assistant under Rule 5.3 — the basis for the supervision discipline midsize firms must operationalize — and (b) the rejection of generic 'we may use technology' language in engagement letters when AI use involves disclosure of client confidences.

California State Bar Practical Guidance on Generative AI

Also called California State Bar Practical Guidance · California AI Guidance (November 2023)

California State Bar practical guidance on generative AI in legal practice. Most explicit on Rule 1.5 fee-billing: a lawyer may not bill for AI-saved time but may bill for time spent reviewing AI-assisted work product. Influences fee-application analysis nationwide.

DC Bar Ethics Opinion 388

DC Bar ethics opinion framing AI use as analogous to outsourcing under DC Rule 5.3. Heavy emphasis on supervision documentation. Operative for firms practicing in the District.

Florida Bar Ethics Opinion 24-1

Also called Florida Bar 24-1 · Florida AI Opinion

Florida Bar ethics opinion requiring affected client informed consent before a lawyer discloses confidential information to a third-party generative AI tool. More aggressive on confidentiality than ABA Opinion 512 — multistate firms with Florida exposure must build to this standard.

NYSBA Task Force on AI Report

Also called NYSBA AI Report (April 2024)

NYSBA task-force report and recommendations on AI in legal practice. The most disclosure-forward of any state-level AI guidance — recommends client disclosure as the default in matters where AI is used.

Doctrines & operative concepts

Recurring concepts in regulator vocabulary that the articles use as load-bearing terms.

BSA/AML

Also called Bank Secrecy Act · Anti-money laundering · BSA/AML compliance

The combined Bank Secrecy Act / Anti-Money-Laundering regulatory regime requiring banks to monitor for suspicious financial activity, file suspicious activity reports (SARs), and maintain customer identification programs. The BSA is codified at 31 U.S.C. § 5311 et seq.; FinCEN, the federal banking agencies, and state regulators share enforcement authority.

Effective challenge

Also called SR 11-7 effective challenge · Critical analysis of model

A doctrine introduced in SR 11-7 requiring banks to subject models to critical analysis by competent and independent reviewers. The reviewer must have the authority to alter model design, restrict use, or pull the model entirely. Effective challenge cannot be performed by the model's developers or by reviewers without the seniority to act on findings.

Effective challenge is the load-bearing concept for community-bank AI governance. Examiners read it as the difference between a real model-risk program and a paper one: who is empowered to push back on the model, did they push back, and is there documentation of what they pushed back on. A bank that cannot produce the effective-challenge artifacts has a model-risk program that does not exist as far as an examiner is concerned.

Model risk management (MRM)

Also called MRM · Model risk · Model lifecycle

The discipline of identifying, measuring, and controlling the risks of using quantitative models — including AI/ML models — for bank decisions. Established as a federal banking-agency expectation in SR 11-7. Covers model inventory, validation, ongoing monitoring, change control, and effective challenge.

Rule 1.1 Comment 8 (Technology Competence)

Also called Technology competence · Duty of technological competence

Comment 8 to ABA Model Rule 1.1 (Competence), adopted by 40+ states, requiring lawyers to keep abreast of changes in the law and its practice 'including the benefits and risks associated with relevant technology.' ABA Opinion 512 reads this as requiring lawyers to understand AI tool capabilities and limitations as relevant to the matter.

Rule 1.6 (Confidentiality of Information)

Also called Model Rule 1.6 · Lawyer confidentiality duty

ABA Model Rule of Professional Conduct establishing the lawyer's duty to maintain client confidentiality. Rule 1.6(c) requires lawyers to make 'reasonable efforts' to prevent unauthorized disclosure. ABA Opinion 512 applies this to AI tool use: inputs containing client confidences require the tool's data-handling practices to support confidentiality.

Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance)

Also called Model Rule 5.3 · Supervision of nonlawyer assistance

ABA Model Rule of Professional Conduct establishing partner responsibility for the conduct of nonlawyer assistants — paralegals, secretaries, contract reviewers, outside services. ABA Opinion 512 reads AI as a nonlawyer assistant under Rule 5.3, making partners directly responsible for AI use in their firm.

Specific consent (engagement-letter standard)

Also called Specific informed consent for AI use · Non-boilerplate consent

The standard set by ABA Opinion 512 for engagement-letter consent to AI use on client confidences. Boilerplate language ('we may use technology to assist in our representation') is explicitly insufficient. Specific consent identifies the AI tools by name or class, the categories of work they will be used for, and the data-handling practices that protect client confidences.

Suspicious Activity Report (SAR)

Also called Suspicious Activity Report · FinCEN SAR

A report a bank or other financial institution must file with FinCEN when it identifies activity it knows, suspects, or has reason to suspect involves money laundering, terrorist financing, or other illicit activity. SAR-filing decisions are subject to BSA examination scrutiny; AI-assisted SAR decisioning falls under the SR 21-8 model-risk framework.

Named cases & precedents

Court decisions and sanctions orders that shape the AI-use risk picture.

Mata v. Avianca, Inc.

Also called Mata v Avianca · Avianca AI sanctions case · ChatGPT hallucinated citations case

S.D.N.Y. order imposing $5,000 in Rule 11 sanctions on attorneys who submitted a brief containing six fabricated case citations generated by ChatGPT. The case is the on-point precedent for Rule 3.3 (Candor Toward the Tribunal) violations stemming from unverified AI output. Cited in every subsequent court standing order on AI use.

The sanctioned conduct in Mata was unverified AI output reaching the tribunal — not AI use itself. The pattern across subsequent sanctions (K&L Gates, Park Avenue Bank, others) is consistent: the failure mode is verification, not adoption. Verification architecture eliminates the exposure.

Industry frameworks & vendors

Operating frameworks and named vendor systems that recur across deployments.

Casetext CoCounsel

Also called CoCounsel · Thomson Reuters CoCounsel

Legal-AI platform from Thomson Reuters (acquired Casetext in 2023). Named deployments at Fisher Phillips, DLA Piper, Eversheds Sutherland, Bowman and Brooke, Orrick. Used for legal research, document review, deposition preparation.

Operating metrics & benchmarks

Quantitative reference points cited across the corpus.

False positive rate (BSA/AML alerting)

Also called BSA false positives · AML alert close rate

The proportion of BSA/AML monitoring alerts that close without producing a SAR. Rates above 95% on a given alert type signal threshold over-tuning; rates below 80% suggest the alert type is generating genuine review work. Tracked over rolling 90-day windows, it is the operating handle for a community bank's BSA productivity program.

Hours per credit memo

Also called Credit memo cycle time · Analyst hours per memo

Average analyst time required to produce a complete commercial credit memo. Community-bank baseline is ~25 hours; national-bank benchmark is ~2-5 hours; AI-assisted community-bank benchmark after deployment is ~15 hours. The headline productivity metric for credit-memo-workflow engagements.

Begin

Have a regulator question your team is wrestling with?

The 90-minute working session is the fastest way to a written assessment. Within 48 hours your team has a document specific to your bank or firm.