Pillar · B1

AI governance for community banks: what SR 11-7 requires

A working CRO’s guide to building an AI governance practice that holds up under examination, without copying the $50B bank’s playbook, hiring a Chief AI Officer you cannot afford, or treating governance as a vendor purchase.

A community bank CRO at a $1.8 billion institution opens her email on a Wednesday in April. The board chair has forwarded a question from the audit committee: what is our AI position? Her CEO has appended one line: the OCC asked the same thing at lunch.

She has watched two AI initiatives at peer banks fail. One because the vendor couldn’t produce SR 11-7 documentation that survived effective-challenge review. One because the bank built a credit-memo workflow without inventorying the marketing team’s use of ChatGPT, and the examiner found it. Her CFO will ask, before the next board meeting, what governance practice the bank has. Her bank does not have one yet.

This is the situation almost every community bank between $500M and $10B is sitting in right now. What follows: what an AI governance practice actually requires at that scale, what the operative regulatory documents say, what a defensible practice looks like in 2026, and what to do in the next 90 days.

The problem in CRO vocabulary

Every community bank CRO is being asked the same three questions, in roughly this order:

  1. Do we have an AI policy? — usually from the board chair or the audit committee, after they read something in American Banker or attended a state banking association meeting.
  2. What models are we using, and which of them have AI in them? — usually from the CFO, who has noticed that nCino, Verafin, and the bank’s deposit-pricing tool have all added AI features in the last 18 months.
  3. What will the examiner ask? — from the CEO, who is preparing for the next safety-and-soundness exam and has heard the OCC is asking variations of this question informally.

Each question is a different version of the same underlying issue: the bank has accumulated AI exposure across 5–15 operational workflows, the exposure is not inventoried in any single place, and the governance that would normally surround a new model does not exist, because the AI tooling slipped in through vendors, integrations, and individual experimentation rather than through a formal model-deployment process.

The risk is not that the bank’s AI is bad. The risk is that the bank cannot describe what its AI is doing. And the examiner’s opening question, when AI comes up, is "show me the inventory."

What the regulators actually say

The community-bank-relevant guidance is more compact than most CROs realize. There are six documents that matter. None of them are AI-specific; all of them apply.

There is no separate AI rulebook; the six are the model-risk framework (SR 11-7), its community-bank application (OCC 2025-26), the BSA/AML extension (SR 21-8), the third-party framework (OCC 2023-17 and 2024-11), and the consumer-credit overlay (CFPB 2023-03). A community bank that knows these six documents and can demonstrate practice against each has a defensible governance posture.

What a defensible practice looks like

A defensible AI governance practice at a community bank between $500M and $10B is built around one document and four discipline rituals. Not a platform. Not a hire. A discipline.

The five components:

  1. The AI policy document

    One document, 8–15 pages. Defines the bank's AI perimeter (what counts as AI under the policy), the governance structure (named roles, named approvals, named board reporting cadence), the validation cadence per materiality category, the third-party diligence overlay, and the adverse-action specificity standard. The policy is signed by the CRO and approved by the board Risk Committee. It is reviewed annually.

  2. The AI inventory

    One spreadsheet (or one CMS entry per model). Every AI tool the bank uses, with: tool name, vendor, deployment environment, data inputs, decision outputs, business owner, governance owner, materiality tier (high / medium / low), validation cadence, last validation date, next validation date, third-party risk tier, vendor SOC 2 currency. Updated quarterly. Reviewed at every board Risk Committee meeting.

  3. The validation discipline

    For high-materiality AI (anything in credit, BSA, deposit pricing, or fair-lending sensitivity): annual validation plus quarterly performance monitoring, with documented effective-challenge rounds. For medium materiality (operational efficiency tools): biennial validation plus annual performance review. For low materiality (internal-only or workflow assists): biennial review only. Cadence rationale is documented per OCC 2025-26.

  4. The third-party discipline

    Per OCC 2023-17 and 2024-11. For every AI vendor: planning artifact, due-diligence artifact, contract review with named AI provisions, ongoing monitoring (annual at minimum), termination-readiness check. For high-materiality vendors: SOC 2 Type II current within 12 months, tenant-isolation documentation in writing, named-bank reference at comparable scale. The bank's third-party file is the artifact the examiner reviews.

  5. The board-reporting cadence

    Quarterly: AI inventory snapshot, materiality distribution, validation-cadence status, third-party-risk dashboard, any incidents or near-misses. Annually: full policy review and re-approval. Format: 2-page executive summary plus appendix. Read by the board Risk Committee. The CRO's narrative is the document; the rest is supporting data.

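The validation-cadence rules in component 3 can be expressed as a small lookup. This is an illustrative sketch, not a regulatory artifact; the tier names and cadences mirror the text above, and `next_validation` is a hypothetical helper, not part of any vendor tool.

```python
from datetime import date

# Validation cadence per materiality tier, per component 3 above.
# High:   annual validation plus quarterly performance monitoring.
# Medium: biennial validation plus annual performance review.
# Low:    biennial review only.
CADENCE_YEARS = {"high": 1, "medium": 2, "low": 2}
MONITORING = {
    "high": "quarterly performance monitoring with documented effective challenge",
    "medium": "annual performance review",
    "low": "biennial review only",
}

def next_validation(tier: str, last_validated: date) -> date:
    """Return the next validation due date for a given materiality tier."""
    years = CADENCE_YEARS[tier.lower()]
    # Simple year arithmetic; a production schedule would also handle Feb 29.
    return last_validated.replace(year=last_validated.year + years)
```

Under this sketch, a high-materiality tool validated on 2025-03-31 comes due again on 2026-03-31, while a medium-materiality tool validated the same day comes due in 2027.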
This is the discipline. None of it requires a Chief AI Officer. None of it requires a six-figure governance platform. It requires a CRO who treats AI as a category of model risk and runs the cadence with the same discipline already applied to credit, market, and operational risk.

The AI inventory: where most banks have the most exposure

If a community bank does only one thing in the next 90 days, it should be the inventory. The inventory is where the examiner’s conversation will start, and it is the foundation every other discipline depends on.

A defensible AI inventory has these fields per entry. Reading-time test: a CRO should be able to skim the inventory in 5 minutes and answer any examiner question for the next 60 minutes from it.

Field | Why it matters | How most inventories fail
Tool name and vendor | Identifies the system uniquely | Listed as 'nCino' instead of 'nCino Banking Advisor v3.2'
Deployment environment | Establishes whether data leaves the bank | Listed as 'cloud' without specifying tenant isolation
Data inputs | Establishes confidentiality exposure | Vague: 'customer data' instead of 'borrower financials, loan tape'
Decision outputs | Establishes the model's role in bank decisions | 'Recommendations' instead of 'first-draft credit memo to be reviewed by analyst'
Business owner (named) | Establishes operational accountability | Listed by department, not by name
Governance owner (named) | Establishes risk accountability | Often blank; the gap that produces MRAs
Materiality tier | Drives validation cadence | Frequently 'medium' for everything (signals lack of triage)
Validation cadence | Demonstrates SR 11-7 discipline | Often missing for AI-specific models
Last and next validation date | Demonstrates the cadence is real | Often blank or aspirational
Third-party risk tier | Connects to the OCC 2023-17 / 2024-11 file | Often inconsistent with the third-party file
Vendor SOC 2 currency | Demonstrates ongoing diligence | Frequently expired (more than 12 months old)

The fields are mundane. The discipline is in the maintenance: the inventory must be current as of the most recent quarter. A stale inventory is worse than no inventory.
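
The fields above map naturally onto one record per tool, which makes the maintenance discipline checkable rather than aspirational. A minimal sketch, assuming a quarterly refresh script; the class and function names are hypothetical, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryEntry:
    """One row of the AI inventory; fields mirror the table above."""
    tool_name: str               # e.g. 'nCino Banking Advisor v3.2', not just 'nCino'
    vendor: str
    deployment_environment: str  # including tenant-isolation detail, not just 'cloud'
    data_inputs: str
    decision_outputs: str
    business_owner: str          # a named person, not a department
    governance_owner: str        # a named person; blank is the gap that produces MRAs
    materiality_tier: str        # 'high' / 'medium' / 'low'
    validation_cadence: str
    last_validation: date
    next_validation: date
    third_party_risk_tier: str
    soc2_report_date: date       # date of the vendor's most recent SOC 2 report

def maintenance_flags(entry: InventoryEntry, today: date) -> list[str]:
    """Return the staleness issues an examiner would find in this entry."""
    issues = []
    if not entry.governance_owner.strip():
        issues.append("no named governance owner")
    if entry.next_validation < today:
        issues.append("validation overdue")
    if (today - entry.soc2_report_date).days > 365:
        issues.append("vendor SOC 2 older than 12 months")
    return issues
```

Running a check like this at each quarterly refresh is what keeps the inventory current between exams instead of accurate only at the moment of one.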

The shadow inventory is where most banks have the most exposure: the AI tools that are in use but not on the inventory. The most common categories are (a) sanctioned vendor AI features added in product updates the bank did not formally adopt (Verafin’s ML tuning, the loan origination platform’s AI suggestions, the deposit-pricing tool’s recommendation engine), (b) general-purpose AI tools used by individual employees on bank work (ChatGPT used to summarize call reports, Microsoft Copilot embedded in the bank’s M365 tenant, Claude used by the marketing team), and (c) integrations or APIs that include AI components in third-party systems the bank treats as static (CRM enrichment, fraud-check overlays, identity verification).

The bank that surfaces these and adds them to the inventory before the examiner asks has converted the highest-exposure governance gap in the industry today into a controlled artifact.

Field evidence from 2025–2026

The evidence base on community-bank AI governance in 2025–2026 is thin but instructive. Three observations from named institutions, regulator releases, and engagement field data:

The MRA pattern is clear. Across the OCC’s published Q3 and Q4 2024 enforcement summaries, the most common community-bank model-risk MRA centers on inventory completeness — banks have models in production that are not reflected in the inventory the bank presented to the examiner. The remediation cost of a single inventory MRA at a $1B–$3B bank typically runs $500K–$2M in first-year remediation effort plus multi-year board distraction (composite from OCC examination appeal data, Q3 2024).

Vendor-supplied governance is not enough. nCino’s Banking Advisor went generally available in June 2024, with Northern Bank as the named first community-bank deployment. The vendor provides documentation templates intended to support SR 11-7 governance — but those templates are starting points, not deployable artifacts. Banks that deploy them without rewriting them in the bank’s institutional voice have produced documentation that survives a vendor audit but fails an examiner-facing effective-challenge review (composite engagement observation, 2024–2025).

The Chief AI Officer hire is not the answer. ABA’s 2024 Compensation and Benefits Survey reported zero community banks at the $1B–$3B tier with a dedicated Chief AI Officer role. The role exists at the $20B+ tier. At community-bank scale, the CRO is the de facto AI risk owner. Banks that hired a Chief AI Officer at this scale typically produced a governance practice that did not survive the next CEO transition.

What most banks get wrong

Five failure modes show up repeatedly in the field data and produce the worst outcomes. Avoiding them matters more than chasing the next vendor release:

  1. Treating AI as a separate category from model risk. A bank that has a credit-model risk discipline and a separate “AI policy” is creating two parallel governance streams. The examiner reads SR 11-7 as the operative document. So should the bank.

  2. Inventory built once and abandoned. The most common pattern. Inventory built in advance of an exam, accurate at the moment of the exam, untouched between exams. By the next cycle the inventory is incomplete and the bank has earned the same MRA twice.

  3. Vendor documentation accepted as bank documentation. Vendor-supplied SR 11-7 templates are starting points. They have to be rewritten in the bank’s voice, signed by the bank’s CRO, and reflect the bank’s specific risk profile. A document that reads as if it could appear at any community bank is not the bank’s documentation.

  4. Effective challenge interpreted as ‘someone reviewed it.’ Effective challenge requires critical analysis by objective, informed parties who can identify model limitations and produce appropriate changes. A junior analyst signing a review form is not effective challenge. An outside consultant signing a review without the authority to challenge is not effective challenge. The discipline requires the reviewer to have the credentials and the authority to dissent.

  5. Treating the third-party file as separate from the model risk file. OCC 2023-17 and SR 11-7 operate on the same vendor relationships. A bank with two parallel files for the same vendor relationship is doing twice the work and creating reconciliation gaps the examiner will find. One file. One vendor. Both disciplines.

What to do in the next 90 days

A 90-day sequence that produces a defensible practice without overreach. It is the sequence community-bank engagements converged on in 2024 and 2025.

  1. Days 1–14: Inventory build

    Surface every AI tool in use, including the shadow inventory. Interview business owners. Inventory the vendor product releases that have added AI features in the last 18 months. Produce the inventory in the structured format above. Do not yet attempt to validate or score — just establish completeness.

  2. Days 15–30: Materiality triage and policy draft

    Score each inventory entry by materiality (high / medium / low). Draft the AI policy document, anchored in SR 11-7 and OCC 2025-26, scoped to the bank's actual exposure. The policy is short — 8–15 pages. Build the validation-cadence schedule per materiality.

  3. Days 31–45: Third-party file alignment

    For every high-materiality AI vendor, build (or confirm) the OCC 2023-17 / 2024-11 file: planning, due diligence, contract review, monitoring, termination-readiness. Resolve any inconsistencies between the AI inventory and the third-party file. They must reconcile.

  4. Days 46–60: Effective-challenge for the highest-materiality models

    Run a documented effective-challenge round on the top 2–3 high-materiality AI tools. The reviewer must have credentials and authority to dissent. Document the challenge, the response, and any changes. This is the artifact that demonstrates the discipline is real.

  5. Days 61–75: Board Risk Committee approval

    Present the policy, the inventory, the third-party file alignment, and the effective-challenge artifacts to the board Risk Committee. Get the policy approved. Establish the quarterly reporting cadence.

  6. Days 76–90: Examiner-readiness pack

    Assemble the materials the examiner will request: policy, inventory, materiality rationale, validation schedule, third-party file alignment, effective-challenge documentation, board minutes. The pack is calendar-ready 6 weeks before the next exam — never assembled in the week before.
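
The day-76-90 pack assembly reduces to a completeness check against a fixed list. A trivial sketch, with the item names taken from the step above; `missing_items` is a hypothetical helper, not a tool the text prescribes:

```python
# Materials the examiner will request, per the day-76-90 step above.
PACK_ITEMS = [
    "policy",
    "inventory",
    "materiality rationale",
    "validation schedule",
    "third-party file alignment",
    "effective-challenge documentation",
    "board minutes",
]

def missing_items(assembled: set[str]) -> list[str]:
    """Return pack items not yet assembled, in presentation order."""
    return [item for item in PACK_ITEMS if item not in assembled]
```

A pack that returns an empty list here six weeks before the exam is what calendar-ready means in practice.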

A bank that completes this 90-day sequence has converted the most exposed governance gap in the industry into a calendar-ready practice. The cost is internal time plus an optional outside reviewer for the effective-challenge round. The outcome is a posture the CRO can defend, a board summary the chair can present, and an examiner-readiness pack that produces shorter exam cycles.

What this engagement looks like

For most community banks the choice is whether to run this 90-day sequence with an outside reviewer participating in the effective-challenge rounds, or to run it entirely internally and bring in a reviewer only if the OCC raises a finding.

The argument for the outside reviewer: SR 11-7’s effective-challenge requirement is satisfied more readily by a credentialed reviewer outside the management chain than by an internal designee. The cost is bounded ($35K–$75K for a typical four-to-eight-week diagnostic). The output is a board-ready summary, the inventory itself, the policy draft, the materiality rationale, and the effective-challenge documentation for the top high-materiality models — assets the bank owns and maintains after the engagement ends.

The argument for running it internally: a CRO with the time to commit and a willingness to bring in an external party for one or two effective-challenge rounds can produce a defensible posture without engaging a consultant. Many banks at the $500M–$1B tier do this successfully.

The choice is not binary. The most common pattern is internal build of the inventory and the policy, external participation in the effective-challenge rounds for the highest-materiality models, and external production of the board-ready summary and the examiner-readiness pack. This pattern accommodates the bank’s internal capacity while addressing the two artifacts that are hardest to produce credibly from inside the management chain.