Article · B1.1
How to inventory the AI tools your bank already uses
A 14-day protocol a community bank CRO can run to produce an AI inventory that survives the examiner’s first opening question, with the shadow inventory surfaced before the exam begins.
A community bank CRO at a $1.8 billion institution is six weeks from her OCC safety-and-soundness exam and opens the AI inventory the prior CRO left her: three rows, last touched in April 2024. Since then Verafin has pushed two ML updates, the credit team piloted nCino Banking Advisor, the deposit-pricing vendor added a recommendation engine, and three departments received Copilot licenses. The most common community-bank model-risk MRA across the OCC’s 2024 enforcement summaries centers on inventory completeness, with first-year remediation running $500K–$2M at a bank her size. This is the 14-day field protocol for producing an AI inventory community bank CROs can defend at the examiner’s opening question.
The problem in CRO vocabulary
“The inventory” is not a spreadsheet. It is the list the examiner asks for first when AI comes up, and it is the foundation every other discipline (validation, effective challenge, third-party file, board reporting) depends on. SR 11-7 has required it since April 2011. OCC Bulletin 2025-26 reaffirms it.
Most community-bank inventories fail the first examiner review for four reasons: stale (last update predates the most recent vendor release), incomplete (shadow inventory never surfaced), ambiguous (tools listed by vendor rather than product and version), and unreconciled (the AI inventory and the third-party file name different vendors for the same system). Remediation is 10x harder after the MRA than before, because the bank is running the build under an examiner clock rather than a planning clock.
What an AI inventory at a community bank must contain
An inventory entry that answers every examiner follow-up for the next 60 minutes has ten fields. Each does a specific job. Each is a place the inventory commonly fails.
| Inventory field | What the examiner proves | How most inventories fail | How to do it right |
|---|---|---|---|
| Tool name, product, version, vendor | Unique identification | Listed as 'nCino' rather than the product and version deployed | Product/module, version, vendor legal entity, updated per release |
| Deployment environment | Whether bank data leaves the tenant perimeter | 'Cloud' without tenant isolation specified | SaaS / dedicated tenant / on-prem; isolation posture; data residency |
| Data inputs | Confidentiality exposure | Vague ('customer data') | Named data classes with source systems |
| Decision outputs | The model's role in bank decisions | 'Recommendations' | The specific decision, draft, score, or triage call, and who consumes it |
| Business owner (named) | Operational accountability | Listed by department, not by person | Named individual with title, updated when the seat changes |
| Governance owner (named) | Risk accountability | Frequently blank (the gap that produces MRAs) | Named individual (typically CRO or VP Risk), signed off |
| Materiality tier | Drives validation cadence per OCC 2025-26 | 'Medium' for everything | High / medium / low with a one-sentence rationale |
| Validation cadence, last/next date | SR 11-7 discipline operating | Blank or aspirational | Cadence per materiality; dates on the calendar |
| Third-party risk tier | Reconciliation with OCC 2023-17 / 2024-11 file | Inconsistent with the third-party file | Tier matches third-party file; one vendor, one record |
| Vendor SOC 2 currency, last review date | Ongoing diligence is current; the inventory is a living record | Expired SOC 2; annual review only before the exam | SOC 2 within 12 months; quarterly review with named reviewer and dated signature |
The shadow inventory
The sanctioned inventory is the easier half of the work. The bank knows it bought nCino. The bank knows it contracted Verafin. The shadow inventory is where the exposure sits: AI tooling in production the bank did not deliberately procure. Three categories cover nearly every shadow-inventory miss across community-bank engagements 2024–2025.
Vendor AI features added via product update. The bank contracted Verafin in 2019 for transaction monitoring. Verafin added machine-learning alert tuning in a 2023 release. The bank did not re-paper the contract and did not add the ML layer to the inventory. The ML layer is nevertheless a model under SR 11-7 from the moment it produces a triage call. Surfacing method: vendor-by-vendor product-release review for releases in the last 18 months. The CIO and procurement lead produce the list in two afternoons.
Individual AI use by employees on bank data. A senior analyst uses personal ChatGPT Plus to summarize call reports. Marketing uses Claude for newsletter drafts. The CFO’s assistant uses Copilot for board-packet drafts. None is on any inventory. All produce a draft, score, or summary shaping a bank action. Surfacing method: the business-owner interview plus a light-touch M365 / Google Workspace audit for unsanctioned AI plug-ins.
AI embedded in third-party systems the bank treats as static. CRM enrichment scoring lead quality. Fraud overlays on the deposit channel. Identity-verification vendors using ML for document authentication. Core-processor AI-assisted reconciliation. Contracted for a non-AI function; the AI came with the stack. Surfacing method: third-party-file walkthrough with the vendor-management lead, focused on AI capability disclosed in the last 24 months of release notes.
The interview protocol
The surfacing work is an interview protocol, not a software crawl. The bank’s people know what they are using. The CRO’s job is to ask the right questions so the answers land in the inventory without friction. One 30-minute interview per business line: credit, BSA, deposits, operations, marketing, HR, IT. Seven interviews in Days 4–7. The CRO (or a delegated VP Risk) runs each. Standing questions are the same for every seat.
The interviews produce two outputs per line: tools to add to the inventory, and re-tier flags for tools already on the inventory but mis-materialized.
The materiality triage
Every entry is scored high, medium, or low. The score drives the validation cadence per OCC Bulletin 2025-26, determines where the bank spends its effective-challenge budget, and decides which vendors get the deeper third-party file. A CRO who cannot articulate why an entry is high versus medium is signaling that triage was not deliberate.
Does the tool affect a consumer credit decision, a BSA/AML decision, a fair-lending outcome, or a deposit-pricing decision across a book of customers? If yes, the entry is high-materiality. Annual validation plus quarterly performance monitoring. The AI-assisted credit memo is high. Verafin’s ML alert tuning is high. The deposit-retention scoring engine is high.
Does the tool shape a bank decision affecting an identifiable customer outcome, but none of the four high categories above? If yes, the entry is medium-materiality. Biennial validation plus annual performance review. A CRM enrichment layer used for relationship-management prioritization is medium.
Does the tool produce drafts or summaries a named human reviews before the output leaves the bank? If yes, and no higher condition applies, the entry is low-materiality. Biennial review only.
Distribution typically lands at roughly 20% high, 35% medium, 45% low at the $1B–$3B tier. Mostly-medium with few high or low is the signal triage did not happen.
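The three triage questions above form an ordered cascade, highest condition first. A minimal sketch, with the category names and the conservative fall-through to medium being my assumptions rather than anything in the bulletin:

```python
from collections import Counter

# The four high-materiality decision categories named above (labels are illustrative).
HIGH_CATEGORIES = {"consumer_credit", "bsa_aml", "fair_lending", "deposit_pricing"}

def materiality_tier(decision_categories: set[str],
                     affects_identifiable_customer: bool,
                     human_reviews_output: bool) -> str:
    """Apply the three triage questions in order, highest condition first."""
    if decision_categories & HIGH_CATEGORIES:
        return "high"      # annual validation + quarterly performance monitoring
    if affects_identifiable_customer:
        return "medium"    # biennial validation + annual performance review
    if human_reviews_output:
        return "low"       # biennial review only
    return "medium"        # assumption: default conservatively when nothing clearly applies

def distribution_check(tiers: list[str]) -> dict[str, float]:
    """Compute the tier mix to compare against the typical ~20/35/45 split."""
    counts = Counter(tiers)
    return {t: counts[t] / len(tiers) for t in ("high", "medium", "low")}
```

Running `distribution_check` over the finished inventory gives the mostly-medium warning sign a single glance can confirm.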
The 14-day sequence
The protocol compresses into 14 working days. A CRO and a VP Risk running this in parallel with their other work complete it inside three calendar weeks. The output is an examiner-ready inventory artifact and a materiality schedule the board Risk Committee approves at the next meeting.
Day 1–3: The seed list
Start from the procurement file and the IT systems list. Every vendor, product, version. Flag any vendor with a release in the last 18 months. Pull release notes. Interview the CIO for integrations-in-flight and pilots not yet through procurement.
Day 4–7: Business-owner interviews
Seven 30-minute interviews: credit, BSA, deposits, operations, marketing, HR, IT. Run the standing question set. Record surfaced tools and re-tier flags.
Day 8: Reconciliation against the third-party file
Walk the inventory against the OCC 2023-17 / 2024-11 third-party file entry by entry. Resolve every mismatch. One vendor, one record. Escalate any vendor with an expired SOC 2.
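The Day 8 walkthrough reduces to a set comparison between the two files. A sketch, assuming both files can be exported as vendor-to-tier mappings (the data shapes here are hypothetical, not a prescribed format):

```python
from datetime import date, timedelta

def reconcile(inventory: dict[str, str], third_party_file: dict[str, str],
              soc2_dates: dict[str, date], today: date) -> list[str]:
    """Flag vendor mismatches between the AI inventory and the third-party
    file, plus any SOC 2 report missing or older than 12 months.

    inventory / third_party_file map vendor name -> risk tier;
    soc2_dates maps vendor name -> date of the last SOC 2 report."""
    findings = []
    for vendor in sorted(set(inventory) | set(third_party_file)):
        if vendor not in third_party_file:
            findings.append(f"{vendor}: on AI inventory, missing from third-party file")
        elif vendor not in inventory:
            findings.append(f"{vendor}: in third-party file, missing from AI inventory")
        elif inventory[vendor] != third_party_file[vendor]:
            findings.append(f"{vendor}: tier mismatch "
                            f"({inventory[vendor]} vs {third_party_file[vendor]})")
        soc2 = soc2_dates.get(vendor)
        if soc2 is None or today - soc2 > timedelta(days=365):
            findings.append(f"{vendor}: SOC 2 missing or older than 12 months -- escalate")
    return findings
```

An empty findings list is the "one vendor, one record" state the examiner expects.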
Day 9–10: Field population and normalization
Populate all ten fields per entry. Normalize tool names (product and version). Name business owner and governance owner on every entry, with no blanks.
Day 11–12: Materiality triage
Score each entry high / medium / low. Document rationale in one sentence. Set validation cadence per tier per OCC 2025-26. Build the validation calendar out 24 months.
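Building the 24-month calendar is mechanical once every entry has a tier and a last-validation date. A sketch, assuming the annual/biennial cadences described in the triage section (the cadence table is my reading of those tiers, not a quotation from the bulletin):

```python
from datetime import date

# Assumed cadences: high = annual validation; medium and low = biennial review.
CADENCE_MONTHS = {"high": 12, "medium": 24, "low": 24}

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping the day to avoid invalid dates."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

def validation_calendar(entries: dict[str, tuple[str, date]],
                        horizon: date) -> list[tuple[date, str]]:
    """Build the validation calendar from {tool: (tier, last validation)} out
    to the horizon (24 months ahead, per Day 11-12)."""
    calendar = []
    for tool, (tier, last) in entries.items():
        due = add_months(last, CADENCE_MONTHS[tier])
        while due <= horizon:
            calendar.append((due, tool))
            due = add_months(due, CADENCE_MONTHS[tier])
    return sorted(calendar)
```

The sorted output is the dated schedule the board Risk Committee sees on Day 14.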
Day 13: CRO review and sign-off
Read the full inventory in one sitting. Challenge any entry that feels under-described. Sign with a dated signature. The signature is the artifact that proves the discipline is real.
Day 14: Board Risk Committee brief
Produce a two-page executive summary: materiality distribution, shadow items surfaced, third-party file reconciliation status, validation calendar. The brief goes on the next Risk Committee agenda.
Field observations from community-bank engagements in 2024–2025 ($1.2B, $2.4B, and $3.8B asset sizes): the protocol surfaced on average 6–11 previously uninventoried AI uses per bank. Vendor AI features added via product update accounted for roughly 40% of the shadow inventory; individual AI use by employees, roughly 35%; third-party-embedded AI, roughly 25%. None of the three banks had surfaced any of these categories before the engagement.
What to do next
A CRO sitting with a stale inventory and an exam on the calendar has two reasonable paths.
Run the 14-day protocol internally. The CRO owns the sequence, a VP Risk supports the interviews, and the bank produces the inventory and materiality schedule without outside help. This works for banks with internal capacity and a CRO who has run a prior inventory cycle. Effective challenge on the highest-materiality entries still needs to happen, typically in a subsequent cycle.
Or run the protocol with an outside reviewer participating in the interviews, the materiality triage, and the board brief. The reviewer brings the effective-challenge credibility SR 11-7 calls for, produces the board summary in the bank’s voice without internal-politics friction, and compresses the calendar when the exam is close. Bounded cost: $35K–$75K for a four-to-eight-week engagement covering the inventory protocol, the policy draft, and the effective-challenge documentation for top high-materiality entries.
The inventory work is the same regardless of path. The choice is who runs it and how defensible the output is at the examiner’s opening question.