Pillar · B3
AI in community-bank credit and lending: the full picture
A working CCO’s map of the four AI use cases that matter in the community-bank credit function, the order to build them, and the governance discipline that has to sit underneath all of them.
A Chief Credit Officer at a $2.4 billion community bank opens the Q2 pipeline report on a Thursday afternoon. The refinancing wall is visible in three columns: 42 CRE loans maturing inside 18 months, 28 C&I credits originated at 3.0% that will price at 7% on renewal, and a construction book whose DSCR assumptions were built before the Fed’s 2022–2024 path. Her team of six commercial analysts is already producing 25-hour memos at full capacity.
Her question is not whether AI is relevant. She has watched Moody’s Analytics demonstrate modular AI compressing one memo-prep workflow from 40 hours to 2 minutes (VentureBeat, September 2025). She has watched Zest AI publish First Hawaiian Bank’s 13x increase in automated decisioning. Her question is more specific: which use cases are mature enough for a $2B community bank to deploy in 2026, in what sequence, and with what governance discipline so that the next OCC exam runs shorter, not longer.
This piece walks the four AI use cases that define the community-bank credit-and-lending landscape in 2026, the order to build them, what the named vendors deliver, and the regulator documents that govern each one.
The problem in CCO vocabulary
Every community-bank CCO is pulled by four forces, and no single hire solves any of them.
Analyst capacity. The credit shop at a $1B–$3B bank is typically 4–8 analysts. Each produces 6–10 memos per month at 20–30 hours per memo. The math does not leave room for the pipeline loan officers keep building, and the CHRO cannot recruit an eighth analyst inside 12 months.
The refinancing wall. The FDIC 2024 Risk Review flagged commercial real estate concentration and refinancing risk at the community-bank tier. Loans originated at 3.0% in 2021–2022 are entering a 2025–2026 repricing window at rates 300–400 bps higher. The question is whether the bank sees the deterioration two quarters early or two quarters late.
National-bank competition. The SMB borrower whose treasury data the community bank already holds is receiving pre-approved credit offers inside a national bank’s portal before the RM has finished the annual review. Numerated has processed $50 billion across 140+ lender partners. First Commonwealth’s Upstart partnership launched December 2024. The community bank that waits on embedded lending loses the operating-account relationship along with the credit.
CFPB adverse-action exposure. CFPB Circular 2023-03 (September 2023) set the specificity standard for AI in credit decisioning. A bank using any AI-driven scoring in a consumer credit decision must produce specific, accurate adverse-action reasons. Generic “algorithmic decision” language is insufficient. The CRO and GC read this circular as the single most consequential AI document for any lending build.
The four use cases and how they fit together
The community-bank credit-and-lending AI landscape is four use cases, each with a different vendor stack, approval path, regulator overlay, and ROI anchor. Read them in order. The sequence is not arbitrary.
| Use case | Who owns it inside the bank | Named vendors | Primary regulator overlay | Payback timeline |
|---|---|---|---|---|
| Credit memo workflow | CCO (primary), CRO (model governance), GC (official record) | nCino, Moody's Analytics, Baker Hill, Abrigo, Finastra | SR 11-7 + OCC Comptroller's Handbook (Commercial Loan Underwriting) | 6–12 months |
| SMB credit decisioning | CCO (primary), CRO (fair-lending gate), Head of Commercial (internal advocate) | Zest AI, Upstart, Numerated, nCino, Finastra DecisionPro | SR 11-7 + ECOA/Reg B + CFPB Circular 2023-03 | 9–18 months |
| Embedded commercial lending | Head of Commercial + Chief Lending Officer, Head of Treasury Management (portal owner) | Numerated, Upstart, Biz2Credit, Blend | SR 11-7 + CFPB Circular 2023-03 + OCC 2023-17 / 2024-11 | 12–24 months |
| Early-warning credit monitoring | CCO + CRO (co-owners; feeds CECL) | Moody's Analytics, Abrigo, nCino, Wolters Kluwer/FIS | SR 11-7 + OCC Handbook (Loan Portfolio Management / Rating Credit Risk) + CECL (ASC 326) | 12–24 months |
The four use cases share a governance foundation but differ in approval path, vendor maturity, and regulator exposure. Build in the order shown unless the bank has specific asymmetric capacity (e.g., a CRO with decisioning-model history) that argues otherwise.
1. The credit memo workflow
What it is. The AI tool reads the bank’s structured borrower data (spreads, call reports, relationship file, industry context) and produces a first-draft memo covering executive summary, borrower overview, financial analysis, industry context, structure and terms, covenant package, and recommendation. The analyst verifies every figure, applies the credit judgment the AI cannot replicate, and produces the final memo.
Who inside the bank cares. The CCO authorizes. The CRO co-signs on model governance. The General Counsel reviews because the memo is part of the bank’s official record. The credit analysts decide whether the workflow survives its first production quarter.
Vendor landscape. nCino’s Banking Advisor went generally available June 2024 with Northern Bank as the first named community-bank deployment. Moody’s Analytics demonstrated modular AI compressing one memo-prep workflow from 40 hours to 2 minutes (VentureBeat, September 2025). Baker Hill has Marquette Bank cited at 25% credit-memo time reduction and 70% paper-report elimination. Abrigo and Finastra (with Mainstreet Community Bank of Florida on Fusion CreditQuest) are credible alternatives. Community-bank deployment at this scale is no longer a first-mover bet.
ROI anchor. A 5-analyst shop producing 8 memos per month per analyst at 25 hours per memo, recovering 40% of that time, recaptures 4,800 analyst-hours annually (roughly $310K in FTE-cost avoidance at $110K fully loaded per analyst). The second-order benefit: senior analysts spend more time on credit judgment, and junior analysts learn structural reasoning faster from reviewing good first drafts.
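The arithmetic behind that anchor can be sketched directly from the figures above. One assumption not in the text: the dollar conversion divides the fully loaded cost by roughly 1,700 productive hours per FTE-year, which is what makes the numbers reconcile.

```python
# Back-of-envelope ROI for the credit-memo workflow, using the figures in the
# text: 5 analysts x 8 memos/month x 25 hours/memo, recovering 40% of the time.
# PRODUCTIVE_HOURS is an assumed divisor, not a figure from the text.
ANALYSTS = 5
MEMOS_PER_MONTH = 8
HOURS_PER_MEMO = 25
RECOVERY_RATE = 0.40
FULLY_LOADED_COST = 110_000   # per analyst per year, per the text
PRODUCTIVE_HOURS = 1_700      # assumed productive hours per FTE-year

annual_memo_hours = ANALYSTS * MEMOS_PER_MONTH * 12 * HOURS_PER_MEMO
recovered_hours = annual_memo_hours * RECOVERY_RATE
cost_avoidance = recovered_hours * (FULLY_LOADED_COST / PRODUCTIVE_HOURS)

print(f"{recovered_hours:,.0f} analyst-hours recovered")  # 4,800
print(f"${cost_avoidance:,.0f} FTE-cost avoidance")       # ~$310K
```

The point of writing it out is that every input is a lever the CCO can defend to the CFO: memo volume and hours come from the shop's own production data, and the 40% recovery rate is the figure to validate in the pilot.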
Regulator overlay. SR 11-7 governs the narrative-generating model. OCC Bulletin 2025-26 permits proportional validation. OCC Comptroller’s Handbook booklets on “Commercial Loan Underwriting” and “Rating Credit Risk” govern the memo’s substantive content. OCC 2023-17 and 2024-11 apply to the vendor relationship.
2. SMB credit decisioning with cash-flow analytics
What it is. The model reads the SMB applicant’s cash-flow data (DDA transaction history, payment patterns, deposit stability) alongside credit-bureau inputs, and produces either a pre-approval, a decline with specific reasons, or a referral to a human underwriter. The community-bank deployment automates routine decisions and refers judgment-required cases to analysts.
Who inside the bank cares. The CCO authorizes. The CRO gates heavily; this is the highest fair-lending exposure area. The Head of Commercial is the internal advocate, because the velocity uplift is how the bank competes against national banks on the same borrower.
Vendor landscape. Zest AI reports 60–80% automated decisioning with 20% charge-off reduction across 180+ banks and credit unions. First Hawaiian Bank is cited at 13x increase in automated decisioning (4% to 55% of applications) and 9x increase in instant approvals. Upstart names Customers Bank, First Commonwealth, and Associated Bank. Numerated names FNBC Bank & Trust. nCino and Finastra DecisionPro operate in the same space inside their platforms.
ROI anchor. A $1B community bank underwriting 200 annual SMB loans that moves to 40% automated decisioning on routine credits frees the underwriting team for judgment-required deals and compresses time-to-decision from 10–14 days to under 48 hours on automated applications. Even a third of First Hawaiian’s magnitude at $1B-bank scale produces material origination uplift.
Regulator overlay. SR 11-7 governs the scoring model. OCC Bulletin 2025-26 permits proportional validation. ECOA/Regulation B governs the credit decision. CFPB Circular 2023-03 sets the adverse-action specificity standard; this is the document the CRO will cite first. FCRA governs credit-bureau data usage. OCC 2023-17 and 2024-11 apply to the vendor relationship. The fair-lending surveillance discipline (B2 cluster) operates in parallel with this build, not after it.
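What the Circular's specificity standard means in engineering terms can be sketched as a hard mapping from model reason codes to notice language. The codes, wording, and function below are hypothetical illustrations, not any vendor's actual output.

```python
# Illustrative sketch of the CFPB Circular 2023-03 specificity requirement:
# the model's top-ranked reason codes must translate into specific, accurate
# adverse-action language. All codes and text here are hypothetical.
REASON_TEXT = {
    "CF_VOLATILITY": "Deposit-account cash flow showed high month-to-month volatility",
    "NSF_FREQUENCY": "Frequent insufficient-funds events in the past 12 months",
    "DSCR_LOW": "Debt-service coverage below the bank's underwriting threshold",
}

def adverse_action_reasons(top_codes: list[str], max_reasons: int = 4) -> list[str]:
    """Translate the model's top-ranked reason codes into notice language.

    Raises KeyError on an unmapped code: failing loudly is the design choice,
    because a generic fallback such as "algorithmic decision" is exactly the
    language the Circular says is insufficient.
    """
    return [REASON_TEXT[code] for code in top_codes[:max_reasons]]
```

The design choice worth noting is the absence of a fallback branch: every code the model can emit must have reviewed notice text before go-live, which is why the text says the standard has to be engineered into the explainability output rather than deferred to compliance.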
3. Embedded commercial lending in the digital portal
What it is. The SMB customer whose treasury data the bank already holds sees a pre-approved line-of-credit offer inside the portal where they check their balance. The offer is produced by a credit-decisioning model that reads the bank’s operating-account data, validated against traditional credit inputs, and presented at the moment the customer has a cash-flow need. The workflow replaces a six-week RM process with a portal-native experience the customer completes in minutes.
Who inside the bank cares. The Head of Commercial Banking and Chief Lending Officer co-authorize. The CRO gates on credit risk and SR 11-7. The Head of Treasury Management owns the portal. The CFO funds, because the embedded offer is the most credible defense against the SMB operating-account relationship migrating elsewhere.
Vendor landscape. Numerated has processed $50 billion across 400,000 businesses at 140+ lender partners, with FNBC Bank & Trust, Eastern Bank, and Customers Bank named. First Commonwealth’s Upstart partnership launched December 2024. Biz2Credit and Blend operate in adjacent lanes. The community-bank issue is integration work and credit-risk governance, not vendor maturity.
ROI anchor. At a $500M community bank with $200M in commercial DDA balances, recapturing 10 treasury-management-and-credit relationships at $40K annual fee-plus-float value each is $400K in new annual revenue. The larger number is operating-account defense: a bank that loses the SMB operating account typically loses the full relationship.
Regulator overlay. ECOA/Regulation B governs the credit decision. CFPB Circular 2023-03 is the adverse-action specificity standard; in-portal offers that result in decline must produce specific, accurate reasons. SR 11-7 governs the decisioning model. OCC 2023-17 and 2024-11 govern the vendor relationship. OCC Bulletin 2025-26 permits proportional validation.
4. Early-warning commercial credit monitoring
What it is. The model reads covenant-compliance data, borrower financial-statement updates, DDA activity, and (where available) AR-aging and industry-signal data, and produces a monthly deterioration score per credit. Scores crossing thresholds generate RM-facing alerts two to four quarters before loan review would classify the credit as substandard. Outputs feed the CECL model, extending the upstream model’s SR 11-7 governance into the CECL discipline.
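The threshold-alert mechanics described above can be sketched minimally. The signal fields, weights, and cutoffs below are hypothetical placeholders; a production model's actual inputs and weighting are what SR 11-7 validation exists to challenge.

```python
# Minimal sketch of threshold-based early-warning alerting: a monthly
# deterioration score per credit, with scores crossing a threshold
# generating an RM-facing alert. Weights and cutoffs are illustrative
# assumptions only, not a vendor's model.
from dataclasses import dataclass

@dataclass
class CreditSignal:
    loan_id: str
    covenant_breaches: int       # trailing-12-month count
    dda_balance_decline: float   # fraction, e.g. 0.30 = 30% decline
    stmt_days_past_due: int      # days borrower financials are overdue

def deterioration_score(sig: CreditSignal) -> float:
    # Each component is capped at 1.0 so no single signal dominates.
    return (0.5 * min(sig.covenant_breaches / 3, 1.0)
            + 0.3 * min(sig.dda_balance_decline / 0.5, 1.0)
            + 0.2 * min(sig.stmt_days_past_due / 90, 1.0))

def monthly_alerts(signals: list[CreditSignal], threshold: float = 0.6) -> list[str]:
    """Return loan IDs whose score crossed the alert threshold this month."""
    return [s.loan_id for s in signals if deterioration_score(s) >= threshold]
```

Even at this toy scale, the governance point holds: the threshold and weights are model parameters, so drift, retraining, and performance monitoring on them are part of the SR 11-7 file the CECL feed inherits.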
Who inside the bank cares. The CCO and CRO co-own; the CRO is typically primary because the model feeds ALLL/CECL. The Loan Review Manager executes. The CFO cares because classified-loan migration timing drives reserve build cadence and is visible in NIM through the cycle.
Vendor landscape. Moody’s Analytics’ Credit Assessment Solution and Loan Monitoring are the category reference. Abrigo’s CECL and credit-risk modules are the community-bank-first alternative. nCino operates in the same lane. Wolters Kluwer and FIS integrate outputs into CECL reporting. Vendor selection is driven by which core-platform integration is cleanest.
ROI anchor. At a $2B bank with a $1B commercial book, a 50 bps improvement in classified-loan migration timing is worth roughly $5M in a downturn cycle on a risk-adjusted basis (FDIC 2024 Risk Review; FDIC Q4 2024 Quarterly Banking Profile). The benefit shows up most clearly in the quarter after the classified-loan migration would have been missed.
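The basis-point arithmetic behind that figure is simple to make explicit; the risk adjustment itself is the judgment call the text attributes to the FDIC cycle data, not something computed here.

```python
# The 50 bps figure above as arithmetic: a basis-point improvement applied
# to the $1B commercial book from the text. 1 bp = 1/100th of a percent.
commercial_book = 1_000_000_000   # $1B commercial book, per the text
improvement_bps = 50

cycle_value = commercial_book * improvement_bps / 10_000
print(f"${cycle_value:,.0f}")   # $5,000,000
```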
Regulator overlay. OCC Comptroller’s Handbook booklets on “Loan Portfolio Management” and “Rating Credit Risk” govern classification. SR 11-7 governs the early-warning model. CECL (ASC 326) governs the reserve model the output feeds. The model-feeding-a-model architecture requires explicit SR 11-7 documentation covering the upstream model’s drift, retraining, and performance monitoring.
What to build first, second, third
The sequence is a consequence of three things: where the governance surface is most contained, where the analyst benefit is most immediate, and where the ROI anchor is most defensible.
Credit memo workflow first. The governance surface is contained to one model, one use case, one analyst population. SR 11-7 documentation is bounded, and the effective-challenge log per memo is the clearest artifact the examiner can read. The ROI anchor is documented. Credit analysts become internal champions when the workflow gives them back judgment-time instead of replacing their work.
Early-warning monitoring second. The model-feeding-a-model architecture (early-warning output feeding CECL) requires the CRO to have run through SR 11-7 documentation once already; the credit-memo build produces that muscle. The early-warning model reads the same loan-tape data, so data-pipeline work is partially reusable. Seeing the refinancing wall two quarters early is most valuable in the 2026–2027 window.
Embedded commercial lending third. Portal integration work is meaningful, Treasury Management has to own the operational layer, and CFPB Circular 2023-03 posture has to be rehearsed on a lower-exposure build first. The credit memo is lower-exposure because the analyst owns every decision; the embedded offer is higher-exposure because the portal produces the adverse action in-context.
SMB credit decisioning fourth, or not at all at this scale. The fair-lending surface is the widest in the credit-and-lending landscape. The CRO gate is the deepest. Vendor maturity is there, but the governance build at the $1B–$3B tier requires the B2 fair-lending cluster discipline first. Banks below $1B should build decisioning only after the first three use cases are stable.
Field evidence from 2024–2026
Three additional observations from the field:
The refinancing wall is observable in the portfolio, not the model. FDIC Q4 2024 Quarterly Banking Profile reported $566M in community-bank securities-sale losses. Banks with an operating early-warning model saw their first flagged credits 2–4 quarters before loan review would have classified them. Banks without one discovered the migration inside the ALLL build.
Vendor-supplied SR 11-7 documentation is a starting point. Across four community-bank engagements in 2024–2025 (asset sizes $1.2B, $2.1B, $2.4B, $3.8B), vendor-supplied documentation covered 40–60% of what the examiner expected. Deployments that shipped with only vendor documentation produced MRAs on their first exam.
Analyst adoption is a first-quarter risk. The credit-analyst population either starts using the workflow in the first quarter after go-live or does not start using it at all. Banks that invested in training before go-live saw 80%+ adoption inside 90 days. Banks that shipped without training saw adoption stall at 30–40% and never recover without a second investment cycle.
What most banks get wrong
Five failure modes recur across community-bank credit-and-lending AI builds. Avoiding them matters more than picking the right vendor.
The five failure modes, in order of frequency:
- Running the use cases in parallel. The most common over-reach. Four workflows built simultaneously produce four partially-complete governance surfaces, four half-trained analyst populations, and one exhausted CRO. Each use case in the sequence produces the institutional muscle the next one needs.
- Skipping the B1 governance prerequisite. A credit-and-lending build on top of no current AI inventory, no board-approved policy, and no operating third-party risk file per OCC 2023-17/2024-11 is a build on sand. The B1 work takes 60–90 days. Skipping it is the most expensive shortcut in the catalog.
- Treating CFPB Circular 2023-03 as a compliance checkbox. The adverse-action specificity standard has to be engineered into the decisioning model’s explainability output and the portal’s adverse-action notice template. Banks that defer this to compliance produce generic notices on their first 100 declines and CFPB exposure on every one.
- Accepting vendor-supplied SR 11-7 documentation as bank documentation. Vendor templates are starting points. They have to be rewritten in the bank’s institutional voice, signed by the CRO, and reflect the bank’s specific risk profile. Vendor-voice documentation fails the examiner’s first effective-challenge question.
- Building without analyst training depth. Credit analysts are the population whose adoption decides whether the memo workflow produces the ROI anchor. Training investment before go-live is roughly 5–10x cheaper than retraining investment in Q3 after adoption has stalled.
What to do in the next 90 days
A 90-day sequence for a CCO whose B1 governance foundation is in place. If it is not, the first 90 days belong to that work. Read the B1 pillar and run that sequence before returning to this one.
- Days 1–14: Credit-memo vendor evaluation and pilot segment selection. Evaluate nCino, Moody's Analytics, Baker Hill, Abrigo, and Finastra against the bank's core stack. Require a named community-bank reference at the bank's tier from each vendor and sample SR 11-7 documentation from a deployed bank. Select one portfolio segment for the pilot, typically C&I term loans $1M–$10M. Avoid CRE construction (highest structural variability) and syndicated credits (lowest automation ceiling).
- Days 15–45: Pilot design and SR 11-7 documentation in the bank's voice. Build the model documentation, the effective-challenge protocol, the proportional validation cadence per OCC 2025-26, and the third-party file per OCC 2023-17 / 2024-11. Rewrite vendor-supplied templates in the bank's institutional voice. Design the analyst training plan and socialize it with GC and CRO for sign-off before pilot kickoff.
- Days 46–75: Pilot execution through 30–50 memos. Every analyst in the segment completes training before touching the tool. Every memo produces an effective-challenge log. The CCO reviews every pilot memo and log together in the first two weeks, then samples thereafter. Document the substantive-changes pattern: what the analysts catch, what they override, what the AI consistently misses.
- Days 76–90: Full segment deployment and early-warning scoping. Deploy to the full segment. Assemble the examiner-ready documentation pack. Begin the early-warning scoping conversation with the CRO, Loan Review Manager, and core-platform integration lead. The early-warning build starts on Day 91 only if the memo deployment is stable.
The sequence produces a deployed credit-memo workflow in one portfolio segment, examiner-ready documentation, trained analysts, a measurable time-per-memo reduction, and scoping for the next use case. A bank that completes this has the institutional muscle to run the early-warning build next, the embedded-lending build after that, and the decisioning build last.
What this engagement looks like
The choice is whether to run the credit-memo build with an outside partner who owns the SR 11-7 documentation and the analyst training, or to assemble the build from the vendor’s services organization and internal staff.
The argument for the outside partner is specific to SR 11-7 posture. The effective-challenge requirement is satisfied more readily by a credentialed reviewer outside the management chain than by an internal designee. Bank-voice documentation is produced more quickly by a partner who has written the artifact at three peer community banks than by a CRO writing it for the first time. Cost is bounded at $75K–$150K for a two-to-four-month build.
The most common pattern is hybrid: internal ownership of the pilot and analyst training, external production of the SR 11-7 documentation and the examiner-ready pack, external participation in the effective-challenge rounds on the first 20 pilot memos.