BSA/AML alert triage: the community-bank AI playbook

A working BSA Officer’s guide to compressing false-positive triage by 60%, preserving SAR quality, and building the SR 21-8 governance an examiner can inspect in twenty minutes.

A BSA Officer at a $2 billion community bank pulls the weekend alert queue at 8:30 on Monday morning. 340 alerts. The prior week she cleared 290. Her team has four people including her. The weekly financial-crimes call with the CRO is in three hours. By 11:00 she will have triaged 60. The other 280 will follow her into Tuesday and Wednesday. She has not yet looked at the cases that actually need investigation.

This is the compliance conversation at every community bank between $500M and $10B right now. BSA teams are drowning in false positives. The ratio of actual suspicious activity to alert volume has worsened for four straight years. Filed SARs increased 18.5% between July 2023 and December 2024. Depository institutions filed 2.6 million SARs in FY 2024. The BSA Officer holds the single strongest veto over any compliance initiative, and she is being asked to do more work with the same headcount.

What follows: what an AI-assisted BSA/AML triage practice actually requires of a community bank at this scale, what the operative regulatory documents say, what the vendor landscape looks like in 2026, and what to do in the next 90 days.

The problem in BSA Officer vocabulary

The BSA Officer at a community bank is being asked the same three things:

  1. Our false-positive rate is out of control — can AI fix it? Usually from the CEO or the CFO, who has noticed that the BSA team has grown from three to five over 24 months without the deposit base growing proportionally.
  2. Whatever we deploy has to pass the next FFIEC exam without adding findings. From the CRO, who has read SR 21-8 and knows what proportional review looks like.
  3. The SAR quality cannot drop. From the BSA Officer herself, speaking to whoever will listen — because she carries personal liability for the filings, and a vendor-driven deployment that reduces SAR volume by reducing true-positive detection ends her career.

Each request is a version of the same underlying issue: the rule-based transaction-monitoring system the bank has been running for ten years generates more alerts than the team can process; the alert-to-SAR ratio is now 50:1 or worse in most deployments; and the pressure to automate is colliding with the pressure to preserve the filing quality the FinCEN examiner reads against.

The risk is not that AI is bad at this. AI-assisted triage, properly deployed, materially helps. The risk is that a vendor-driven deployment without SR 21-8 governance produces a system that appears to work, passes the pilot, and then fails the first model-risk review. By then the vendor contract is signed and the internal champion is gone.

What the regulators actually say

Five documents govern. Every community-bank BSA Officer should be fluent in these:

  1. SR 21-8 (April 2021). The interagency statement applying model-risk-management principles to the systems that support BSA/AML compliance. The source of the model-file, effective-challenge, and validation expectations that run through this playbook.
  2. OCC Bulletin 2023-17 (June 2023). The interagency guidance on third-party relationship risk management across the full lifecycle: planning, due diligence, contract negotiation, ongoing monitoring, termination.
  3. OCC 2024-11. The companion third-party risk-management guide for community banks, which scales the 2023-17 lifecycle to this tier.
  4. OCC 2025-26. The proportional-validation guidance that lets a community bank right-size model validation to its scale.
  5. The FFIEC BSA/AML Examination Manual. The procedures the examiner works from; the artifacts in the next section map to its review steps.

What a defensible deployment looks like

A community bank between $500M and $10B deploying AI in BSA/AML builds discipline around four artifacts and a recurring review cadence.

  1. The BSA model file

    One file per AI-assisted tool. Includes model description, data inputs, validation plan, ongoing monitoring schedule, effective-challenge log, and incident record. Signed by the BSA Officer and the CRO. Reviewed annually and after any material threshold change.

  2. The alert-tuning governance protocol

    Names who can change a threshold, who approves the change, how the change is back-tested against historical alerts, and how the false-positive-rate impact is documented. Every tuning decision produces a written artifact the examiner can inspect. No verbal threshold changes.

  3. The SAR-quality audit

    Quarterly. A random 1-in-25 sample of filed SARs is reviewed against the pre-deployment baseline: narrative quality, factual completeness, timeliness. The audit catches SAR erosion early, before the FinCEN examiner does. A bank that cannot produce this audit has no evidence its AI deployment preserved filing quality.

  4. The third-party file

    OCC 2023-17 / 2024-11 lifecycle: planning, diligence, contract, monitoring, termination. Reconciled with the SR 21-8 model file — same vendor, one relationship, two disciplines, zero gaps. Vendor SOC 2 Type II current within 12 months. Named community-bank reference at comparable scale.

  5. The board reporting cadence

    Quarterly to the board Risk Committee. The CRO presents, the BSA Officer supports. Standing agenda: alert volume and disposition, false-positive rate, SAR filing volume, SAR-quality audit results, any incidents or threshold changes. Two pages plus appendix. Read, not skimmed.
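The back-test the alert-tuning protocol calls for is small enough to script. A minimal sketch, assuming a single amount-threshold rule; the field names, the alert history, and the $9,000-to-$9,500 change are illustrative, not any vendor's schema:

```python
# Hedged sketch: back-testing one threshold change against historical
# alerts. The safe-change criterion is that no historical SAR would
# have been suppressed under the new threshold.
from dataclasses import dataclass

@dataclass
class HistoricalAlert:
    amount: float     # transaction amount that fired the rule
    became_sar: bool  # did this alert convert to a SAR filing?

def backtest(alerts, old_threshold, new_threshold):
    """Compare alert volume and SAR capture under both thresholds."""
    def stats(threshold):
        fired = [a for a in alerts if a.amount >= threshold]
        return len(fired), sum(a.became_sar for a in fired)

    old_fired, old_sars = stats(old_threshold)
    new_fired, new_sars = stats(new_threshold)
    return {
        "alert_reduction": old_fired - new_fired,
        "sars_lost": old_sars - new_sars,  # must be 0 for a safe change
    }

history = [
    HistoricalAlert(9_200, False),
    HistoricalAlert(9_300, False),
    HistoricalAlert(9_800, True),
    HistoricalAlert(12_000, True),
]
result = backtest(history, old_threshold=9_000, new_threshold=9_500)
print(result)  # → {'alert_reduction': 2, 'sars_lost': 0}
```

The printed dictionary is the core of the written tuning artifact: the change clears two false positives per period and suppresses zero historical SARs.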

This is the discipline. None of it requires a platform purchase beyond the triage tool itself. It requires a BSA Officer who treats the AI deployment as a risk the bank already knows how to manage: the vendor diligence, the model governance, the filing-quality audit, the board reporting. Same cadence the bank already applies to credit and market risk.

Field evidence from 2024–2026

Two observations from the field:

SAR-quality preservation is not automatic. Deployments that focus only on reducing the alert queue occasionally achieve the false-positive reduction at the cost of SAR volume — the triage tool suppresses alerts that would have converted to filings. The Pinnacle and MidCountry cases preserved SAR volume because the deployments included tuning review protocols and quality audits. Deployments without those protocols have produced measurable SAR erosion — a finding a FinCEN examiner will surface in the next review.

The community-bank advantage is governance speed. Large banks building AI-assisted triage have disclosed multi-year validation cycles. Community banks between $1B and $5B, working against SR 21-8 with proportional validation per OCC 2025-26, have completed full deployments in 4–8 months including governance documentation. The speed is a function of bank size, not vendor capability — the bank that can approve a change in a Tuesday committee meeting moves faster than the bank that routes through four regional committees.

The vendor landscape

Three categories of vendors serve community banks at the $500M–$10B tier. Different strengths, different governance implications.

| Vendor | Named community-bank deployments | Strength | SR 21-8 documentation posture |
| --- | --- | --- | --- |
| Verafin (Nasdaq) | Pinnacle Bank (named); 2,500+ customer base | 66% aggregate false-positive reduction; mature ML tuning | Vendor provides template documentation — typically requires rewrite in bank voice |
| Abrigo | MidCountry Bank, Texan Bank, American Bank N.A. | Community-bank-focused product line; integrated SAR narrative drafting | Template documentation adequate for medium-materiality models; CRO rewrite for high |
| Unit21 | 200+ customers; specific bank names less disclosed | Up to 93% false-positive reduction in aggressive tuning; flexible platform | Self-service documentation — higher burden on bank to build SR 21-8 file |
| Feedzai / DataVisor | Larger banks primarily; fewer community-bank references | Enterprise-grade; over-specified for most $1B–$3B deployments | Governance posture varies; bank carries the documentation burden |

Vendor-supplied documentation is a starting point in every case. A bank that accepts vendor templates unmodified has produced documentation that survives a vendor audit but fails an examiner-facing effective-challenge review.

What most banks get wrong

Five failure patterns that the field data shows produce the worst outcomes. Avoiding these matters more than choosing the right vendor.

  1. Deploying without the alert-tuning governance protocol in place. The most common. The tool goes live, the BSA Officer makes threshold adjustments informally as the team learns, and by month six the audit trail does not exist. The SR 21-8 review finds a gap. The remediation is a documentation rebuild, not a re-deployment — but the CRO’s trust is spent.

  2. Accepting the vendor’s false-positive number uncritically. A “66% reduction” from the vendor is an aggregate across their customer base. The bank’s actual reduction depends on the bank’s current rule-base, customer profile, and transaction mix. Banks that plan capacity recovery to the vendor number and do not back-test have over-promised the CFO.

  3. Letting the SAR-quality audit slip. The quarterly audit is the single most important discipline. Skipping two quarters produces either (a) an unreviewed erosion that the FinCEN examiner surfaces, or (b) a defense-free response if the bank is challenged on filing quality. The audit is 4–6 hours of work per quarter and is non-negotiable.

  4. Running the SR 21-8 and OCC 2023-17 files as separate exercises. Same vendor, same relationship, two files with inconsistent data. The examiner reads both and finds the gap. A bank with one reconciled file per vendor does half the work and produces a cleaner posture.

  5. Buying into the automation rather than the discipline. A BSA Officer who treats the deployment as “the tool does the work now” has lost the review muscle. The tool does the triage; the BSA Officer owns the judgment, the tuning, the quality audit, and the filing. A deployment that erodes the human review produces a compliance posture the FinCEN examiner has explicitly warned against.
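The non-negotiable audit in pattern 3 starts with a reproducible draw. A hedged sketch of the 1-in-25 sample; the SAR IDs and the seed convention are illustrative, and the point is that recording the seed in the audit file lets an examiner re-run the exact same draw:

```python
# Hedged sketch of the quarterly 1-in-25 SAR sample draw. Seeding with
# the quarter label makes the selection deterministic and auditable.
import random

def quarterly_sar_sample(sar_ids, rate=25, seed="2026-Q1"):
    """Draw roughly 1-in-`rate` of the quarter's filed SARs, reproducibly."""
    k = max(1, len(sar_ids) // rate)  # always review at least one SAR
    rng = random.Random(seed)         # seed is recorded in the audit file
    return sorted(rng.sample(sorted(sar_ids), k))

filed = [f"SAR-{i:04d}" for i in range(1, 101)]  # 100 filings this quarter
print(quarterly_sar_sample(filed))               # 4 SARs to review
```

Each sampled SAR is then scored against the pre-deployment baseline on narrative quality, factual completeness, and timeliness, per the audit described above.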

What to do in the next 90 days

A 90-day sequence that produces a defensible deployment. This is the pattern community-bank engagements have converged on across 2024–2025.

  1. Days 1–14: Baseline

    Measure the current state. Alert volume per week, false-positive rate, analyst hours on triage, SAR filings per quarter, SAR-quality sample. You cannot demonstrate a reduction without a documented baseline.

  2. Days 15–30: Vendor evaluation

    Evaluate Verafin, Abrigo, and Unit21, plus the incumbent vendor’s native AI. Require named community-bank references at your tier. Require sample SR 21-8 documentation from a deployed bank. A vendor who cannot produce both is disqualified.

  3. Days 31–45: Pilot design and governance drafting

    Pilot scope — one transaction segment, 60 days. Draft the alert-tuning protocol, the SAR-quality audit cadence, and the SR 21-8 model file in the bank’s voice. Reconcile with the OCC 2023-17 third-party file. Get BSA Officer and CRO sign-off before the pilot starts.

  4. Days 46–75: Pilot execution

    Run the pilot. Document every threshold change. Run the first SAR-quality audit on week 10. Measure: alert reduction, false-positive rate change, SAR volume change, analyst-hour recovery. Meet weekly with the BSA Officer and vendor to surface issues early.

  5. Days 76–90: Governance packet and full deployment decision

    Assemble the examiner-readiness packet: SR 21-8 model file, alert-tuning protocol, SAR-quality audit results, third-party file, board summary. The packet is calendar-ready for the next exam cycle. If the pilot results support it, roll to full deployment on day 91.
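The Day 1–14 baseline file reduces to a handful of derived numbers. A minimal sketch with illustrative counts; the bank substitutes its own figures from the alert queue and timesheets:

```python
# Hedged sketch of the Day 1-14 baseline metrics. Inputs are
# illustrative, chosen to land near the 50:1 alert-to-SAR ratio
# described above; assume a 13-week quarter.
def baseline(alerts_per_week, sars_per_quarter, triage_hours_per_week):
    alerts_per_quarter = alerts_per_week * 13
    return {
        "alerts_per_quarter": alerts_per_quarter,
        "false_positive_rate": round(1 - sars_per_quarter / alerts_per_quarter, 3),
        "alert_to_sar_ratio": round(alerts_per_quarter / sars_per_quarter, 1),
        "minutes_per_alert": round(triage_hours_per_week * 60 / alerts_per_week, 1),
    }

# Roughly 50:1 alert-to-SAR and a 0.98 false-positive rate at these counts
print(baseline(alerts_per_week=325, sars_per_quarter=85, triage_hours_per_week=120))
```

These four numbers, dated and signed, are the documented baseline every later claim of reduction is measured against.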

A bank that completes this sequence has deployed AI-assisted triage with a defensible governance posture. Cost: internal time plus the pilot engagement plus first-year vendor fees. Outcome: 0.5–1.2 FTE of triage capacity recovered, SAR quality preserved and audited, board Risk Committee has a new dashboard to review, and the next FFIEC exam cycle runs shorter.
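The 0.5–1.2 FTE range is simple arithmetic: weekly triage hours times the measured false-positive reduction, divided by a 40-hour week. A sketch with assumed inputs; the 40% reduction and the hour counts are illustrative, and the pilot's measured figure, never the vendor's aggregate, is what belongs in the calculation:

```python
# Hedged sketch of the FTE-recovery arithmetic behind the 0.5-1.2 range.
def fte_recovered(triage_hours_per_week, measured_fp_reduction, fte_hours=40):
    """Triage capacity recovered, expressed in full-time equivalents."""
    return round(triage_hours_per_week * measured_fp_reduction / fte_hours, 2)

print(fte_recovered(120, 0.40))  # 1.2 FTE at a 40% measured reduction
print(fte_recovered(50, 0.40))   # 0.5 FTE for a smaller queue
```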

What this engagement looks like

Two shapes fit most community banks:

Governance-and-pilot shape (most common). A 10–14 week engagement that runs vendor evaluation, pilot design, SR 21-8 documentation, SAR-quality audit protocol, and one documented pilot cycle. Fixed fee, typically $60K–$120K. The bank owns the deployment decision. Outside participation is concentrated in the governance draft and the effective-challenge rounds.

Full-deployment shape. A 4–6 month engagement covering vendor evaluation through full production rollout, including the first quarterly SAR-quality audit. Fixed fee, typically $100K–$200K. Appropriate when the bank’s internal capacity is stretched and the BSA Officer wants an outside party through the first exam cycle.

Either shape produces the same core artifacts: the SR 21-8 model file in the bank’s voice, the alert-tuning governance protocol, the SAR-quality audit cadence, the third-party file reconciliation, and the board Risk Committee dashboard. The bank owns these after the engagement ends. They are the posture the FFIEC examiner reviews.