At the 2026 BAFT International Trade & Payment Conference, a panel on AI in Compliance and Fraud Detection brought together leaders from across the payments and risk ecosystem. The discussion was moderated by Ryan Lastra, Senior Vice President of Global Payments and Trade at BNY, and featured Teresa Buechner, VP of Strategic Client Partnerships at Quantifind, alongside Sumner Francisco, Senior Vice President of Domestic Payments at BNY.
What stood out wasn’t a debate about whether AI belongs in financial services. That question has already been answered. The conversation focused instead on what comes next:
- How institutions govern AI
- How they explain its decisions
- How they collaborate across organizational boundaries in an environment where financial crime and fraud are increasingly networked
For years, conversations about AI in financial services focused on potential. What could AI do? Where might it fit? When would regulators allow it?
That phase is over. AI is already embedded in payments, fraud detection, and compliance workflows. The real challenge now isn’t whether AI works. It’s whether institutions can operate it responsibly and effectively across an ecosystem where risk doesn’t stop at the edge of a single bank. Put more bluntly, financial crime and fraud are networked, but defenses are still siloed.
From Isolated Models to Shared Risk Reality
AI today is doing real work. It monitors transactions in real time, flags anomalous behavior, and helps teams reduce massive alert backlogs. Teresa shared examples where AI rescored alerts from existing systems and cleared nearly 70% of them as false positives, turning hundreds of thousands of alerts into a manageable, high-risk subset.

But the panel made an important distinction: optimizing inside one institution is no longer enough. As Teresa noted, risk rarely lives with a single individual or transaction. It lives in networks. Who someone transacts with, how funds move across accounts, and how behaviors propagate across institutions all matter. A customer can appear low risk in isolation while being deeply embedded in a risky ecosystem elsewhere.
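The rescoring pattern described above can be illustrated with a minimal sketch. Everything here is hypothetical, not Quantifind's actual model: the `Alert` fields, the blended scoring weights, and the threshold are illustrative stand-ins. The point is the triage shape: rescore every legacy alert with added network context, then route only the high-risk subset to investigators.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    legacy_score: float   # score from the existing monitoring system
    network_risk: float   # hypothetical signal: riskiness of the counterparty network

def rescore(alert: Alert) -> float:
    # Hypothetical rescoring: blend the legacy score with network context.
    # A production model would learn weights like these from labeled outcomes.
    return 0.4 * alert.legacy_score + 0.6 * alert.network_risk

def triage(alerts: list[Alert], threshold: float = 0.6) -> list[Alert]:
    """Keep only alerts whose rescored risk clears the threshold."""
    return [a for a in alerts if rescore(a) >= threshold]

# Example backlog: most alerts are low-risk false positives.
backlog = [
    Alert("A-001", legacy_score=0.9, network_risk=0.8),  # risky on both signals
    Alert("A-002", legacy_score=0.8, network_risk=0.1),  # risky in isolation only
    Alert("A-003", legacy_score=0.3, network_risk=0.9),  # low risk alone, risky network
    Alert("A-004", legacy_score=0.2, network_risk=0.2),  # clear false positive
]
high_risk = triage(backlog)  # A-001 and A-003 survive triage
```

Note that A-003, which looks benign in isolation, survives triage because of its network context, while A-002, flagged only by the legacy system, is cleared. That asymmetry is the panel's point about risk living in networks rather than in individual transactions.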
Fraudsters understand this. They exploit the gaps between banks, knowing that one institution’s visibility ends where another’s begins.
AI as a Force Multiplier, but Only If the Data Connects
Much of the discussion focused on AI as an augmentation layer rather than a replacement for human decision-making. Machine learning helps surface patterns. Generative AI accelerates investigations. Humans still define risk appetite and make final calls. That augmentation breaks down when data is fragmented. Both Teresa and Sumner emphasized that AI’s effectiveness depends on the breadth of signals it can see. When banks operate in isolation, even the best models are constrained. When signals can be shared responsibly, securely, and within regulatory frameworks, AI becomes exponentially more powerful.
This is where collaboration shifts from a nice-to-have to a strategic necessity.
Payment Rails as the Hidden Collaboration Layer
One of the more interesting insights from the panel was that banks already share a common denominator. They just don’t always treat it that way.
Payment systems themselves, including those operated by the Federal Reserve or The Clearing House (TCH), sit at the center of financial activity. They represent natural aggregation points where risk patterns emerge across institutions, not just within them. Sumner pointed out that these shared rails could become the foundation for industry-wide risk intelligence. That would enable earlier detection of coordinated fraud and more consistent responses across the ecosystem.
In other words, much of the infrastructure for collaboration already exists. The challenge is aligning governance, incentives, and data-sharing models to make it usable.
Regulators Aren’t the Obstacle. Opacity Is.
A recurring myth the panel pushed back on was the idea that regulators are inherently opposed to AI or data sharing.
Teresa made it clear that regulators care less about whether AI is used and more about whether institutions can explain and defend their decisions. Explainability, traceability, and strong model governance are what make innovation viable. They are not optional. The same logic applies to collaboration. When banks understand their own models, data sources, and decision pathways, they are far better positioned to share insights responsibly, participate in cross-institution initiatives, and engage regulators proactively rather than defensively.
The alternative, each institution fighting financial crime alone, only benefits adversaries.
An Arms Race Fought at the Network Level
Fraudsters already operate as a network. They use AI to test boundaries, probe systems with low-value transactions, and adapt faster than any single institution can respond. Defending against that requires more than better models inside one bank. It requires shared intelligence, faster feedback loops, and coordinated responses. These are all areas where AI can help, but only if institutions are willing to work together.
As Teresa noted, AI is not "set it and forget it." Models must be continuously retrained as tactics evolve. Without collaboration, every bank is forced to relearn the same lessons independently, at enormous cost.
The Future Is Converged and Collective
When asked to describe the next two years of AI in one word, Teresa chose “convergence.” Fraud, AML, cyber, identity, and payments risk are collapsing into one connected system problem, and AI is the connective tissue. Sumner called it a “renaissance.” AI is becoming embedded everywhere.
The panel suggested a third, implicit word: community.
The next phase of AI in financial services won’t be defined by who has the best standalone model. It will be defined by who can combine strong governance, human oversight, and cross-institution collaboration, using AI not just to protect individual banks, but to strengthen the financial ecosystem as a whole.
Get in touch to learn how Quantifind is powering the future of financial crime risk detection.