What It Really Takes to Build AI Decisioning Platforms Banks Can Trust
Building a Decision Intelligence platform for financial services sounds straightforward until you’re actually doing it. Provenir CPO David Mirfield joined Helen Yu on CxO Spice (Episode 133) to get into the specifics: the architectural decisions, the roadmap trade-offs, and the hard-won lessons from two decades of working with banks, fintechs, and everyone in between.
Here are the key insights from their conversation.
One platform, built for the full lifecycle
Financial services organizations have spent years assembling point solutions for credit risk, fraud, onboarding, and customer management. The result is fragmented data, duplicated logic, and decisions made in silos that don’t reflect how risk actually moves across the customer journey.
David’s take on why that’s such a persistent problem:

That’s the real challenge of building a unified platform: it’s as much organizational as technical. Customers can run separate teams on one platform for legitimate regulatory or logistical reasons and still get the benefit of shared data and shared logic.
And that logic overlaps more than most people realize. Credit and fraud share roughly 90% of the same data and strategic considerations. Building separate capabilities for each means solving the same problem twice and introducing blind spots at the seams.
The platform also serves many different users simultaneously:
- The senior credit risk manager setting strategy
- The deeply technical analyst deploying code and managing workflows
- The data scientist running R and Python models
- The business user who needs to adjust a decision flow without writing a line of code
Provenir’s approach is to maintain genuine technical depth while progressively building toward low-code and no-code interfaces, working up from a strong foundation rather than stripping the platform down.
Use case agnostic, model agnostic
This was one of the most quotable moments in the conversation, and Helen said she was stealing it:

Provenir hasn’t built a dedicated fraud product or a dedicated credit product. It’s built an engine flexible enough to serve both, and everything in between, without constraining how customers configure it. The platform’s breadth is a feature, not a lack of focus.
The same thinking applies to AI. The pace at which foundation model providers are moving makes it strategically unwise to commit to any single LLM or agentic framework.
“I don’t think anyone would pretend to be able to keep up with the aggressive pace that Anthropic, OpenAI, and all of the others are moving at. They don’t seem to have a clear moat — people are switching from one to another as soon as the best version is available.”
Provenir’s response is to be the orchestration layer, not the AI itself. That means staying agnostic across LLMs, agentic capabilities, and frameworks, and adding support natively as they mature. The most recent example: MCP support, already integrated into the platform.
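The orchestration-layer idea can be sketched in a few lines. This is an illustrative pattern, not Provenir's implementation: the class names, the `complete` method, and the stub provider are all hypothetical. The point is that decisioning logic depends only on a narrow interface, so swapping one LLM vendor for another never touches the flow itself.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Narrow interface the decisioning flow depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubProvider(ModelProvider):
    """Stands in for any hosted LLM (OpenAI, Anthropic, ...)."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here.
        return f"[{self.name}] response to: {prompt}"

class DecisionFlow:
    def __init__(self, provider: ModelProvider):
        # The provider is injected, so it is swappable at runtime.
        self.provider = provider

    def explain(self, decision_id: str) -> str:
        return self.provider.complete(f"Explain decision {decision_id}")

flow = DecisionFlow(StubProvider("model-a"))
print(flow.explain("DECLINE-102"))

# Switching vendors is a one-line change, not a rebuild:
flow.provider = StubProvider("model-b")
print(flow.explain("DECLINE-102"))
```

The same dependency-inversion shape extends naturally to agentic frameworks and MCP servers: each is just another adapter behind a stable interface.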
In regulated markets, there’s an additional reason to stay independent from any specific AI provider. Explainability and transparency aren’t optional. Being able to show a regulator exactly why a decision was made, and how the data supported it, matters as much as the decision itself.
Data orchestration is the moat
If there’s one area where Provenir has built a durable competitive advantage, David pointed squarely at data. And he made the point with some feeling:
“I remember working in other organizations — it took ten weeks to do some data integrations. It’s not because people aren’t technically capable. It’s because it needs an established, clean way of doing it.”
Provenir built that clean way of doing it long before David joined the business, and the flexible adapter infrastructure that came from it remains one of its clearest differentiators. The 225+ pre-integrated data sources in the marketplace are part of the story. The more important capability is that customers can build their own integrations directly within the platform, to internal databases, RESTful APIs, LLMs, and agentic services, through a low-code UI, without needing an engineering sprint.
The product decision David flagged as one of the hardest: choosing to stop building new marketplace integrations at scale, because there are higher-priority areas on the roadmap. Knowing when to stop adding and start deepening is genuinely hard, and it doesn’t happen without a clear point of view on what the platform is for.
Real time and batch aren’t in conflict
Most institutions know that real-time decisioning is where they’re headed. Most are still running monthly or weekly batch processes because that’s what their core systems support. Provenir’s position is to bridge that transition rather than force it.
The same decisioning engine handles batch and real-time processing, with a single UI and a single configuration layer. A customer can go live on batch and switch to real time when they’re ready, without rebuilding anything. David illustrated why that matters in practice:
“Imagine you’ve got 10 data calls, and each one takes a second. Running them in series, that’s 10 seconds. Because we’re a mature platform, you can parallelize those processes and make all those data calls at the same time. So you’re making 10 data calls, but they’re all coming back within one second.”
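The arithmetic in that quote is worth making concrete. A minimal sketch using Python's `asyncio`, with a hypothetical `fetch_source` standing in for an external data call: run in series, ten one-second calls take ten seconds; run concurrently, total wall time is roughly the duration of the slowest call.

```python
import asyncio
import time

async def fetch_source(name: str) -> str:
    # Stand-in for a ~1-second external data call (bureau, API, etc.).
    await asyncio.sleep(1)
    return f"{name}: ok"

async def run_in_parallel(sources: list[str]) -> list[str]:
    # gather() starts every call at once; wall time is ~1 second,
    # the slowest single call, not the 10-second sum of all of them.
    return await asyncio.gather(*(fetch_source(s) for s in sources))

sources = [f"source_{i}" for i in range(10)]
start = time.perf_counter()
results = asyncio.run(run_in_parallel(sources))
elapsed = time.perf_counter() - start
print(f"{len(results)} calls completed in {elapsed:.1f}s")
```

Real data calls are I/O-bound, which is exactly the case where this kind of parallelization pays off.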
For use cases that don’t require external data calls at all, the engine handles 10,000 transactions per second at enterprise scale. The underlying principle across all of it: improvements to the core engine benefit every use case built on top of it, simultaneously.
Where investment is going
Two areas are getting the most product development attention through H1 and into H2 this year.
The first is Decision Intelligence. Provenir recently launched a simulation module that lets users compare production data against historical performance before making a change. Coming next are proactive recommendations, where the platform surfaces areas within a customer’s decisioning flow that could be improved, using data and models the customer already has.
“Not just having an end user make a change and ask ‘what was the output?’ — but proactively saying, ‘There are three or four areas within your decisioning flow where you’ve already got the data to improve that decision.’”
That moves the platform from answering questions to generating insight before anyone thinks to ask. Agentic interfaces make those recommendations easy to explore interactively; automated machine learning provides the statistical rigour underneath.
The second area is continued enterprise depth: regulatory controls, security, data protection, and the governance infrastructure that large tier-one banks require before trusting a platform with their most sensitive decisioning workflows. The goal, as David put it, is to be the safe pair of hands that is also the most innovative engine in the room.
Watch the full episode on YouTube or find it on Helen’s LinkedIn newsletter, CxO Spice with Helen Yu.
