
Navigating the Promise and Peril of Generative AI in Financial Services

Mark Collingwood
October 7, 2025

Financial services leaders are being bombarded with AI pitches. Every vendor claims their solution will revolutionise decisioning, slash costs, and unlock untapped revenue. Meanwhile, your competitors are announcing AI initiatives, your board is asking questions, and your teams are already experimenting with ChatGPT and other tools—sometimes without your knowledge.

The pressure to “do something” with AI is intense. But the organisations that rush to deploy generative AI without understanding its limitations are setting themselves up for problems that may not become apparent until it’s too late.

At Provenir, we’ve built AI decisioning capabilities that process over 4 billion decisions annually for financial institutions in 60+ countries. We’ve seen what works, what doesn’t, and what keeps risk leaders up at night. More importantly, we’ve watched organisations make costly mistakes as they navigate AI adoption.

In this article you’ll find a practical assessment of where generative AI delivers real value in financial services, where it introduces unacceptable risk, and how to tell the difference.

Where AI Delivers Value

The efficiency benefits of AI in financial services are tangible and significant. Here's where we've seen AI deliver measurable business impact:

Faster model development and market response: What once took months in model evaluation and data assessment can now happen in weeks, enabling lenders to respond to market changes and test new data sources with unprecedented speed.

Transaction data transformed into intelligence: Advanced machine learning processes enormous volumes of transaction data to generate personalised consumer insights and recommendations at scale—turning raw data into revenue opportunities.

Operational oversight streamlined: Generative AI helps business leaders cut through the noise by querying and summarising vast amounts of real-time operational data. Instead of manually reviewing dashboards and reports, leaders can quickly identify where to focus their attention—surfacing which workflows need intervention, which segments are underperforming, and where action is most likely to drive business value.
These aren’t future possibilities. Financial institutions are achieving these outcomes today: 95% automation rates in application processing, 135% increases in fraud detection, 25% faster underwriting cycles. Meanwhile, GenAI-powered assistants accelerate model building and rapidly surface strategic insights from complex decision data.

The Risks Nobody Talks About

However, our work with financial institutions has also revealed emerging risks that deserve serious consideration:
When AI-Generated Code Contradicts Itself

Perhaps the most concerning trend we’re observing is the use of large language models to generate business-critical code in isolation. When teams prompt an LLM to build decisioning logic without full knowledge of the existing decision landscape, they risk creating contradictory rules that undermine established risk strategies.

We’ve seen this play out: one business unit uses an LLM to create fraud rules that inadvertently conflict with credit policies developed by another team. The result? Approved customers getting blocked, or worse—high-risk applicants slipping through because competing logic created gaps in coverage. In regulated environments where consistency and auditability are paramount, this fragmentation poses significant operational and compliance risks.
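To make the failure mode concrete, here is a deliberately simplified, hypothetical illustration in Python. None of this is real decisioning code—the rule names, fields, and thresholds are invented. It shows how two rules, each sensible to the team that wrote it, can contradict each other for the same applicant.

```python
# Hypothetical illustration only: rule names, fields, and thresholds are invented.

# Credit policy, owned by the credit risk team: a starter product deliberately
# targets thin-file applicants with strong income.
def credit_policy_approves(applicant: dict) -> bool:
    return applicant["monthly_income"] >= 3000

# Fraud rule, generated later by another team prompting an LLM in isolation:
# it blocks all thin files as synthetic-identity risk, silently overriding
# the credit team's starter-product strategy.
def fraud_rule_blocks(applicant: dict) -> bool:
    return applicant["file_age_months"] < 6

applicant = {"monthly_income": 4200, "file_age_months": 2}
print(credit_policy_approves(applicant))  # True -> credit says approve
print(fraud_rule_blocks(applicant))       # True -> fraud says block
# The same applicant is simultaneously approved and blocked. Without a shared
# view of the decision landscape, neither team sees the contradiction.
```

Each rule passes its own team's review in isolation; the conflict only exists at the level of the whole decision flow, which is exactly the level an LLM prompted by one team never sees.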

When Confidence Masks Inaccuracy

LLMs are known to “hallucinate”—generating confident-sounding but factually incorrect responses. In financial services, where precision matters and mistakes can be costly, even occasional hallucinations represent an unacceptable risk. A single flawed credit decision or fraud rule based on hallucinated logic could cascade into significant losses.

This problem intensifies when you consider data integrity and security concerns. LLMs trained on broad, uncontrolled datasets risk inheriting biases, errors, or even malicious code. In an era of sophisticated fraud and state-sponsored cyber threats, the attack surface expands dramatically when organisations feed sensitive data into third-party AI systems or deploy AI-generated code without rigorous validation.

The Expertise Erosion

A more insidious risk is the gradual erosion of technical expertise within organisations that become overly dependent on AI-generated solutions. When teams stop developing deep domain knowledge and critical thinking skills—assuming AI will always have the answer—organisations become vulnerable in ways that may only become apparent during crisis moments when human judgment is most needed.

Combine this with LLMs that are only as good as the prompts they receive, and you have a compounding problem. When users lack deep understanding of what they’re truly asking—or worse, ask the wrong question entirely—even sophisticated AI will provide flawed guidance. This “garbage in, garbage out” problem is amplified when AI-generated recommendations inform high-stakes decisions around credit risk or fraud prevention.

Regulators Are Watching

The regulatory environment is evolving rapidly to address AI risks. The EU AI Act, upcoming guidance from financial regulators, and increasing scrutiny around algorithmic bias all point toward a future where AI deployment without proper governance carries substantial penalties. Beyond fines, reputational damage from AI-driven failures could be existential for financial institutions built on customer trust.

What Successful Institutions Are Doing Differently

Based on our work with financial institutions globally, the organisations getting AI right start with a fundamental recognition: AI is already being used across their organisation, whether they know it or not. Employees are experimenting with ChatGPT, using LLMs to generate code, and making AI-assisted decisions—often without formal approval or oversight. The successful institutions don’t pretend this isn’t happening. Instead, they establish clear AI governance frameworks, roll out comprehensive training programmes, and implement mechanisms to monitor adherence. Without this governance layer, you’re operating blind to the AI risks already present in your organisation.

With governance established, these organisations focus on maintaining human oversight at critical decision points. AI augments rather than replaces human expertise. Business users configure decision strategies with intuitive tools, but data scientists maintain oversight of model development and deployment. This isn’t about slowing down innovation—it’s about ensuring AI recommendations get validated by people who understand the broader context.
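In code, that oversight can be as simple as a routing gate. The sketch below is our own illustration, not any platform's API: the confidence and exposure thresholds are invented, and the point is only that low-confidence or high-stakes recommendations are referred to a person rather than auto-applied.

```python
# Sketch of a human-in-the-loop gate; names and thresholds are illustrative.
def route_recommendation(ai_decision: str, confidence: float,
                         exposure: float) -> str:
    """AI handles routine cases; low-confidence or high-stakes ones go to people."""
    if confidence < 0.80 or exposure > 50_000:
        return "refer_to_analyst"  # a person validates before anything executes
    return ai_decision             # auto-apply only well-understood cases

print(route_recommendation("approve", confidence=0.95, exposure=5_000))
# -> approve
print(route_recommendation("approve", confidence=0.62, exposure=5_000))
# -> refer_to_analyst
```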

Equally important, they refuse to accept black boxes. In regulated industries, explainability isn’t negotiable. Every decision needs to be traceable and understandable. This isn’t just about compliance—it’s about maintaining the ability to debug, optimise, and continuously improve decision strategies. When something goes wrong (and it will), you need to understand why.
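As a minimal sketch of what traceability can mean in practice, consider a decision record that captures the outcome alongside the reasons, model version, and exact inputs behind it. The structure and field names below are an assumption for illustration, not any specific platform's schema.

```python
# Minimal sketch of a traceable decision record; field names are assumptions,
# not any specific platform's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    application_id: str
    outcome: str             # e.g. "approve", "decline", "refer"
    reason_codes: list[str]  # which rules fired and why
    model_version: str       # which model produced the score
    inputs_snapshot: dict    # the exact data the decision saw
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    application_id="APP-1042",
    outcome="decline",
    reason_codes=["DTI_ABOVE_POLICY_MAX", "THIN_FILE"],
    model_version="credit-score-v3.2",
    inputs_snapshot={"dti": 0.52, "file_age_months": 2},
)
# Every decision carries enough context to be replayed, audited, and debugged.
```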

Rather than accumulating point solutions, successful institutions build on unified architecture. They recognise that allowing fragmented, AI-generated code to proliferate creates more problems than it solves. Instead, they use platforms that provide consistent decision orchestration across the customer lifecycle. Whether handling onboarding, fraud detection, customer management, or collections, the architecture ensures that AI enhancements strengthen rather than undermine overall decision coherence.

These organisations also treat AI as a living system requiring continuous attention. AI models need ongoing observability and retraining. Continuous performance monitoring helps identify when models need refinement and surfaces optimisation opportunities before they impact business outcomes. The institutions that treat AI deployment as “set it and forget it” are the ones that end up with the costliest surprises.
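One widely used technique for this kind of observability is population stability monitoring. The sketch below computes a Population Stability Index (PSI), a standard drift metric in credit risk; the bucket shares and alert thresholds here are illustrative, not prescriptions.

```python
# Sketch of drift monitoring via the Population Stability Index (PSI),
# a common credit-risk metric; bucket shares and thresholds are illustrative.
import math

def psi(expected_pct: list[float], actual_pct: list[float]) -> float:
    """Compare a score distribution at deployment vs. today, bucket by bucket."""
    eps = 1e-6  # avoid log(0) for empty buckets
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_pct, actual_pct)
    )

# Share of applicants per score bucket: at model deployment vs. this month.
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current  = [0.05, 0.15, 0.35, 0.25, 0.20]

score = psi(baseline, current)
# Rule-of-thumb thresholds often cited in practice:
#   < 0.10 stable, 0.10-0.25 investigate, > 0.25 significant shift
print(f"PSI = {score:.3f}")  # here ~0.14: the scored population has drifted
```

When a metric like this crosses the investigation threshold, that is the cue to examine inputs and consider retraining—before degraded decisions show up in business outcomes.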

Finally, they maintain control of their data. Rather than sending sensitive data to third-party LLMs, forward-thinking organisations deploy AI solutions within secure environments. This reduces both security risks and regulatory exposure while maintaining full control over proprietary information.

Why Inaction Isn’t an Option

The irony is that many leaders debating whether to “adopt AI” have already lost control of that decision. AI is already being used in their organisations—the only question is whether it’s governed or ungoverned, sanctioned or shadow IT.

Meanwhile, fintech disruptors are leveraging AI to deliver frictionless, personalised experiences that traditional institutions must match. The competitive gap isn’t just about technology—it’s about the ability to move quickly while maintaining control and compliance.

Organisations that succeed will be those that combine AI capabilities with strong governance frameworks, architectural discipline, and deep domain expertise. They’ll move beyond isolated experiments to implement AI in ways that deliver real business value while maintaining the trust and regulatory compliance that financial services demand.

The institutions making smart bets on AI aren’t the ones moving fastest—they’re the ones moving most thoughtfully, with equal attention to capability, transparency and governance.

Find out more about Provenir AI
