The AI paradox: Why more AI models don't equal less fraud


In 2025, fraud surged, with a total of £629.3 million stolen by criminals in the first six months alone and 2.09 million confirmed cases across both authorized and unauthorized fraud.

Across banking, insurance, and government, fraud prevention teams are doubling down on Artificial Intelligence (AI) to combat a new generation of rapidly evolving threats.

Ross Aubrey, Head of Fraud Solutions EMEA at Quantexa.

AI has huge potential: it can spot patterns humans miss, analyze billions of records instantly, and slash false positives.


Yet many organizations are encountering a frustrating reality: deploying more models does not automatically reduce fraud losses.

This tension is what we call the AI Paradox.

What is the AI Paradox?

The paradox lies in the gap between AI’s theoretical potential and its real-world performance within the messy, high-stakes domain of financial crime.

Several key tensions define this disconnect. First, there is a mismatch between data volume and relevance. While total transaction data is massive, confirmed fraud cases are statistically rare, making it difficult to train high-precision models.
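That imbalance can be made concrete with a toy example. The figures and the reweighting scheme below are illustrative assumptions, not a production recipe: with roughly one confirmed fraud per thousand transactions, an unweighted model can reach 99.9% accuracy by labeling everything legitimate, so one common mitigation is to weight the rare class inversely to its frequency.

```python
def class_weights(labels):
    """Return per-class weights inversely proportional to class frequency."""
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n_classes = len(counts)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# A hypothetical batch: 999 legitimate transactions (0) and 1 confirmed fraud (1)
labels = [0] * 999 + [1]
weights = class_weights(labels)
print(weights[1] / weights[0])  # the lone fraud example is weighted 999x more
```

Reweighting is only one option; undersampling the majority class or tuning the decision threshold address the same imbalance from different angles.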

At the same time, fraudsters are adopting AI tools themselves. With Generative AI (GenAI), they can rapidly adapt tactics, automating the creation of deepfake IDs, synthetic documents, and large-scale phishing campaigns.

Speed has become a critical advantage. However, prioritizing speed often leads to black-box models, where little attention is given to explainability. While these models may be effective, decisions that cannot be justified to regulators or investigators quickly become a liability rather than an asset.

Finally, while investment in AI is increasing, the way performance is measured remains a challenge. Flagging millions of suspicious events is relatively easy; the real value lies in prioritizing the small number that require human intervention. Smarter prioritization reduces investigative workload and surfaces the most critical cases.
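The prioritization idea can be sketched in a few lines. The alert records and the fixed investigator capacity below are hypothetical; the point is that instead of forwarding every flagged event, the system ranks alerts by risk score and releases only the top slice for human review.

```python
def triage(alerts, capacity):
    """Return the `capacity` highest-scoring alerts, most urgent first."""
    return sorted(alerts, key=lambda a: a["score"], reverse=True)[:capacity]

alerts = [
    {"id": "A1", "score": 0.31},
    {"id": "A2", "score": 0.92},
    {"id": "A3", "score": 0.57},
    {"id": "A4", "score": 0.88},
]
queue = triage(alerts, capacity=2)
print([a["id"] for a in queue])  # ['A2', 'A4']
```

In practice the score would come from a model and capacity from staffing levels, but the workload reduction comes from exactly this cut-off: investigators see two cases, not four million.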

This challenge is compounded by fragmented data across institutions, often restricted by regulatory and operational silos. Connecting these data sources provides AI with a more complete view, improving decision-making, increasing accuracy, and enabling more efficient workflows.

GenAI: The Great Accelerator

AI may be neutral in principle, but its accessibility to all sides intensifies a fundamental paradox. It has become the ultimate dual-use technology.

For defenders, GenAI accelerates detection, streamlines customer due diligence, and surfaces hidden risks at unprecedented speed. For adversaries, it is being weaponized to craft convincing social engineering attacks, automate bot-driven scams, and generate synthetic identities that erode digital trust.

Unlike regulated organizations, criminals operate without constraints. Tools such as FraudGPT and WormGPT have lowered the barrier to entry, enabling even low-skilled actors to launch sophisticated, cross-border attacks at minimal cost.

At the same time, organizations want to use GenAI to query data and summarize complex cases. Doing so requires robust governance frameworks to mitigate bias, protect privacy, and ensure explainability.

A Cross-Industry Crisis

The impact of GenAI is not confined to a single sector; it represents a systemic shift in how deception is created and scaled.

In banking, it enables the rapid expansion of mule account networks and phishing campaigns, with synthetic identities masking criminal activity.

In insurance, organized fraud rings are fabricating entire claims, complete with medical records and accident imagery, at a level of realism that is increasingly difficult to detect.

Public sector systems are facing similar pressure, as tax and benefits programs are targeted by synthetic personas supported by convincing but entirely fabricated digital evidence.

Overcoming the Paradox

Without context, AI remains siloed, reactive, and vulnerable. To address this, organizations are shifting towards Decision Intelligence (DI), an approach that goes beyond applying AI to incorporating contextual understanding. This enables the detection of relationships and behaviors that traditional models often miss.

By connecting data points, organizations can perform entity resolution across people, companies, and counterparties, creating a unified view of risk. This holistic perspective supports more informed decision-making and helps identify suspicious behavior across multiple touchpoints, including collusive fraud rings and mule networks that would otherwise remain hidden.
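A minimal sketch of that entity-resolution step, assuming toy records with illustrative fields (the schema and names are invented): records that share an email or phone number are merged via union-find, so activity spread across several apparent "customers" surfaces as one actor.

```python
def resolve_entities(records):
    """Group records that share an email or phone into single entities."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # (field, value) -> first record index that used it
    for idx, rec in enumerate(records):
        for key in ("email", "phone"):
            value = rec.get(key)
            if value is None:
                continue
            if (key, value) in seen:
                union(idx, seen[(key, value)])
            else:
                seen[(key, value)] = idx

    groups = {}
    for idx in range(len(records)):
        groups.setdefault(find(idx), []).append(records[idx]["name"])
    return list(groups.values())

records = [
    {"name": "J. Smith",   "email": "js@example.com", "phone": "555-0100"},
    {"name": "John Smith", "email": "js@example.com", "phone": None},
    {"name": "Jane Doe",   "email": "jd@example.com", "phone": "555-0199"},
    {"name": "J Smith",    "email": None,             "phone": "555-0100"},
]
print(resolve_entities(records))  # two entities: three Smith aliases, one Doe
```

Real entity resolution also handles fuzzy matches (typos, transliterations) and weighted evidence, but even exact-match linking of shared identifiers exposes mule networks that per-account views miss.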

Interoperability further strengthens this approach by integrating existing systems with third-party data, closing the gaps that fraudsters exploit.

Crucially, context also addresses the “black box” problem. By providing the reasoning behind alerts, AI becomes more transparent and explainable for investigators and regulators. This builds trust, supports wider adoption, and strengthens the effectiveness of AI in fraud prevention.
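One simple way to surface that reasoning is "reason codes": each rule that contributes to an alert records a plain-language explanation, so investigators see why a transaction was flagged rather than a bare score. The rules and thresholds below are hypothetical, not any vendor's actual logic.

```python
# Each rule pairs a check with the explanation shown to the investigator.
RULES = [
    (lambda t: t["amount"] > 10_000,      "Amount exceeds 10,000 threshold"),
    (lambda t: t["country"] != t["home"], "Transaction outside home country"),
    (lambda t: t["new_payee"],            "First payment to this payee"),
]

def explain(txn):
    """Return the list of human-readable reasons that fired for this transaction."""
    return [reason for check, reason in RULES if check(txn)]

txn = {"amount": 12_500, "country": "LT", "home": "GB", "new_payee": True}
print(explain(txn))  # all three reasons fire for this transaction
```

Model-based scores can be given the same treatment with feature-attribution methods, but the principle is identical: every alert ships with its justification.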

From Paradox to Progress

The question is no longer if AI will be used, but who will use it more effectively. Simply layering more AI on top of existing systems will not defeat fraudsters; they have access to the same tools.

On its own, AI is just a tool. Combined with contextual insight and network-level detection, it becomes a defense. To outpace fraud, organizations must move beyond isolated models and start seeing the network: how entities connect, interact, and evolve over time.

This means integrating AI with richer data and broader context to reduce the blind spots and vulnerabilities that fraudsters exploit.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit


