AI is scaling a billion-dollar fraud problem, and you’re the victim


Ad fraud isn’t a fringe issue; it’s plagued the industry for years, having already cost advertisers tens of billions of dollars.

In fact, according to 2026 research on the subject, estimated global losses due to ad fraud topped $32.6 billion last year alone, with analyzed traffic carrying an average fraud rate of 4.81% (for some ad networks, observed fraud rates ran as high as 21.8%).
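To make those percentages concrete, here is a back-of-the-envelope sketch. The fraud rates come from the study cited above; the $1M annual budget is a hypothetical figure chosen for illustration:

```python
# Back-of-the-envelope estimate of ad spend lost to invalid traffic.
# Fraud rates are from the 2026 study; the budget is hypothetical.
def estimated_fraud_loss(spend: float, fraud_rate: float) -> float:
    """Return the portion of spend attributable to fraudulent traffic."""
    return spend * fraud_rate

annual_spend = 1_000_000  # hypothetical $1M annual ad budget

avg_loss = estimated_fraud_loss(annual_spend, 0.0481)   # average case
worst_case = estimated_fraud_loss(annual_spend, 0.218)  # worst observed

print(f"Average-case loss: ${avg_loss:,.0f}")   # ~$48,100
print(f"Worst-case loss:   ${worst_case:,.0f}") # ~$218,000
```

At the observed average rate, roughly one dollar in every twenty never reaches a real person.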

Monique Tison

Global PR & Marketing Specialist at Spider Labs.

At the same time, AI-driven automated campaign types, such as Google’s Performance Max and Meta’s Advantage+, have quickly become the norm. And why wouldn’t they?


With increasing pressure on marketing teams to deliver more results, faster, with fewer people, these automated features promise simplicity, efficiency, and scale.

Instead of one overworked digital marketer hand-picking keywords, ad placements, and budget amounts for each campaign - then re-adjusting the whole thing after a few weeks of observation - they can leave all of that tedium to the algorithm.

Simply set the desired objectives, and the AI analyzes various data signals in real time to optimize the campaign automatically. Removing this burden of manual input should, in theory, improve productivity.

Loss of visibility and control

In practice, however, the relinquishing of human intervention comes with a grave tradeoff: the loss of visibility and control.

The more campaign delivery is automated (that is to say, the more decisions are made by AI tools), the less room there is for human oversight.

Where ads appear, what kind of inventory is being rewarded, how much budget is allocated…The Machine™ decides all. Choices that were once dictated by digital marketing managers and ad specialists are now being made automatically in a metaphorical “black box” without their professional, human input.

And because these warm-blooded marketing professionals are no longer directly involved in running and optimizing their ad campaigns, they have limited means to verify whether or not the decisions made - or more importantly, the data those decisions are based on - are actually sound.

Are those ad impressions coming from real users, or bots? Are those conversion statistics the result of high-intent, marketing qualified leads? Or click spamming? What dark corners of the internet are the banner ads for your non-profit childcare service popping up on? Not even The Machine™ can say.

What’s especially nefarious is that the consequences of businesses’ blind trust in these AI-run ad campaigns aren’t immediately apparent.

More often than not, it’s a “boiling frog” style situation of gradually compounding repercussions in the form of weakening lead quality, inconsistent ad performance, and mysteriously mis-allocated budget.

By the time any concerning patterns become obvious enough to notice, it’s likely too late; a large portion of budget has been irretrievably lost, and the ad delivery algorithm, having already absorbed fraudulent signals, will continue to optimize towards harmful outcomes.

Made-for-advertising

One place this effect shows up most clearly is in the expansion of MFA ad inventory.

“Made-for-Advertising”, or MFA sites for short, are low-value websites built mainly to generate ad impressions rather than provide meaningful content, and they’re nothing new.

Anyone who remembers the late 1990s and early 2000s will recall the scourge of ad-heavy, low-quality websites cluttered with pop-ups and trojan viruses that littered the internet. Fast-forward to 2026, and they haven’t gone anywhere. If anything, they’ve proliferated thanks to the modern accessibility of generative AI.

Content that once took (relative) time and effort to produce can now be generated in bulk - cheaply and quickly.

As a result, low-value inventory (or “advertising space” for those unfamiliar with the jargon) is spreading much faster than before.

The scale of that growth is already visible: the same 2026 study cited earlier observed in their dataset that placements on MFA sites surged 14 times year over year, while estimated losses tied to those placements rose 533%.

This is significant because, as stated before, AI-automated advertising platforms do not evaluate quality in the same way humans do: they simply respond according to the signals they receive.

Google or Meta’s machine learning algorithms currently can’t reliably discern whether a website contains well-crafted, useful content that draws organic users, nor can they tell if those ad interactions are the result of genuine human interest or malicious bots.

As long as an ad placement generates enough impressions, clicks, and conversions - all behaviors that ad fraud can fabricate - the system views it as a success.

So while an AI-run ad campaign might look good on the surface according to the standard KPIs, in reality it might be simply responding to fraudulent signals that do little to support business outcomes.

Wrong outcomes

The risk that marketing and advertising professionals currently underestimate is this: AI campaign tools learn from the inputs they receive.

If those inputs include invalid traffic, click spam, or misleading conversion activity, then the system won’t just make one bad decision; it will continue optimizing toward the wrong outcomes, compounding the damage over time.

Prior to campaign automation, the consequences of ad fraud were largely wasted impressions and budget leakage for that specific campaign run. In AI-driven environments, however, repercussions can now shape future campaign decisions as well.

Budgets get shifted based on noise, poor placements get rewarded, and low-quality traffic can influence targeting and bidding.

Feed the algorithm enough fraud, and it will start actively optimizing for even more fraud - all the while deprioritizing valid signals from actual potential consumers. Furthermore, thanks to the efficiency of AI and automation, all of this happens far more quickly and quietly than anyone is likely to notice.
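This feedback loop can be illustrated with a toy simulation. This is not any real platform's algorithm - just a naive optimizer that reallocates budget toward whichever placement reports the most clicks, with hypothetical click-through and bot rates:

```python
import random

# Toy model of a click-chasing optimizer (not any real platform's
# algorithm). The MFA site has worse genuine engagement but injects
# fabricated bot clicks; "bot_rate" and "human_ctr" are hypothetical.
random.seed(0)

placements = {
    "quality_site": {"budget": 0.5, "human_ctr": 0.02, "bot_rate": 0.0},
    "mfa_site":     {"budget": 0.5, "human_ctr": 0.002, "bot_rate": 0.15},
}

def simulate_round(p, impressions=10_000):
    """Observed clicks = genuine human clicks + fabricated bot clicks."""
    shown = int(impressions * p["budget"])
    human = sum(random.random() < p["human_ctr"] for _ in range(shown))
    bots = int(shown * p["bot_rate"])
    return human + bots

for _ in range(5):
    clicks = {name: simulate_round(p) for name, p in placements.items()}
    total = sum(clicks.values()) or 1
    # Naive reallocation: next round's budget share follows click share.
    for name, p in placements.items():
        p["budget"] = clicks[name] / total

print({name: round(p["budget"], 2) for name, p in placements.items()})
```

Within a handful of rounds, the fabricated clicks win the MFA site the overwhelming majority of the budget, while the genuinely engaging site is starved - exactly the compounding misallocation described above.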

No, this doesn’t mean businesses should avoid using P-Max or Smart Bidding or any of the other growing number of AI campaign delivery systems. They aren’t going anywhere, and they undeniably have many merits over old-school, manual campaign management.

Instead, this era calls for a new level of vigilance; AI and automation cannot be left to operate without human scrutiny. Gone are the days when dashboard KPIs and vanity metrics could be trusted.

Advertisers now need to know where those conversions are coming from, which placements are driving them, and whether the traffic behind them is genuine. Ad performance can look perfectly strong in a dashboard despite being overrun by ad fraud.
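As a starting point, even simple placement-level sanity checks can surface suspicious signals before the algorithm absorbs them. The thresholds and field names below are hypothetical, and real verification belongs with a dedicated ad-fraud detection tool:

```python
# Minimal placement-level sanity check (illustrative only; thresholds
# and field names are hypothetical, not from any real ad platform).
def flag_suspicious(placements, max_ctr=0.10, min_session_secs=2.0):
    """Flag placements with implausibly high CTR or near-zero dwell time."""
    flagged = []
    for p in placements:
        ctr = p["clicks"] / max(p["impressions"], 1)
        if ctr > max_ctr or p["avg_session_secs"] < min_session_secs:
            flagged.append(p["name"])
    return flagged

report = [
    {"name": "news-site.example", "impressions": 50_000, "clicks": 600,
     "avg_session_secs": 45.0},
    {"name": "mfa-site.example", "impressions": 8_000, "clicks": 1_900,
     "avg_session_secs": 0.4},
]
print(flag_suspicious(report))  # → ['mfa-site.example']
```

A 23% click-through rate paired with sub-second sessions is the kind of pattern a dashboard total will happily average away - which is why placement-level inspection matters.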

If the promise of AI-driven advertising rests on the quality of the signals it’s fed, then protecting the integrity of that data is imperative to achieving actual business impact.

AI isn’t creating ad fraud from scratch - but it is making a long-standing problem more scalable, harder to detect, and potentially more expensive to ignore.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit


