I think I can spot an AI fake, but the latest expert research suggests I'm wrong — here's why
AI images and videos are getting harder to spot, but there is hope
AI images and videos are getting harder to spot — and that's only going to get worse. Most people either don't realize how convincing AI-generated content has become, or they're confident they could identify it if they saw it. Research suggests both groups are mistaken.
One study found that AI-generated content is now so good that people can distinguish it from human-authored content only 51% of the time, barely better than flipping a coin.
We've already seen what's possible. For example, a fake robocall mimicking Joe Biden's voice told voters to stay home in some regions of the US in 2024. AI-generated images of public figures have been used to spread political misinformation and run financial scams.
Non-consensual, image-based sexual abuse created with AI has become a serious and growing harm. And it's not just individuals who are at risk here. Businesses are being targeted too. Deepfake-enabled fraud is projected to cost $40 billion globally by 2027. The stakes here are very real, and they're only going to keep rising.
Most of the best solutions we have right now focus on the technology itself. Think better detection tools, watermarking and regulation. All of that matters and, fingers crossed, it’ll advance alongside AI. But a growing body of research is asking a different question: what if we focused on people instead? What if, rather than building better defenses, we could “inoculate” people before they encounter any AI-generated content at all?
Psychological inoculation
We all know that vaccines introduce a weakened version of a pathogen to prime the immune system. Well, psychological inoculation works in much the same way. It exposes people to information about how misinformation works before they encounter the real thing. That way, they're primed to question it when they do.
I’ve been fascinated by a study from the University of Iowa, which put this idea of psychological inoculation to the test with political deepfakes. The researchers split participants into three groups. One received a simple text-based warning explaining how deepfakes work and what to look out for. Another group played an interactive game that challenged them to spot deepfakes. A third, the control group, received nothing. All three groups then watched deepfake videos showing either Joe Biden or Donald Trump making fabricated statements.
The results showed that both the text warning and the game made participants less likely to believe what they'd seen and more likely to want to investigate further to find the truth. Interestingly, the two approaches worked roughly equally well: a brief written explanation beforehand was about as effective as an interactive game.
The simplicity of that finding is as encouraging as it is interesting. It suggests the barrier to helping people might be much lower than we assume.
The power of simple approaches
The University of Iowa study is part of a wider body of research that has been building for a while. Back in 2022, for example, researchers showed people short, animated clips teaching them to recognize manipulation techniques such as emotional language and false dichotomies. The clips ran as YouTube ads shown to more than 5 million users in the US, and viewers were afterwards better able to recognize those techniques, regardless of political affiliation or education level.
Similar research has been conducted on social media. In the UK, more than 300,000 Instagram users were targeted with a short, 19-second video about emotionally manipulative content. Those who watched it were more likely to identify manipulation in a headline and more likely to click on links to find out more.
A 2025 study also found that just five minutes of training, in which people were shown how to spot AI-generated faces, raised their detection accuracy afterwards. Games are being developed to make that kind of training more enjoyable: Cat Park is a gamified inoculation tool designed specifically for spotting image-based misinformation. Players learn to identify common manipulation techniques and, after playing, rated misleading content as less credible.
The technical side
Beyond human detection, technical solutions are being developed. Google DeepMind's SynthID embeds watermarks into AI-generated images and video. Similarly, Meta has open-sourced a tool for video called Video Seal. These solutions aren’t foolproof, but it’s good to know that this sort of infrastructure for provenance-checking is being built by some of the biggest tech companies, at scale.
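To make the watermarking idea concrete, here's a deliberately toy sketch in Python: it hides a known bit pattern in an image's least significant bits, then checks for it later. This illustrates only the general principle; SynthID and Video Seal use learned watermarks designed to survive compression and editing, and everything here (the bit pattern, the stand-in image) is an arbitrary assumption for illustration.

```python
# Toy illustration of invisible watermarking: hide a known bit pattern in the
# least significant bits of an image's pixels, then detect it later. Real
# systems like SynthID use learned, edit-resistant watermarks; this sketch
# only demonstrates the general principle.
import numpy as np

# An arbitrary 8-bit signature, purely for illustration.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the watermark into the least significant bits of the first pixels."""
    out = pixels.copy()
    flat = out.reshape(-1)
    n = len(WATERMARK)
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK  # clear each LSB, then set it to the watermark bit
    return out

def detect(pixels: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present in the LSBs."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[:len(WATERMARK)] & 1, WATERMARK))

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    print(detect(image), detect(embed(image)))  # almost certainly False, then True
```

The weakness is obvious: a single round of recompression or resizing scrambles least-significant bits, which is exactly why production watermarks are trained to survive those edits.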
Then there’s the C2PA standard, which attaches metadata to content, working like a provenance label. This is currently backed by tech companies like Adobe, Microsoft and Google. For anyone who wants to check content as they browse, there are free tools like Hive AI Detector and DeepFake-o-Meter.
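If you're curious what that provenance label looks like on disk, you can check for it yourself. In JPEG files, C2PA manifests travel in APP11 (JUMBF) segments, so a rough presence check needs nothing more than a scan of the file's segment markers. The sketch below is a heuristic written for illustration: it only detects that a manifest seems to be there, and actually validating the signed manifest requires proper tooling such as the open-source c2patool.

```python
# Rough heuristic: does this JPEG carry C2PA provenance metadata?
# C2PA manifests are stored in JPEG APP11 (JUMBF) segments. This only checks
# for their presence; validating the signed manifest needs a real verifier.
import struct
import sys

def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):       # not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                    # lost sync with segment markers; give up
            break
        marker = data[i + 1]
        if marker == 0xDA:                     # start-of-scan: header segments are over
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                        # APP11 segment carrying JUMBF/C2PA data
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_segment(sys.argv[1]))       # usage: python check_c2pa.py photo.jpg
```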
It’ll also be interesting to see what happens with the EU AI Act in August of this year, when it will legally require AI-generated content to be labelled. That's a huge step, but one that could take a long time to enforce.
How to spot a deepfake
Psychological inoculation research is promising, but it hasn't been widely rolled out yet. So what should you actually look for?
The classic tells, like strange hands with too many fingers and nonsensical text on signs and posters, are becoming less reliable as AI improves. But there are several other things that still give content away.
In video, pay close attention to lips. A lag between what you're seeing and hearing, or a mouth that doesn't fully open and close, is a strong signal something's wrong. In images, look for things that defy physics: fabric behaving weirdly, bag straps that go nowhere, or faces that appear unnaturally smooth or symmetrical.
But rather than playing Where's Wally? every time you see a photo or video, the most powerful tool is context. Ask whether what you're seeing makes real-world sense. Who shared this and why? Would this person actually say this? Does it seem designed to make you angry or outraged? A quick reverse image search or fact-check takes seconds.
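That reverse image search, incidentally, relies on a principle you can replicate locally. Search engines match images using techniques like perceptual hashing: compact fingerprints that barely change when an image is resized or recompressed. Here's a minimal sketch using the third-party Pillow and imagehash Python packages, with placeholder filenames and an assumed distance threshold:

```python
# Compare two images by perceptual hash: a small Hamming distance suggests the
# candidate is a resized or recompressed copy of the original.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

def looks_like_copy(original_path: str, candidate_path: str, threshold: int = 8) -> bool:
    h_original = imagehash.phash(Image.open(original_path))
    h_candidate = imagehash.phash(Image.open(candidate_path))
    distance = h_original - h_candidate  # Hamming distance between 64-bit hashes
    return distance <= threshold         # 0 = identical; small = near-duplicate

# Placeholder filenames for illustration:
# print(looks_like_copy("known_original.jpg", "suspicious_repost.jpg"))
```

A match against a known original doesn't prove anything on its own, but it's the same quick triage a reverse image search performs at scale.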
None of this is a complete solution. Detection tool accuracy is dropping as generators improve, and several of the studies cited here flag that the long-term durability of inoculation effects remains unclear.
Which is why the best advice isn't really about spotting specific tells at all. It's about building a general habit of scepticism. Slow down before you share. Double-check things. Ask who benefits from you believing something. As the research suggests, sometimes just stopping to think is enough.
