Can we really tell what’s made by Sora 2? 10 tips to help spot AI-generated video

Sora 2 (Image credit: OpenAI)

Sora 2 is here. OpenAI’s upgraded text-to-video model can generate short, super realistic video clips from a prompt. While we already knew AI video was improving fast, Sora 2 crosses a new threshold: a lot of its output looks genuinely real.

Whether that excites or terrifies you probably depends on how you feel about AI in general. But this isn’t a debate about whether it’s good or bad. It’s about something more practical: Can we actually tell when a video was made by Sora 2? The answer is complicated.

Because no matter how “good” you think you are at spotting AI, you can still be fooled. Even people who work in this space get tricked. That doesn’t make you stupid; it just means the technology has evolved.

You might wonder why it matters so much. Isn’t AI just a bit of fun? Well, sure. Today, it’s a fake AI cat, and it’s mostly harmless. But if we don’t build the skill of spotting AI now, tomorrow it could be a fake politician, a fake arrest, or a fake friend.

The problem is that Sora 2 has quietly fixed most of the old AI giveaways – the blurs, weird hands and impossible physics that used to expose AI instantly. It isn’t perfect, but it’s dramatically better at all of them. Still, there are cracks if you know where (and how) to look. Spotting Sora 2 videos isn’t just about what you see; it’s about how you think.

1. Look behind the subject

Sora 2 is great at making the main subject look convincing. However, the background can still give it away. Think buildings with impossible proportions, walls that shift, lines that don’t quite meet, and background characters doing bizarre things. We’re naturally drawn to the person or animal in the foreground, but with AI, the truth might be hiding just behind them.

2. Pay attention to physics

Real life has rules that AI doesn’t always play by. Watch for objects that suddenly appear or vanish, lighting that doesn’t match the environment, shadows that fall the wrong way, reflections that show nothing, or motion that feels a little too smooth. Even when the overall aesthetic looks right, physics glitches are still one of the clearest tells.

3. Notice movement that feels “off”

Some people get an uncanny valley feeling when they look at fake humans in AI images and videos, and it’s often down to creepy movement: people who blink too much, smile too smoothly or move like jerky puppets. But even non-human things can glitch – static objects that gently wobble, hair that blows in non-existent wind, fabric that moves for no reason. AI loves adding tiny animations everywhere. It makes the world feel alive, but in the wrong way.

4. Look for blurs, noise and smudges

Sora 2 is impressive, but compression still gets weird. You’ll sometimes see grainy patches, warped or melted textures, smudged areas where something was edited out, or overly clean spots that look airbrushed. This is exactly why bodycam-style or low-res footage is already so popular on Sora 2 – and so dangerous. It naturally looks messy, which makes all of these flaws harder to spot, and Sora 2 can blend into that aesthetic almost perfectly.

5. Tap into your emotions

AI content is often engineered to provoke a strong emotion – shock, awe, sadness or anger. It doesn’t matter which, as long as you react and share. The problem is that when you’re emotional, you’re far less likely to stop and question what you’re seeing. If a video makes you instantly furious or deeply moved, that’s your cue to pause. Manipulation is easier when you’re overwhelmed.

6. Be wary of watermarks

Some Sora 2 videos include a subtle “Sora” watermark that moves through the frame. Perfect, right? Problem solved? Not so fast. Relying on watermarks is risky. People can crop them out, blur them or even add fake ones to make AI content look more authentic. And when a watermark has been removed, there are usually clues, like odd aspect ratios, black bars or awkward cropping.

7. Scrutinize the account, not just the video

As content becomes harder to verify, the source becomes even more important. Always check the account sharing it and look for obvious red flags. Is it a random viral page built on shocking or sensational clips? If so, it’s much more likely to be AI. Do they ever include sources or context in the caption? If not, that’s another clue. The less transparent the account is, the more cautious you should be.

8. Check the comments

Comments are often the first place someone screams “AI!”, so they’re worth checking. But be careful. Creators can delete comments, filter out words like “fake” or “AI,” or turn comments off entirely. So just because no one is questioning the video doesn’t mean everyone believes it. Sometimes it just means no one is allowed to question it.

9. Cross-check with reality

If it’s genuinely news, then other reputable outlets are going to be covering it, so check them. Most newsrooms spend a lot of time authenticating video footage, checking where it’s come from, contacting sources, tracing the original upload and digging into the metadata. Whole teams are trained to verify video, so if it only exists in a single viral TikTok, be skeptical.
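
If you want to dig a little deeper yourself, the file’s metadata is one place to start. Below is a minimal Python sketch, assuming you’ve downloaded the clip (the filename clip.mp4 here is just a placeholder) and have ffprobe, part of the free FFmpeg toolkit, installed. Stripped or generic tags prove nothing on their own – platforms routinely re-encode uploads – but an encoder string or creation date that contradicts a video’s claimed origin is a useful clue.

import json
import subprocess

def inspect_metadata(path: str) -> None:
    # Ask ffprobe (part of FFmpeg, installed separately) for the
    # container and stream info as JSON.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)

    # Container-level tags: creation time, encoder, handler names, etc.
    # Absent tags aren't proof of anything; platforms often strip them.
    for key, value in info.get("format", {}).get("tags", {}).items():
        print(f"{key}: {value}")

inspect_metadata("clip.mp4")  # hypothetical filename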

10. Slow down

This is probably the most important skill. We see so much content, scrolling and sharing at speed, and that’s exactly when we get caught out – especially by emotionally charged videos. Slowing down gives your brain time to spot the cracks.

And go easy on yourself. You won’t catch every AI video. No one will. But learning to question what we see – regularly, and with curiosity – is the new media literacy. It’s not just about avoiding embarrassment over a fake video. As AI and reality blur more and more, this skill won’t just be useful; it’ll be essential.


Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
