5 things AI does in movies and TV I'm still waiting on

The Mitchells vs. the Machines key art. Mochi the pug sits on the front of a car as the Mitchell family have smiles on their faces as robots blast at them in the background.
(Image credit: Netflix)

AI has appeared in movies and TV for more than a century, portrayed as everything from a snarky best friend to a silent technical aide, or even one's true love, at least when it's not a smothering caretaker, a murderous military brain, or a tyrannical deity.

Still, whether benevolent, malicious, or neutral, fictional AI has long been associated with certain abilities that, for better or worse, what we call AI today simply can't pull off yet, or at least not well.

I get that Hollywood's version of AI doesn't need to conform to the technicalities of reality when it's focused more on plot and utility. Still, I can hope the (human-friendly) abilities of AI in movies and TV below become real someday, even if they're unlikely to be as smooth as their onscreen counterparts for years or even decades.

AI that reads the room

Her

(Image credit: Warner Bros.)

In movies, AI isn't just a chatbot that responds to direct requests and sticks to conversation; it's woven into your whole environment. And instead of having to be taught what tone to take and a few details about your life, it knows what you want before you even say anything.

Sometimes that's slightly sinister in a movie or show, but for Samantha in Her or Jarvis in Iron Man and other Marvel movies, the AI reads your very mood to play with the lights, curtains, music, and more. The AI can tell you're tired at the end of the day and want a mellow playlist and dimmer lighting, or that it should cue up some punk music videos when you're getting hyped for a night out. Admittedly, Samantha causes an existential crisis and Jarvis gets decanted into Vision, but the principle is the same.

In real life, companies are exploring this concept, experimenting with AI that predicts what you want based on your past requests for lighting, temperature, and even kitchen appliances such as coffee makers. However, it's far from the deeply empathic AI depicted in movies that adjusts your environment perfectly. No AI hears your footsteps and thinks, “He needs Billie Holiday and a cappuccino."

Instant recall

The Mitchells vs. the Machines key art. Mochi the pug sits on the front of a car as the Mitchell family have smiles on their faces as robots blast at them in the background.

(Image credit: Netflix)

In every spy thriller ever made, there’s an AI that zooms in on a grainy photo, makes an impossible enhancement, and immediately opens a window with a whole file on who the person is, where they've been, and a range of other details. The same goes for the unfortunately not-so-benevolent PAL from The Mitchells vs. the Machines.

Now, I’m not asking ChatGPT to go full surveillance state. But in these stories, AI doesn’t just recognize faces; it connects dots across time, places, and context. It doesn’t just say, “That’s a cat.” It says, “That’s your cat, who just knocked over your fruit bowl.”

Image recognition is making huge strides in AI tools. They can now identify plants, skin conditions, and celebrities, but it takes time and isn't exactly perfect. In theory, a privacy-respecting, user-controlled version of this would be great. I'd love to be able to show my AI a photo of a crowd and ask, “Who is that person I met at the wedding last September who liked ska?” and have it just know. Beats Where's Waldo.

Real-time translation

Star Trek: The Next Generation

(Image credit: CBS)

If futuristic movies share one common convenience attributed to AI, it's instant translation. Language barriers are merely a temporary inconvenience, usually resolved by a small glowing earpiece or an invisible throat implant running an AI that translates everything in real time with zero latency and 100% accuracy, even if you’re whispering underwater or mid-argument with a new species.

In Star Trek: The Next Generation, the universal translator works instantly across species mid-conversation, while the movie Arrival has a mathematically poetic linguistics toolset. Regardless of how the fictional AI works, the people using it always understand each other.

In reality, we are surprisingly close to this in at least some circumstances. Your phone can understand and translate many languages with different apps and built-in tools. Still, it's far from perfect, and it works more like a walkie-talkie that has to translate and repeat what you and your conversation partner say in your respective languages. It's great, but we’re still not at that magical moment where you walk into a crowded train station in Prague and instantly understand every announcement, gesture, and idiom without switching apps or fumbling for a signal.

Predictive visionaries

Devs series

(Image credit: Hulu via Twitter)

Fictional AIs often know what’s coming next – and not just because they read the script. In shows like Westworld or Person of Interest, the AI anticipates sabotage, plots revenge, or gently nudges humans away from bad choices with just the right amount of foreshadowing. In Devs, the quantum-powered AI doesn’t just predict events, it renders them with cinematic precision, allowing its creators to watch future conversations unfold before they happen. It’s part omniscient assistant, part narrative guide.

In real life, we’ve got data-driven models that can forecast weather, track supply chain issues, and nudge you about potential overdraft fees. But they’re not conversationally predictive. ChatGPT might help you plan a trip, but it won't say, “By the way, based on your last three vacations, you’re probably going to hate this itinerary by day three.”

I don't think a fully rendered video of a possible future is rolling out this year, but it's a fascinating idea that an AI with enough of the right data might help you see what's coming and plan accordingly, even if only for traffic and weather.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
