Would you buy your child a ChatGPT‑powered Barbie? I’m queasy at the prospect of a real‑life Small Soldiers scenario

Small Soldiers
(Image credit: Getty Images)

Mattel is partnering with OpenAI to build AI‑powered toys, which might lead to some amazing fun, but also sounds like the premise for a million stories of things going wrong.

To be clear, I don't think AI is going to end the world. I've used ChatGPT in countless ways, including as an aid for parenting activities. AI has helped me brainstorm bedtime stories and design coloring books, among other things. But that's me using it, not opening it up to direct interaction with children.

The official announcement is very optimistic, of course. Mattel says it’s bringing the “magic of AI” to playtime, promising age‑appropriate, safe, and creative experiences for kids. OpenAI says it’s thrilled to help power these toys with ChatGPT, and both companies seem intent on positioning this as a step forward for playtime and childhood development.

But I can’t help thinking of how ChatGPT conversations can spiral into bizarre conspiracy theories, except suddenly it's a Barbie doll talking to an eight-year-old. Or a GI Joe veering from positive messages about how "knowing is half the battle" to pitching cryptocurrency mining because some six‑year‑old heard the word “blockchain” somewhere and thought it sounded like a cool weapon for the toy.

As you might have noted from the top image, my first thought was of the film Small Soldiers, the corny 1998 classic in which a toy-company executive decides to save money by installing military-grade AI chips in action figures, and the toys end up staging guerrilla warfare in the suburbs. It was a satire, and not a bad one at that. But, as over-the-top as that outcome might be, it's hard not to see the glimmer of chaotic potential in installing generative AI in the toys children may spend a lot of time with.

I do get the appeal of AI in a toy, I really do. Barbie could be more than just a doll you dress up; she could be a curious, clever conversationalist who can explain space missions or play pretend in a dozen different roles. Or you could have a Hot Wheels car commenting on the track you built for it. I can even picture AI built into Uno as a deckpad that actually teaches younger kids strategy and sportsmanship.

But I don't think generative AI models like ChatGPT should be used by kids directly. They may be pared down for safety's sake, but at a certain point that stops being AI and becomes a fairly robust set of pre-planned responses. And that's what a kids' toy should be: something that avoids the weirdness, hallucinations, and moments of unintended inappropriateness that adults can brush off but kids might absorb.

Toying with AI

Mattel has been at this a long time and generally knows what it is doing with its products. It's certainly not to the company's advantage to have its toys go even slightly haywire. Mattel says it will build safety and privacy into every AI interaction and promises to focus on appropriate experiences. But “appropriate” is a very slippery word in AI, especially when it comes to language models trained on the internet.

ChatGPT isn’t a closed-loop system built for toys, though. It wasn’t designed specifically for young kids. And even when you train it with guidelines, filters, and special voice modules, it’s still built on a model that learns and imitates. There’s also a deeper question: what kind of relationship do we want kids to have with these toys?

There’s a big difference between playing with a doll and imagining conversations with it, and forming a bond with a toy that independently responds. I don’t expect a doll to go the full Chucky or M3gan, but when we blur the line between playmate and program, the outcomes can get hard to predict.

I use ChatGPT with my son in the same way I use scissors or glue: a tool for his entertainment that I control. I’m the gatekeeper, and AI built into a toy is hard to monitor that way. The doll talks. The car replies. The toy engages, and kids may not notice anything amiss because they don't have the experience to spot it.

If Barbie’s AI has a glitch, if GI Joe suddenly slips into dark military metaphors, if a Hot Wheels car randomly says something bizarre, a parent might not even know until it’s been said and absorbed. If we’re not comfortable letting these toys run unsupervised, they’re not ready.

It’s not about banning AI from childhood. It’s about knowing the difference between what’s helpful and what’s too risky. I want AI in the toy world to be very narrowly constrained, the way a TV show aimed at toddlers is carefully designed to be appropriate. Those shows almost never go off script, but AI's power is in writing its own script.

I might sound too harsh about this, and goodness knows there have been other tech toy scares. Furbies were creepy. Talking Elmo had glitches. Talking Barbies once had sexist lines about math being hard. All issues that could be resolved, except maybe the Furbies. I do think AI in toys has potential, but I'll be skeptical until I see how well Mattel and OpenAI navigate the narrow path between barely using AI at all and giving it so much freedom that it becomes a bad virtual friend for your child.


Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
