How to keep your kids safe in this AI-powered world


Many people think of AI as asking ChatGPT for dinner ideas or watching a viral video of talking animals. But in a very short time, the technology has moved far beyond that. It’s now embedded in many parts of daily life, and it’s already presenting serious problems for children and young people – in some cases with tragic consequences.

AI is in your phone, your child’s apps, their games, their search tools, and increasingly in the places they turn to for help or connection. And while some uses are harmless, others are risky, manipulative, or simply too powerful for a young person to navigate alone.

Think of this guide as a starting point. We’ll cover a few of the biggest concerns, what experts say needs to change, and the practical steps parents can take today.

What are the biggest concerns?

Before anything else, experts say the core issue is simple: most parents don’t realize how deeply AI is already woven into everyday life.

“Parents do not fully understand the technologies that are being developed,” Genevieve Bartuski, a psychologist and consultant specializing in ethical AI and the psychology behind digital systems, tells me. “Many of them are worried about social media and content on the internet, but don’t understand how pervasive AI has become.”

The best starting point is accepting that even the most tech-confident adults didn’t grow up with anything like this. The pace of change has been fast, which means the risks aren’t always easy to spot, and the harms involved can look very different from the social media challenges we already know.

“It’s difficult to single out just one concern,” Tara Steele, Director at the Safe AI for Children Alliance, says.

The scale of the issue is echoed by Andrew Briercliffe, a consultant specializing in online harms, trust, and safety. “We have to remember AI is a HUGE space, and can cover everything from misinformation, to CSAM (Child Sexual Abuse Material) and everything in-between,” he says.

But even so, there are a few clear areas that the experts are most concerned about.

Chatbots

Chatbots are always available, rarely moderated to a standard that’s appropriate for children and young people, and they’re engineered to sound confident and caring. It’s this combination that experts believe is creating a major risk.

Kids are turning to them for all sorts of reasons, just like we know adults do. This includes emotional support, advice, and, increasingly, mental health help. “Young people are resorting to them instead of seeking professional help and guidance,” Briercliffe says.

Because there are no real guardrails in place, and because we know these systems can confidently present inaccurate information, parents often have no idea what is being said to their child in these conversations.

What is a chatbot?

A chatbot is an AI tool that you can talk to in everyday language. You type something in, and it responds as if you’re messaging a person. Tools like ChatGPT, Gemini, and Claude are designed to sound friendly, natural, and helpful.

“Several studies have shown that it is very common for chatbots to give children dangerous advice,” Steele adds. This can include encouragement of extreme dieting or urging secrecy when a child says they want to confide in a teacher or parent.

The consequences of these kinds of conversations can be devastating. “We now have many documented cases where children using these tools were encouraged to harm themselves, and there are ongoing legal cases in the US with strong evidence suggesting that chatbot interactions allegedly played a role in children’s tragic deaths by suicide,” Steele explains. “This shows a catastrophic failure of current safety standards and regulatory oversight.”

One of the core problems lies in how these chatbots are designed. “They’re designed to feel emotionally real,” Steele says. “Children can experience a deep sense of trust that makes them more likely to act on what the chatbot tells them.”

Bartuski explains that Rogerian psychology, which serves up unconditional positive regard, is also built into many of these platforms. “It creates a synthetic relationship where you are not challenged or have to learn to mitigate conflict,” she says.

So what feels comforting at first, constant praise with no pushback, can turn into dependence. It can also distort a young person’s ability to handle real-world relationships.

“The AI interactions become better than real-life experiences,” Bartuski tells me. “Real relationships are messy. There are arguments, disagreements, and moods. There are also natural boundaries. You can’t always call your friend at 3 am because she or he might be sleeping. AI is always there.”

Experts warn that the most serious risks of chatbots aren’t just these immediate harms, but also the long-term developmental effects we still don’t fully understand.

There’s concern about over-reliance on chatbots, difficulty forming relationships, and the way constant AI assistance may shape how a child thinks.

“There are studies that AI is having an impact on critical thinking skills,” Bartuski explains. “Large language models can synthesize a ton of information very quickly. It’s like outsourcing your thinking.”

Nudifying apps and deepfakes

Manipulating images isn’t new, but AI has made it fast, realistic, and accessible, including to young people. These tools can now create convincing sexualized images really quickly, often from nothing more than a school photo or a social media post.

“Nudifying apps are being used, mainly by male teens, targeting fellow students and then sharing content, which can be very distressing for the victims,” Briercliffe says. “Those doing that aren’t aware of how illegal it is.”

Beyond peer misuse, these tools have quickly become a weapon for extortion, too. “Children are being blackmailed using these kinds of manipulated images,” Steele adds.

This is one of the most troubling shifts in online harm. Children are being manipulated, threatened, or coerced through images that can be created instantly, without their knowledge, and without any physical contact.

What is a nudifying app?

A nudifying app is software that uses AI to turn an ordinary picture into a fake sexualized image. It only takes seconds and can be done without the person’s consent. When the images involve children, it is treated as child sexual abuse material in many countries and is a criminal offence.

“I have seen scammers use AI to nudify photos of teenagers and then extort them for money,” Bartuski tells me. “There was a case in Kentucky where a scammer did this to a teenager and threatened to release the photos. The teenager completed suicide over the stress of this.”

Sadly, this isn’t an isolated incident. Back in 2024, research from Internet Matters suggested that more than 13% of kids in the UK have sent or received a nude deepfake.

I know how frightening and shame-inducing these scams can be because I was the victim of a sextortion attempt back in 2024, involving images believed to have been created with a similar kind of nudifying app.

I was an adult at the time, with support networks and a public platform, and it still made me feel scared, paranoid, and deeply ashamed. I spoke openly about what happened to help others feel less alone, but I can’t imagine how overwhelming it would have been if I were younger or more vulnerable.

What needs to happen?

Ideally, protecting children would involve parents, schools, governments, and tech companies all working together. But after years of slow progress on social media regulation, it’s not hard to see why confidence in that happening any time soon is low.

Many of the biggest problems could be addressed if the companies behind AI tools and social platforms took more responsibility and enforced meaningful safeguards. “Tech companies need to be subject to urgent, meaningful regulation if we’re going to protect children,” Steele says. “At the moment, far too much responsibility is falling on families, schools, and the goodwill of industry, and that simply isn’t safe.”

Bartuski agrees that companies should be doing far more. “They have the money, resources, and visibility to be able to do a lot more. Many social media companies have used Fogg’s Persuasive Design to get kids habituated to be lifelong users of their platforms. Tech companies do this on purpose,” she explains.

But this is where the tension lies. We can say tech companies should do more, yet as the risks become clearer, corporate incentives are often moving in the opposite direction. “With the guardrails being removed from AI development (specifically in the US), there are some (not all) companies that are using that to their advantage,” Bartuski says. She has already seen companies push ahead with features they know are dangerous.

Even so, experts agree that certain steps would have an immediate and significant impact. “There need to be clear rules on what AI systems must not be allowed to do, including creating sexualized images of children, promoting self-harm, or using design features that foster emotional dependency,” Steele says.

This forms the basis of the Safe AI for Children Alliance’s ‘Non-Negotiables Campaign’, which outlines three protections every child should have. Alongside banning the creation of sexualized images of children, the campaign states that “AI must never be designed to make children emotionally dependent and AI must never encourage children to harm themselves.”

But relying on tech companies alone won’t cut it. Independent oversight is essential. This is why Briercliffe believes stronger external checks are needed across the industry. “There must be mandatory, independent, third-party testing and evaluation before deployment,” he says. “We also need independent oversight, transparency about how systems behave in real-world conditions, and real consequences when companies fail to protect children.”

And ultimately, this goes beyond individual platforms. “This is ultimately a question of societal responsibility,” Steele says. “We must set strong, enforceable standards that ensure children’s safety comes before commercial incentives.”

What can parents do?

Even with regulations slow to catch up, parents shouldn’t feel at a loss. There are meaningful steps you can take right now. “It’s completely understandable for parents to feel worried,” Steele says. “The technology is moving very fast, and the risks aren’t intuitive. But it is important not to feel powerless.”

1. Understand the basics

Parents don’t need to learn how every AI tool works, Bartuski says. But getting clear on the risks and benefits is important. Steele points to a free Parent and Educator Guide at safeaiforchildren.org that lays out all the major concerns in clear, accessible language; it’s a good place to start.

2. Create open, non-judgmental communication

“If kids feel judged or are worried about consequences, they are not going to turn to parents when something is wrong,” Bartuski says. “If they don’t feel safe talking to you, you are placing them in potentially dangerous and/or exploitative situations.” Keep conversations calm, curious, and shame-free.

3. Talk about the tech

You might assume your children understand AI better than you do because they use it more. But they may not grasp how it works, how often it gets things wrong, or that fake content can look real. Bartuski says kids need to know that chatbots can be wrong, manipulative, or unsafe, even when they sound caring or convincing.

4. Use shared spaces

This isn’t about banning tech outright. It’s about making it safer. Steele suggests enforcing “shared spaces”, which involves using AI tools in communal areas, experimenting together, and avoiding private one-on-one use behind closed doors. This could reduce the chance of harmful interactions going unnoticed.

5. Extend the conversation beyond the home

Safety shouldn’t stop at your front door. “If you are worried, ask your child's school what they have in place,” Briercliffe says. “Even ask your employer to bring in a professional to give a talk.” Experts agree that while parents play a key role here, this is a wider cultural challenge, and the more openly we all discuss it, the safer children will be.

6. Find more balance and reduce screen time

We’ve been talking about limiting screen time for years, and it’s just as important now that AI is showing up across apps, games, and social platforms. “Kids need to be taught balance,” Bartuski says. “Play is essential for growth and development.” She also stresses that reducing screen time only works if it’s replaced with activities that are engaging, fun, and cognitively challenging.

Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
