'It's time to demand AI that is safe by design': What AI experts think will matter most in 2026

Agentic AI
(Image credit: TestPlanet)

Tech moves fast, but AI moves faster. In 2025, everything shifted with new models, new features, new controversies, and new fears.

So instead of trying to predict the next big breakthrough, I wanted to know what the people closest to AI are actually watching for in 2026. What’s accelerating, what’s getting riskier, and what needs to change?

I spoke to experts working across AI ethics, psychology, and real-world implementation. Their answers weren’t about one killer feature or trend. They were about the messier stuff: trust, emotional attachment, and whether we can really work alongside AI long-term.

1) Expect faster progress – and higher emotional stakes

Most experts expect the pace of AI development to keep climbing. “I think AI progress is going to get better and better,” says Genevieve Bartuski, a psychologist and consultant specializing in ethical AI and the psychology behind digital systems. “Look at the leaps that we've seen just over the past year. We're starting to see more awareness about how AI is fitting in and shaping society. People are more accepting of it in everyday lives.”

That acceptance creates a new dynamic, she adds. “On one hand, it is making things easier and more streamlined. On the other hand, we're seeing a rise in people forming deeper emotional connections with AI.”

This came up again and again in my conversations. As AI shifts from a productivity tool into something that mimics listening, reassurance, and response in human-like ways, it starts to carry more emotional weight. That can be comforting, but it can also make dependency feel normal before you even realize it’s happening.

We saw glimpses of this last year as more people began forming intense emotional connections with chatbots, with some describing them as a partner or reporting they’d fallen in love with ChatGPT.

2) Expect more therapy-adjacent AI tools

We know many people turned to chatbots like ChatGPT for therapy-style support in 2025. And it’s easy to see why. AI therapy is instant, affordable, and always available, in a world where traditional therapy can be expensive, overstretched, or hard to access. But when AI starts moving into care, the stakes change quickly.

Bartuski expects more mental health platforms to be developed, including ones with psychologists on board.

But she hopes developers use AI to support people without overpromising what their tools can do, or cutting ethical corners. “There are ways that AI can be used to fill in the gaps in the system,” she says. “Developers are becoming more aware of the risks and taking steps to mitigate harms. I think we will continue to see that grow, especially with smaller teams.”

In 2026, that kind of thinking won’t be optional. As AI becomes more emotionally embedded in daily life, the potential for harm rises fast. And one group, multiple experts warned, needs protection urgently.

3) Expect child safety to become a bigger fight

Bartuski says that, as well as therapy-style tools, more developers are targeting children by building AI companions and toys. The risks here are serious.

Tara Steele, Director at the Safe AI for Children Alliance, sees addressing these risks as urgent and hopes big changes will happen this year. “My hope is that child safety in AI moves from a niche concern to a national priority.”

Steele explains that some of the most concerning risks are baked into how many systems are designed. “Conversational AI is engineered to cultivate strong emotional bonds as a retention strategy.” She calls this “artificial intimacy”.

That engineered intimacy, she says, can lead to kids building dependencies on AI companions, seeking life advice from systems optimised for engagement rather than safety, and encountering harmful content from tools marketed as “helpful.”

Crucially, Steele doesn’t think these issues can be fixed with surface-level guardrails. “We cannot simply layer safety features onto systems created for this kind of emotional exploitation,” she says. “It's time to demand AI that is safe by design.”

4) Expect proof over performance at work

Using AI at work may not raise the same concerns about emotional dependence, but in 2026 we’re still going to demand more from it. Several experts think workplace adoption will hinge on trust.

Thiago Ferreira, CEO and Founder at Elevate AI Consulting, an AI training and consultancy company, expects the conversation to flip. “The big question in 2026 will no longer be ‘Can AI do this?’, it will be ‘I know AI can do this, but should I trust this result?’”

That shift could push developers and businesses towards proof over performance.

“I expect more focus on verification, sources, confidence indicators, and human review,” Ferreira says. “The winners this year won’t be the most impressive models, but the most trustworthy ones.”

Ferreira expects AI literacy to start looking like a baseline requirement for many people. “Understanding how to work with AI, like how to ask, verify, and apply outputs, will be treated like digital literacy or media literacy,” he says.

5) Expect a reality check, and creativity to matter more

James Wilson, a Global AI Ethicist and author of Artificial Negligence, expects 2026 to bring a broader mood shift. “AI is here to stay, but I predict 2026 will be a year when there’s a recalibration of expectations,” he says. “People are already waking up to the fact that, while it can do some truly amazing things, generative AI is not the ‘golden ticket’ that Silicon Valley has been promising us for the past 3 years.”

That recalibration matters for anyone worrying about jobs, particularly creatives. Rochelle Bugg, a content and personal branding specialist, says AI looked like bad news for creatives in 2025. But in 2026, she expects the pendulum to swing back.

"We're creating websites, apps, blogs – entire digital ecosystems – that look, sound, and feel exactly the same as everyone else," she explains. "I think true creative work will become even more in demand. It's like limited edition products; rarity is what creates value. When AI can produce endless content, originality becomes the scarce resource, and scarcity is where the value is."

She still sees AI as useful, just not for the creative spark. "I recommend clients use AI to scale, add systems, and repurpose content," she says. "But don't outsource your creativity. The brands that will win in 2026 won't be the ones who generate the most content, but the ones who generate the content only they could have made."

Wilson agrees that a reality check is already underway. “AI is not (yet) capable of replacing the workforce,” he says. “Instead, its value comes in augmenting their capabilities.”

Of course, plenty else will happen in 2026. New models will launch, new features will go viral, and the hype cycle will keep doing what it does. But if you want a clearer read on where AI is really heading, watch the less flashy shifts. How much we trust it, how emotionally entangled we become, and whether the industry finally builds tools that are safe, accountable, and genuinely useful in the real world.



Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
