'It bothers me that this could be deployed by employers': your boss could soon know you’re struggling before you do — inside the rise of AI mental health prediction tools
AI wants to predict your mental health at work, but experts have concerns
Ever since tools like ChatGPT and Claude went mainstream, there’s been a big debate about whether AI should be used for mental health support. Can a chatbot really replace a therapist? That’s a question I’ve asked many times before, and one that still doesn’t have a simple answer.
But AI tools may be able to do more than respond to distress — some may be able to anticipate it.
A new wave of tools — many aimed at workplaces — might be able to spot the early signs of depression, anxiety, or even suicide risk before someone is even aware of it. They're able to analyze patterns in behavior, language, voice and daily activity, looking for subtle signals that something may be wrong.
On paper, it’s an appealing idea. But the reality is much more complicated, and the questions go well beyond whether the technology actually works.
How can AI tools detect a mental health crisis?
It’s worth being clear upfront that these tools aren’t all the same. But many of them do rely on a similar set of ideas.
Most AI mental health tools collect data in two ways. The first is information that you actively provide — think mood check-ins, sleep logs, journal entries, or even conversations with a chatbot.
The second is everything else. Often referred to as passive sensing, this includes data gathered in the background, like how much you move, how often you message people, how you speak and how quickly you type. The data that’s collected will depend on what these tools can access, whether that’s information from your wearable, your computer, or apps you use.
The premise is really simple. Changes in behavior often appear before someone consciously recognizes that they’re struggling. An AI system, continuously scanning enough of these signals, may be able to detect those shifts early, flag an issue, and get you help more quickly.
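To make that premise concrete, here’s a deliberately simplified sketch in Python of what “detecting a shift from baseline” can look like. Everything in it is invented for illustration: the step counts, the two-week window, and the threshold. Real systems fuse many signals, and none of this reflects how any specific product works.

```python
# A minimal, hypothetical sketch of the passive-sensing premise described
# above: track a rolling baseline for one behavioral signal (made-up daily
# step counts) and flag days that drift well below that baseline.
# The window size and threshold are arbitrary illustrative choices,
# not values from any actual mental health tool.

from statistics import mean, stdev

def flag_anomalies(daily_values, window=14, z_threshold=2.0):
    """Return indices of days that fall more than `z_threshold` standard
    deviations below the rolling baseline of the previous `window` days."""
    flagged = []
    for i in range(window, len(daily_values)):
        baseline = daily_values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_values[i] - mu) / sigma < -z_threshold:
            flagged.append(i)
    return flagged

# Fabricated example: two weeks of typical activity, then a sharp drop,
# which is the kind of sustained dip these systems are built to notice.
steps = [8200, 7900, 8500, 8100, 7700, 8300, 8000,
         8400, 7800, 8200, 8600, 7900, 8100, 8300,
         3100, 2900]

print(flag_anomalies(steps))  # -> [14, 15]
```

A real deployment would be far messier than this: it would have to combine dozens of noisy signals, account for innocent explanations (a holiday, an injury, a broken wearable), and decide what to actually do with a flag. Those last steps are where most of the concerns below come in.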
On top of this data layer, many tools use AI chatbots trained on therapeutic approaches such as Cognitive Behavioural Therapy (CBT) to offer support in the moment. They might suggest coping strategies, help you reframe thoughts, or prompt reflection.
Some elements of this technology are already in use. For example, Meta has long used text and behavioral signals to identify users who may be at risk, while companies like Kintsugi focus on analyzing voice for signs of mental health conditions. Workplace platforms like Unmind have also explored similar approaches.
However, it’s difficult to map the full picture. Many of these capabilities are built into wider AI systems and aren’t always visible to users, so their use may be broader than what we publicly know.
When it comes to whether these tools actually work, the answer is: it depends.
There is some evidence that AI can detect patterns linked to mental health risks — particularly in areas like symptom monitoring and suicide risk screening. But the results are mixed, and performance varies widely depending on the population, the data being used and how the system is deployed.
In practice, most research suggests these tools work best as a supplement to clinicians, rather than a replacement for professional judgement. Reliable, real-world prediction remains much harder.
In short, much more research is needed before AI-driven mental health prediction can be considered robust or widely dependable.
"There are so many nuanced issues that this technology brings up," says psychologist and AI risk advisor Genevieve Bartuski of Unicorn Intelligence Tech Partners. "My fear is that it's hitting the market before they are fully addressed."
What are the concerns?
"When people know they are being watched, they tend to perform. It is an automatic response and often, people don't even realize they are doing it,” explains therapist Amy Sutton from Freedom Counselling.
This is known as the Hawthorne effect: the tendency to change behavior when you know you’re being observed. In the context of AI monitoring your mental health, that could mean people masking signs of distress, consciously or not.
On the flip side, if these tools are rolled out as part of workplace wellbeing programs and people don’t know they’re being monitored, that raises serious questions about consent.
It also raises a more fundamental question: whose interests are these systems really serving — the individual’s wellbeing, or the organization’s risk management?
“It bothers me that this could be deployed by employers,” Bartuski tells me. “This is information that employers do not need to have or to know. They do not need information about a person's mental health, especially when it can be used against the employee.”
Even when participation is presented as optional, consent can quickly become murky. “Does it put the employee at risk of being negatively impacted if they do not want to participate? If so, that isn't really consent. It's coercive consent,” she says.
Sutton adds that workplace monitoring could actually worsen the problem it’s trying to solve. “With mental health stigmas still rife, AI observation would likely lead to greater efforts to hide evidence of struggles. This could create a dangerous spiral, where the greater our efforts to hide low mood or anxiety, the worse it becomes.”
There’s also the risk of false positives, where someone is flagged as at risk when they’re not, and the consequences of that can be serious, particularly in systems that trigger an intervention.
Where does this leave us?
The pressure to develop these tools is real. The WHO estimates depression and anxiety cost the global economy $1 trillion a year in lost productivity. That's a number that makes early warning systems look attractive to a lot of employers.
But there’s a risk that prediction tools become a shortcut: an alternative to the slower, more expensive work of building environments where people feel able to say they’re struggling, investing in human support, and creating the conditions where someone notices when a colleague isn’t okay.
"We are being encouraged to give up a basic need of real human connection to be productive, and in turn productivity decreases due to the impact of loneliness and disconnection,” Sutton says.
It echoes a broader pattern I've noticed during my AI reporting over the past year. People often turn to AI for support when real-world networks fall short — sometimes with benefits, but often as a substitute rather than a solution.
AI systems that could genuinely flag a mental health crisis early — with meaningful consent and proper safeguards — might have a place. But without that, they risk doing the opposite of what they promise: making problems harder to see, and giving organizations a reason not to look.