‘We haven’t figured that out yet’: Sam Altman explains why using ChatGPT as your therapist is still a privacy nightmare
You should stick to trained humans

- OpenAI’s CEO says using ChatGPT for therapy has serious privacy risks
- Your private chats might be exposed if OpenAI were to face a lawsuit
- Feeding your private thoughts into an opaque AI is also a risky move
One of the upshots of having an artificial intelligence (AI) assistant like ChatGPT everywhere you go is that people start leaning on it for things it was never meant for. According to OpenAI CEO Sam Altman, that includes therapy and personal life advice – but it could lead to all manner of privacy problems in the future.
On a recent episode of the This Past Weekend w/ Theo Von podcast, Altman explained one major difference between speaking to a human therapist and using an AI for mental health support: “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
One potential outcome, Altman claimed, is that OpenAI could be legally required to hand over those conversations if it faced a lawsuit. Without the legal confidentiality you get when speaking to a doctor or a registered therapist, there would be relatively little to stop your private worries from being aired in public.
Altman added that many users, especially young people, rely on ChatGPT in this way, and they may be particularly vulnerable to that kind of exposure. But regardless of your age, these conversations are not the kind of content most people would be happy to see revealed to the wider world.
A risky endeavor
The risk of having your private conversations opened up to scrutiny is just one privacy risk facing ChatGPT users.
There is also the issue of feeding your deeply personal worries and concerns into an opaque system like ChatGPT, with the possibility that they could be used to train OpenAI's models and leak back out when other users ask similar questions.
That’s one reason why many companies have licensed their own ring-fenced versions of AI chatbots. Another alternative is an AI like Lumo, which is built by privacy stalwarts Proton and features top-level encryption to protect everything you write.
Of course, there’s also the question of whether an AI like ChatGPT can replace a therapist in the first place. While there might be some benefits to this, any AI is simply regurgitating the data it is trained on. None are capable of original thought, which limits the effectiveness of the advice they can give you.
Whether or not you choose to open up to OpenAI, it’s clear that there’s a privacy minefield surrounding AI chatbots, whether that means a lack of confidentiality or the danger of having your deepest thoughts used as training data for an inscrutable algorithm.
It will take a lot of work, and a lot more legal clarity, before enlisting an AI therapist becomes a significantly less risky endeavor.