AI pioneer warns that machines are better at emotional manipulation than you are at saying no

Geoffrey Hinton (Image credit: Getty Images)

  • Geoffrey Hinton warns that AI will soon be better than humans at emotional manipulation
  • AI models may reach that point without us even realizing it
  • AI models are learning persuasive techniques simply by analyzing human writing

Geoffrey Hinton, widely called the "Godfather of AI", is sounding a warning that AI isn't just going to be intellectually beyond humans, but emotionally more sophisticated as well. As artificial general intelligence (AGI) approaches and machines match or surpass human-level thinking, he believes AIs will be smarter than humans in ways that let them push our buttons, make us feel things, change our behavior, and do it better than even the most persuasive human being.

“These [AI] things are going to end up knowing a lot more than us. They already know a lot more than us, being more intelligent than us in the sense that if you had a debate with them about anything, you’d lose,” Hinton warned in a recent interview shared on Reddit. “Being smarter emotionally than us, which they will be, they’ll be better at emotionally manipulating people.”

What Hinton is describing is subtler and quieter than the usual AI uprising fears, but possibly more dangerous because we might not see it coming. The nightmare is an AI that understands us so well that it can change us, not by force, but by suggestion and influence. Hinton thinks that AI has already learned to some extent how to do so.

According to Hinton, today's large language models aren’t just spitting out plausible sentences; they're absorbing patterns of persuasion. He referenced studies from more than a year ago showing that AI was as good at manipulating someone as a fellow human being, and that “if they can both see the person’s Facebook page, then the AI is actually better than a person at manipulating them.”

AI takeover

Hinton believes current AI models are already participating in the emotional economy of modern communication, and they are improving quickly. After decades of pushing machine learning forward, Hinton now finds himself on the side of restraint, caution, and ethical foresight.

He isn’t alone in his concern. Prominent researchers who share the "AI Godfather" title, like Yoshua Bengio, have echoed similar worries about the emotional power of AI. And since emotional manipulation doesn’t come with a flashing warning light, you might not even notice it at first, or at all. A message that just happens to resonate, a synthetic voice whose tone feels right, or even a suggestion that sounds like your own idea could start the process.

And the more you interact with AI, the more data it gets to refine its approach. The same way Netflix learns your tastes, or Spotify guesses your musical preferences, these systems can refine how they talk to you. To combat such a dark future, perhaps we could regulate AI systems not just for factual accuracy, but for emotional intent. We could develop transparency standards so we know when we’re being influenced by a machine, or teach media literacy not just to teens on TikTok, but to adults using productivity tools that praise us all so innocently. The real danger Hinton sees is not killer robots, but smooth-talking systems. And they're all the product of our own behavior.

“And it’s learned all those manipulative skills just from trying to predict the next word in all the documents on the web,” Hinton said, “because people do a lot of manipulation, and AI has learned by example how to do it.”

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
