OpenAI's CEO says he's scared of GPT-5

(Image: Sam Altman and ChatGPT logo. Credit: Shutterstock/DIA TV)

  • OpenAI CEO Sam Altman said testing GPT-5 left him scared in a recent interview
  • He compared GPT-5 to the Manhattan Project
  • He warned that the rapid advancement of AI is happening without sufficient oversight

OpenAI chief Sam Altman has painted a portrait of GPT‑5 that reads more like a thriller than a product launch. In a recent episode of the This Past Weekend with Theo Von podcast, he described the experience of testing the model in breathless tones that invite more skepticism than the alarm he seemed to want listeners to feel.

Altman said that GPT-5 “feels very fast,” while recounting moments when testing it left him nervous. Despite being the driving force behind GPT-5's development, Altman claimed that during some sessions he looked at the model and wondered what they had done, comparing the moment to the Manhattan Project.

Altman also issued a blistering indictment of current AI governance, suggesting “there are no adults in the room” and that oversight structures have lagged behind AI development. It's an odd way to sell a product promising serious leaps in artificial general intelligence. Raising the potential risks is one thing, but acting like he has no control over how GPT-5 performs feels somewhat disingenuous.


Analysis: Existential GPT-5 fears

What spooked Altman isn’t entirely clear, either, since he didn’t go into technical specifics. Invoking the Manhattan Project is another over-the-top analogy: it signals irreversible, potentially catastrophic change with global stakes, which is an odd comparison for a sophisticated auto-complete. And saying OpenAI built something it doesn’t fully understand makes the company seem either reckless or incompetent.

GPT-5 is supposed to come out soon, and there are hints that it will expand far beyond GPT-4’s abilities. The "digital mind" described in Altman’s comments could indeed represent a shift in how the people building AI consider their work, but this kind of messianic or apocalyptic projection seems silly. Public discourse around AI has mostly toggled between breathless optimism and existential dread, but something in the middle seems more appropriate.

This isn't the first time Altman has publicly acknowledged his discomfort with the AI arms race. He’s been on record saying that AI could “go quite wrong,” and that OpenAI must act responsibly while still shipping useful products. But while GPT-5 will almost certainly arrive with better tools, friendlier interfaces, and a slightly snappier logo, the core question it raises is about power.

The next generation of AI, if it’s faster, smarter, and more intuitive, will be handed even more responsibility. Taken at face value, Altman's comments suggest that would be a bad idea. And even if he's exaggerating, I don't know if a company whose CEO talks this way should be the one deciding how that power is deployed.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
