Sam Altman is back in the driver's seat at OpenAI – next stop Judgement Day?

Sam Altman is back at OpenAI, and we may never know what spooked his own board so much that it made the rash and stunning decision to oust him from the non-profit AI company he helped build.

The six-day saga that saw his ouster, his hiring at Microsoft (along with company president Greg Brockman and, potentially, everyone else at OpenAI who threatened to quit over the board's move), intense negotiations over a possible return, and then a triumphant, late-night reinstatement that left things almost back where they started was all but unprecedented in the tech news cycle.

Typically, there are rumors of turmoil or signs that a company is in trouble before someone eventually gets the boot (some might consider X, or Twitter, and Elon Musk to be in this cycle right now). All of the machinations can take weeks, if not months. What happened here is more in line with the cultural news cycle, where we fall in love with some pop icon only to find out 24 hours later that they're horrible. From hero to canceled zero in 48 hours.

What spooked the board?

Things moved so quickly that OpenAI's blog post announcing Altman's removal is still up as I write this, and there's been no official announcement from OpenAI about his reinstatement. I agree with others who've suggested that Altman download and frame that post.

On the business side, there have been big boardroom changes designed to prevent this from happening again, but I want to focus on what it means for Altman to be back, and more firmly in control of OpenAI than ever, because maybe it isn't such a good thing.

Look, I'm not saying I agree with what the OpenAI board did, but this paragraph from the initial announcement has stuck with me:

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

We never did get any clarity on how Altman was "not consistently candid" with the board. Even when interim CEO Emmett Shear (who is, I'm guessing, now back on vacation) asked the board, he got no answer. That should not be construed as a lack of reason; in fact, I'm convinced the board had its reasons.

Most people I talk to think Altman scared the board. OpenAI was created, after all, to "advance digital intelligence in the way that is most likely to benefit humanity as a whole." Implicit in that mission is safety; you can't benefit humanity if your technology is sparking the singularity and moving us closer to Skynet.

I no longer think it's a coincidence that Altman's removal came just a couple of weeks after his OpenAI DevDay keynote. That was the moment when, alongside announcing a faster and more capable GPT-4 Turbo, Altman introduced the idea of AI agents.

The concept kicked off with a baby step in the form of GPTs: custom chatbots built on ChatGPT that, when fed your own data, do exactly what you want rather than responding in the more generalized fashion we've come to expect from ChatGPT.

I saw this update, which would include a sort of AI App store, as a natural evolution in AI development, but I realize now that I did not pay enough attention to Altman's preamble. To introduce GPTs, Altman explained:

"We know that people want AI to be smarter, more person, more customizable, and do more on your behalf. Eventually, you'll just ask the computer for what you need and it will do all these tasks for you. These capabilities are often talked about in the AI field as 'agents'"

If you look up AI agents, they're described as software systems that can "act in an intelligent manner" and, more distressingly, that can perceive their environment and take action autonomously to achieve a goal.

Which, of course, sounds a lot like what we squishy-brain humans do.

Put another way, Altman was describing the kernel of Artificial General Intelligence (AGI), the concept of AI that can think and act more like humans.

Altman quickly added that OpenAI believes "gradual iterative deployment is the best way to address safety issues," and GPTs are surely just a baby step in the AI agent revolution.

However, if the last 12 months have taught us anything, it's that AI development moves at an exponential pace, accelerating every quarter. Developments we expected to take years are arriving every three months.

Some are now predicting that we'll arrive at AGI sometime next year, way ahead of schedule.

This, I think, is what broke the OpenAI board. It panicked when it realized that, no matter the safeguards, this AI development train could not be slowed. So it pulled the conductor out of the coal car and hoped that OpenAI could slow down, or even roll to a temporary stop.

That chance is, of course, now gone because, as Microsoft CEO Satya Nadella rightly concluded, the OpenAI board practiced bad governance. It didn't communicate its concerns to Altman, and it certainly didn't communicate them well to us. That left the board looking ridiculous and out of control, and it quickly lost the support of anyone with a rational (not AI) brain.

Whatever the OpenAI board was trying to do or stop, that's over now. Altman is in control (along with OpenAI's partner, Microsoft), and the AI agents are coming. How quickly AGI and the singularity follow is anyone's guess. Just remember, some of you asked for this.

