‘I think we’ve achieved AGI’ — er Jensen… I don't think we have

Nvidia GTC 2025 Jensen Huang keynote
(Image credit: Nvidia)

Nvidia CEO Jensen Huang just said, “I think we’ve achieved AGI,” while on a podcast.

Of course, this has generated a lot of buzz, as, if he's correct, it would mark a major leap forward in AI capabilities.

Spoiler: we haven't made AGI.

Jensen Huang: NVIDIA - The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494 - YouTube

In response to Lex Fridman's question of how many years away that kind of capability is, Huang said, “I think it’s now. I think we’ve achieved AGI.”

He then went on to explain that an AI might not replicate Nvidia's lasting success, but it could plausibly make a viral paid app that costs 50 cents and sell it to a lot of people before going out of business.

He explained it could be “some social application that, you know, feeds your little Tamagotchi or something like that, and it becomes an out-of-the-blue instant success. A lot of people use it for a couple of months, and it kind of dies away.” Meanwhile, the odds of AI producing an Nvidia, even 100,000 agents doing so, are “zero.”

The thing is, this isn't AGI — even an AI that mimics Nvidia's success wouldn't be AGI. It would be impressive, sure, but artificial general intelligence is something much more special.

Demystifying AI

(Image credit: Future)

No, we haven’t achieved AGI

Artificial General Intelligence is the holy grail of AI development. It would be a digital form of human intelligence — that is, rather than an AI needing to be trained on each specific task it's asked to do (as existing, so-called narrow AIs are right now), the bot would be able to apply its existing knowledge to new situations, just like a human can.

AGI would combine self-learning, common sense, contextual understanding, and the ability to think abstractly at high speed into one system.

It would be a monumental leap forward for what AI is capable of, but as you might expect, it isn’t something researchers have been able to crack quickly, with some arguing that we might never achieve it.

Even if AGI is achievable, most researchers believe we aren’t anywhere close to it. The majority of the 475 AI researchers surveyed by the Association for the Advancement of Artificial Intelligence (76%) said that scaling up our current AI efforts is unlikely or very unlikely to result in AGI.

AGI isn’t just an upgraded LLM; it’s a whole different AI architecture, and it requires its own research and development. Imagine trying to build an incredible airplane by making better and better cars; that’s roughly what’s happening with LLMs and AGI.

At the same time, AGI isn’t a well-defined thing — partly because it’s hard to define something which doesn’t exist yet. There’s a difference between AGI and an AI that can simply do lots of different things, but where the line is drawn is difficult to determine.

Nvidia CEO Jensen Huang presenting Grace Blackwell at Computex 2025

AGI isn't DLSS 5 slop either (Image credit: Nvidia)

What further muddies the water is the financial incentive companies have to deliver AGI, or to at least promise it’s almost here.

For example, OpenAI’s deal with Microsoft gives it some incredible benefits if AGI is achieved.

AGI, and making it feel close, is also how you appeal to investors. AGI’s economic potential is huge, given how it could truly revolutionize every industry, and the promise that it’s just around the corner is what could convince investors to hold onto their stake in the AI company of their choice for longer, rather than selling out and risking the ultimate financial FOMO.

This is true for Nvidia too, which, as the picks-and-shovels seller in this AI gold rush, wants to keep the hype up. If hype falls, demand for its chips would drop too, and that would seriously affect Nvidia’s bottom line.

At the same time, many have noted that AGI isn’t the be-all and end-all. Just because an AI isn’t a jack-of-all-trades doesn’t mean it can’t be a master of one, and, just as with humans, something that hyperspecializes in a key area can be more useful than something that’s merely okay at lots of tasks.

AI quantization

AGI is still some time away, if it ever happens (Image credit: Future/Flux)

I don’t care if my surgeon is a decent horticulturist, could teach me to be a confident skier, and moonlights as a vintage car restorer — I simply want them to be a leading expert in human biology as they cut me open.

As AI stretches into medicine, law, manufacturing, and beyond, a series of individual experts is more than fine; it’s arguably ideal, even if it isn’t as flashy as AGI.

So no, AGI isn’t here yet, but AI disruption is, and it will only creep further into our lives. Today’s as bad as AI will ever be. It will only get better, and it’s only a matter of time before AI morphs from being an assistive tool to seriously eating up whole jobs — with or without AGI.


Hamish Hector
Senior Staff Writer, News

Hamish is a Senior Staff Writer for TechRadar and you’ll see his name appearing on articles across nearly every topic on the site, from smart home deals to speaker reviews to graphics card news and everything in between. He uses his broad range of knowledge to help explain the latest gadgets and whether they’re a must-buy or a fad fueled by hype. His specialty, though, is writing about everything going on in the world of virtual reality and augmented reality.
