Nvidia: Current CPU-GPU balance in PCs is 'obscene'

[Image: Nvidia logo. Nvidia says the CPU and GPU are unbalanced in many systems]

Just forked out for a new rig with a fast processor on board? Then Nvidia has some very bad news for you. Your PC is "obscenely" imbalanced thanks to an overpriced, underperforming CPU - probably courtesy of Intel.

It's just the latest salvo in the burgeoning war of words between Nvidia and Intel this year. But what exactly is Nvidia getting at? Talking to TechRadar earlier this week, Nvidia's VP of Content Relations Roy Taylor outlined a developing strategy for leveraging Nvidia graphics technology to accelerate a wide range of PC applications. Very soon, the world will discover just how pathetic conventional CPUs really are.

If Taylor is correct, the initiative will deliver a massive, unprecedented boost in PC performance. We're not talking about the 2x or 3x improvements the PC industry delivers on a regular basis. It could mean as much as 20x or even 100x the performance of today's multi-core CPUs. Yikes.

CUDA cometh

The basic premise is the use of Nvidia's CUDA programming platform (itself closely related to the C programming language) to unlock the increasingly programmable architecture of the latest graphics chips.
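To give a flavour of what that means in practice, here's a minimal sketch of the kind of C-style kernel CUDA developers write (the function and variable names are our own illustration, not code from any shipping application). One copy of the function runs on each of thousands of GPU threads, with every thread handling a single array element:

// Illustrative CUDA kernel: every thread scales one array element,
// and thousands of threads execute concurrently on the graphics chip.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index per thread
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                   // one million floats
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));  // buffer in GPU memory

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();                 // wait for the GPU to finish

    cudaFree(d_data);
    return 0;
}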

On paper, it's extremely plausible. In terms of raw parallel compute power, 3D chips put CPUs to shame. A good recent example is the new room-sized, high-density computing cluster installed by Reading University.

Designed to tackle the impossibly complex task of climate modelling, it weighs in at no less than 20 TFLOPS. That sounds impressive until you realise that a single example of Nvidia's next big GPU, due this summer, could deliver as much as 1 TFLOP. Do the maths: five nodes packing four such GPUs apiece would match the cluster's 20 TFLOPS. So a few four-way Nvidia GPU nodes will soon offer the same raw compute power as a supercomputer built from scores of CPU-based racks.

General purpose GPU

A little bit closer to home, one of the early applications Nvidia is promoting as a demonstration of the general-purpose prowess of its GPUs is a video encoder known as Elemental HD.

Downsizing a typical HD movie for an iPod can take eight hours or more on a conventional PC processor, even a decent dual-core Intel chip. Nvidia says the same job can be done in just over 20 minutes on an 8800 series Nvidia graphics board - roughly a 24x speedup.
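That kind of speedup is plausible because the heavy lifting in video work is per-pixel arithmetic, and every pixel can be processed independently. As a toy illustration of the principle (our own sketch, not Elemental's actual code), a CUDA kernel can convert all the pixels in a frame to greyscale at once:

// Illustrative sketch: one GPU thread per pixel converts
// packed RGB data to luma using the BT.601 weights.
__global__ void rgb_to_luma(const unsigned char *rgb,
                            unsigned char *luma, int npixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < npixels) {
        float y = 0.299f * rgb[3 * i]
                + 0.587f * rgb[3 * i + 1]
                + 0.114f * rgb[3 * i + 2];
        luma[i] = (unsigned char)y;
    }
}

// A 1920x1080 frame has over two million pixels, so a launch like
//   rgb_to_luma<<<(npixels + 255) / 256, 256>>>(d_rgb, d_luma, npixels);
// puts thousands of threads to work on it simultaneously.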

"When you look at the question of whether you should transcode video on a GPU or CPU, when you consider it in performance-per-buck terms, it's currently obscenely the wrong way round," Taylor says.

And the solution is simple enough. Don't spend any more money overall. Just spend a little less on your Intel CPU and a little more on your Nvidia GPU.

Hardware PhysX

What's more, Taylor says plans to support the recently acquired PhysX physics-simulation engine on Nvidia's GPUs are also nearing launch. Before the end of May, a total of eight games with GPU-based PhysX are due to be announced, with 30 to 40 such titles expected by this time next year.

So, that's it, then - game over for the CPU and Intel alike? Not so fast. For starters, there's a good reason why CPUs don't deliver the raw compute power of contemporary GPUs.

CPU cores are big, complex beasts, designed to turn their hands to almost any task and make a decent fist of it while not excelling in any one area. GPUs, even the most recent and programmable examples, are still a lot less flexible. When they're good, they're great. When they're not, well, they simply won't do the job at all.
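The difference is easy to see in code. A loop whose iterations are independent maps neatly onto one GPU thread per element; a loop in which each step needs the previous step's result does not. A simplified sketch of our own:

// GPU-friendly: every iteration is independent,
// so each element can get its own thread.
__global__ void add_vectors(const float *a, const float *b,
                            float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

// GPU-hostile: each step depends on the one before it, so the work
// cannot simply be split across thousands of threads. (Plain C++,
// runs on the CPU.)
void running_total(const float *in, float *out, int n)
{
    if (n <= 0) return;
    out[0] = in[0];
    for (int i = 1; i < n; ++i)
        out[i] = out[i - 1] + in[i];   // serial dependency
}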

"At the moment general-purpose GPU applications are admittedly very high end. But increasingly people are asking why are scientific research industries including medicine and climate modelling are using GPUs," Taylor says.

The answer is the unbeatable bang-for-buck performance that GPUs deliver. Taylor reckons Nvidia has a large number of partners with consumer-level applications lining up to tap into its GPU technology. Several are due to be revealed later this summer.

Waiting game

Until then, however, it's impossible to say whether the benefits will be as spectacular as Nvidia claims. Likewise, we'll have to wait and see just how smoothly it all works. The only non-3D consumer application for GPUs that has been widely tested on the market so far is video decode assist, and that has been a distinctly hit-and-miss affair.

But even if Nvidia can deliver reliable, transparent hardware acceleration for a wide range of applications with its GPUs, it will still have a huge fight on its hands from Intel.

Intel's intriguing new GPU, known as Larrabee, is due out in late 2009 or early 2010. Apart from the fact that it will be based on an array of cut-down x86 processor cores, little is known about its detailed architecture. But as Intel's first serious effort to compete in the GPU market, it has the makings of a game-changing product.

For Taylor, of course, the Larrabee project merely confirms that the GPU is where the action is. "Why does Larrabee exist? Why is Intel coming for us? They're coming for us because they can see the performance advantage of our GPUs," Taylor says.

He's probably right. It will be a fascinating contest.
