For a company that prides itself on showing the opposition how to do it, the past six months or so have been tough for Nvidia. Rival AMD was first to market last September with graphics chips supporting DirectX 11, Microsoft's latest multimedia API. Since then, Nvidia has been very much in second spot.

But that was then. Now Nvidia's fabled Fermi graphics chip has finally arrived. Rumours abound regarding its troubled gestation. Some blamed failures by Nvidia's production partner, the Taiwanese chip foundry TSMC; others pointed to deep flaws in Fermi's architecture. We'll probably never know the full details.

What we can say for sure is that Nvidia originally intended to release Fermi cards last year. The ensuing delay handed the advantage to AMD.

At launch, two graphics cards based on Fermi are available. The top board is the GeForce GTX 480, tested here in the shape of Asus' ENGTX480. The GTX 470, meanwhile, slots in below as a slightly cut-down second stringer.

With 480 stream shaders, the GTX 480 has twice the computational hardware of Nvidia's previous flagship GPU, the 240-shader GeForce GTX 285. The GTX 470, for its part, drops a further 32 shaders for a total of 448.

All hail the giant

If that sounds like a typically impressive increase from one generation to the next, it's not the whole story. The Fermi chip itself packs 512 shaders. Nvidia has fused off 32, no doubt in order to increase production yields on what is a tricky GPU to manufacture.

Weighing in at three billion transistors and spanning more than 500mm², it's easily the largest and most complex computer chip ever to make its way into the PC. Not only does it dwarf AMD's Radeon HD 5800 series, it makes even the latest six-core CPUs look positively puny.

Along with the 480 shaders, the GTX 480 packs 60 texture address units, 60 texture filters and no fewer than 48 render output units. The latter is 50 per cent more than AMD's Radeon HD 5870. It also boasts a 384-bit memory bus and 1.5GB of graphics memory, again both 50 per cent better than its key rival.

Put the two together and you have an architecture optimised for operating at ultra-high resolution and detail settings – just what PC gaming enthusiasts want to hear.

Tessellators for the win

Another advantage Nvidia claims over AMD's GPUs involves the hardware tessellation engine, introduced into the DirectX 11 API with the aim of improving geometric detail in games. Here Nvidia has taken a very different approach to AMD: instead of placing a single tessellation engine at the front end of the geometry pipeline, it has given Fermi chips a hefty 16 tessellators operating in parallel.

Admittedly, AMD and Nvidia's tessellators are not directly comparable, and even Nvidia probably wouldn't claim that the GTX 480 has 16 times the tessellation performance of a Radeon HD 5870.

However, early benchmarks suggest a clear lead in performance in this area for Fermi. The final key differentiator for Nvidia's new wonder chip concerns GPGPU (in other words, running general-purpose software rather than graphics engines on the GPU). More than any previous graphics chip, Fermi has been optimised for this concept. Time will tell how important this will prove.

For now, there are few desktop applications beyond graphics that are able to hook into the huge parallel performance of a modern graphics chip. Thanks to both the inclusion of the Compute Shader in DirectX 11 and the increasing maturity of its open-standard alternative, OpenCL, that's expected to change soon.

Then again, we've been expressing similar sentiments for the better part of two years. Put simply, the benefit of GPGPU is unproven. In the meantime, Nvidia has arguably the fastest graphics chip on the market in the GeForce GTX 480.

However, in our benchmarks (which included games such as Crysis, Just Cause 2 and Call of Duty: Modern Warfare 2), the gap between it and a standard Radeon HD 5870 is much too small and inconsistent to justify the 50 per cent price premium.

That said, Asus' ENGTX480 ups the GTX 480's ante by enabling the core voltage to be increased, opening the door to extreme overclocking. Whether that's actually a good idea for a chip that runs extremely hot even at stock voltages and clock speeds is another matter.
