Add an extra graphics card. Enjoy improved gaming performance. As concepts go, multi-GPU graphics is a bit of a no-brainer.

But is it the next logical step for 3D rendering, the graphics equivalent of multi-core CPUs? Or is it a gimmick that does little more than prove some people are gullible enough to buy into anything?

Back at the original launch of Nvidia's SLI platform in 2004, it was hard to believe SLI was real. Surely a solution so complex and expensive could never become mainstream? Even the fact that Nvidia's main rival – back then that was ATI, before it was swallowed whole by AMD – quickly followed suit with a copycat technology known as CrossFire wasn't enough to prove the idea had mainstream merit.

At the time, ATI suits admitted off the record they weren't convinced there was a real market for multi-GPU. Then something strange began to happen. Although the number of actual multi-GPU systems remained minuscule, PC enthusiasts began to buy SLI-capable mobos in their droves. Even if they didn't run two graphics cards in parallel, they did want to give themselves that option.

More than anything else, the idea of adding another, cheaper copy of your current graphics card when it begins to run out of puff is extremely seductive. Ironically, however, the ultimate proof that multi-GPU is here to stay comes from AMD, not Nvidia.

AMD has given up engineering really massive GPUs and has instead decided to use multi-GPU technology as the basis for all its future flagship graphics boards. But that doesn't make multi-GPU the best hammer for cracking every graphics nut.

Does a pair of mid-range boards, for instance, really deliver better performance than a single high-end card? What about the law of diminishing returns as you go beyond two GPUs? Moreover, have we reached the stage where either or both of CrossFire and SLI have become truly reliable?

For the answer to these questions and much, much more, you know what to do.