Complete guide to DirectX 11

ATI Radeon HD 5800: Microsoft has been working closely with both ATI and Nvidia in the development of DirectX 11

Have you read our coverage of AMD's new Radeon HD 5870 graphics card? If you have, you'll already be marvelling at the sheer, giddy silliness of the frame rates it's capable of, not to mention basking in the glory of nearly three teraflops of raw processing power.

It's simply the sexiest slice of silicon since a sliver of Megatron's mainboard accidentally slipped down Megan Fox's frilly bits. Of course, the 5870 is also the very first graphics card to support DirectX 11.

For the uninitiated, that's the latest in a long line of multimedia APIs from Microsoft and apparently it's going to rock more than just your PC gaming world. At least that's what both AMD and Microsoft would have you believe.

But perhaps you've heard it all before. While similar claims were made for DirectX 10, that API pointedly failed to either ignite the imagination of gamers or generate enthusiastic support from game developers. Believe us, therefore, when we say we understand your skepticism.

Ultimately, only time will tell whether DirectX 11 really turns out to be a game changer.

There are many good reasons, however, to expect that it will prove the biggest step forward for entertainment on the PC at least since the concept of programmability appeared in DX8, and quite possibly ever. Certainly, some aspects of DX11 have far wider-reaching implications than any previous DirectX API.

There's much more to DX11 than the usual tweaks to shader definitions or a spot of funky lighting technology.

Major new rendering tech

Not only does it debut a major new rendering technology in tessellation, it also aims to bring general purpose processing on the GPU into the mainstream, all the while delivering a more widely compatible DirectX API than ever before. It's ambitious stuff that's been a long time coming.

But with the arrival of Windows 7 and the Radeon HD 5800 series, along with the promise of a rapid uptake by game developers, there's plenty to get excited about.

Before we get down to business with details of DirectX 11, let's quickly remind ourselves of what exactly DirectX is and does.

In simple terms, it's a software layer known specifically in codemonkey jargon as an application programming interface, or API for short. Its job is essentially twofold.

Firstly, it makes it easier for application developers to access the multimedia capabilities of various PC components, including sound and graphics hardware. These days it's most commonly associated with the latter, but graphics is actually handled by just one subset of DirectX, known as Direct3D.

Secondly, along with providing that framework for software developers, DirectX sets the parameters for multimedia tasks on the PC in the form of hardware definitions for the various components, again most notably graphics and sound chips. Consequently, Microsoft works closely with the likes of Nvidia and AMD when developing each new version of DirectX.

Over time, the PC's remit has expanded to the point where today it has become arguably the most flexible and adaptable machine on the planet, equally adept at hardcore scientific number crunching as it is at keeping frag-happy teenagers entertained. Inevitably, as this remit has broadened, so has the scope of DirectX.

With the arrival of version 11 comes perhaps the most significant expansion yet for DirectX and the first of three key developments delivered by DX11. Known as Direct Compute, it effectively opens out DirectX, or perhaps more accurately the Direct3D pipeline, to almost any kind of computational task. The only really significant requirement is that the task lends itself to parallel processing.

The target component, of course, is the GPU, by far the most parallelised chip inside the PC and potentially the most powerful, if only its resources could be harnessed for general computing.
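
To give a flavour of what that looks like from a programmer's perspective, here's a minimal, illustrative C++ sketch of driving a compute shader through the Direct3D 11 API. The shader source, entry point name and thread counts are our own placeholder choices, not anything mandated by DX11, and error handling plus the buffer and view setup a real application needs are left out for brevity.

```cpp
// Illustrative sketch: create a D3D11 device, compile a trivial
// compute shader and dispatch it on the GPU.
#include <cstring>
#include <d3d11.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d3dcompiler.lib")

// Placeholder HLSL: 64 threads per group, each writes its global index.
static const char* kShaderSource = R"(
RWStructuredBuffer<uint> Output : register(u0);

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    Output[id.x] = id.x;
}
)";

int main()
{
    // Ask for a hardware device at feature level 11_0: any DX11-class
    // GPU will do, regardless of who makes it.
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level = D3D_FEATURE_LEVEL_11_0;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      &level, 1, D3D11_SDK_VERSION,
                      &device, nullptr, &context);

    // Compile the compute shader against the DX11 cs_5_0 profile.
    ID3DBlob* bytecode = nullptr;
    D3DCompile(kShaderSource, std::strlen(kShaderSource), nullptr,
               nullptr, nullptr, "CSMain", "cs_5_0", 0, 0,
               &bytecode, nullptr);

    ID3D11ComputeShader* cs = nullptr;
    device->CreateComputeShader(bytecode->GetBufferPointer(),
                                bytecode->GetBufferSize(), nullptr, &cs);

    // Bind the shader and launch 16 thread groups of 64 threads each.
    // A real app would first bind an unordered access view for Output.
    context->CSSetShader(cs, nullptr, 0);
    context->Dispatch(16, 1, 1);
    return 0;
}
```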

At this point, you may sense a whiff of déjà vu in the air. Isn't the idea of general purpose computing on the GPU already well established under the GPGPU banner?

There are indeed several ongoing GPGPU initiatives that predate the arrival of Direct Compute and DX11, the highest profile of which is Nvidia's CUDA platform. But the key difference with Direct Compute is that it sets out common standards to which both application and hardware developers must adhere.

In other words, where CUDA requires the presence of Nvidia graphics cards, Direct Compute will guarantee compatibility whatever the branding of your graphics card.
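
The practical upshot is that software needn't care which badge is on the card at all: it simply asks the DirectX 11 runtime whether the GPU that's actually installed can run compute shaders, including the cut-down 4.x compute profile offered on some older DX10-class chips. A minimal sketch of that check, assuming a device created as in the earlier snippet (the function name is our own):

```cpp
// Illustrative check: does the installed GPU, whoever makes it, support
// Direct Compute? Feature level 11_0 hardware always does; on 10.x
// hardware we ask whether the optional downlevel compute profile exists.
bool SupportsDirectCompute(ID3D11Device* device)
{
    if (device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_11_0)
        return true;

    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                &opts, sizeof(opts));
    return opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x != 0;
}
```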

Definite plus

However, Direct Compute does more than just guarantee compatibility. It also provides hardware definitions that ensure graphics chips are actually up to the job of general purpose computation.

Admittedly, DirectX 10-class hardware also supports an early version of the Direct Compute standard. But it was, frankly, an afterthought, one that reflected the abilities of graphics-centric hardware rather than any attempt to extend that hardware to better support general purpose processing.

To take just one example, DX11 chips must provide 32KB of shared memory for each group of general purpose threads to work with, where DX10-class compute limited each thread to writing a pitiful 256 bytes of shared data. Hence, you could say that all previous implementations of GPGPU have really been the side effect of efforts to create more programmable graphics rendering pipelines.
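
By way of illustration, here's what that shared store looks like from the shader's side. This is our own toy example rather than anything lifted from the DX11 documentation, written as a C++ string constant ready to be compiled with D3DCompile as in the earlier sketch; the 8,192-float array works out at exactly the 32KB DX11 guarantees per thread group.

```cpp
// Illustrative cs_5_0 compute shader source. Names, sizes and thread
// counts are placeholders of our own choosing.
static const char* kSharedMemShader = R"(
groupshared float Scratch[8192];            // 8192 * 4 bytes = 32KB

RWStructuredBuffer<float> Result : register(u0);

[numthreads(256, 1, 1)]
void CSMain(uint3 tid : SV_GroupThreadID, uint3 gid : SV_GroupID)
{
    // Every thread in the group can read and write the shared block,
    // something DX10-class compute only allowed in a very limited way.
    Scratch[tid.x] = (float)tid.x;
    GroupMemoryBarrierWithGroupSync();       // wait for the whole group
    Result[gid.x * 256 + tid.x] = Scratch[255 - tid.x];
}
)";
```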

With Direct Compute 11, graphics vendors are required from the get-go to architect their graphics chips with general purpose computing in mind. Suffice to say here that the impact of a truly general purpose GPU on the likes of physics and AI simulation or media encoding will be massive.

For such highly parallel tasks, graphics chips could well turn out to be 10, 20, perhaps even 30 times faster than even the most powerful CPUs. And it will be DirectX's Compute Shader that enables all that parallelised goodness.
