Intel's parallel processing vision

TechRadar: The pace of hardware development for multi-core has far outstripped the pace of software development. What is Intel doing to get software companies to write applications and operating systems that live up to the potential of the hardware?

Andrew Chien: In the long run, people need to write code that is fully scalable, and frankly we need research breakthroughs to put the whole industry on that basis.

We’re calling upon governments, as well as other players in the industry, to invest in the five-to-10-year future of scaling on parallelism, because we need to move away from the traditional legacy of parallelism, which was just to get linear speed-up and to squeeze maximum efficiency out of every CPU and every clock.

For parallelism to be successful we ultimately need to move to a world in which people write code that is inherently parallel, rather than code that merely gains one more increment of performance for every core that is added.
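A standard way to see why this matters (our illustration, not a formula Chien cites) is Amdahl's law: if a fraction $p$ of a program can run in parallel across $N$ cores, the overall speed-up is

$$S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.$$

Even with $p = 0.95$, the speed-up can never exceed 20x no matter how many cores are added, which is why code has to be inherently parallel rather than incrementally tuned.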

Intel is doing everything it can to create urgency around this. I think one of the interesting challenges for parallelism, and for all application software, is simply where it comes from. Every time we've had one of these major changes in [processing] capability, the largest consumers of the [processing] cycles have often turned out to be new applications.

In this space we’ve been out making people aware of a new class of workload called RMS – Recognition, Mining and Synthesis. It’s all about data streams and analysing large quantities of noisy data: finding insights within [this data], and synthesising the whole graphical and 3D visual experience.

These applications have staggering amounts of parallelism, so there’s no question that if those kinds of capabilities become increasingly part of other applications, then that alone could saturate many of these parallel processors.
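To make the shape of such workloads concrete, here is a minimal sketch (our illustration, not Intel's code) of the independent map-reduce pattern that lets mining-style analysis of noisy data scale across however many cores are available:

```cpp
// A minimal sketch of why RMS-style workloads are inherently parallel:
// scoring a large stream of noisy samples is an independent map-reduce,
// so the work spreads naturally across all available cores.
#include <algorithm>
#include <execution>
#include <functional>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // Hypothetical "noisy data": a million random samples.
    std::vector<double> samples(1'000'000);
    std::mt19937 gen(42);
    std::normal_distribution<double> noise(0.0, 1.0);
    for (auto& s : samples) s = noise(gen);

    // Each sample is scored independently, then the scores are combined;
    // std::execution::par lets the runtime fan the work out over cores.
    double energy = std::transform_reduce(
        std::execution::par, samples.begin(), samples.end(),
        0.0, std::plus<>{},
        [](double s) { return s * s; });  // per-sample "recognition" score

    std::cout << "total energy: " << energy << '\n';
}
```

Because no sample depends on any other, adding cores adds throughput: exactly the kind of workload that could saturate the parallel processors Chien describes.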

This interview was conducted by PC Plus magazine editor Ian Robson.