Multi-core processors: hype or the real deal?

AMD's Chuck Moore, Senior Fellow Design Engineer for Accelerated Computing, was even more realistic. "An important aspect is something called Amdahl's Law. While not widely known outside the silicon design engineering and software development communities, it's highly relevant to the parallel processing and programming issue." Amdahl's Law is a formula for working out the maximum speed-up a parallel system can achieve when only part of a program's workload can be parallelised.

If all of the workload can be parallelised, the speed-up is equal to the number of processors. However, if even a small percentage of the program can't be run in parallel, the level of improvement drops dramatically. For example, if five per cent of a program can't be parallelised, the performance gain is limited to 20 times – no matter how many cores are used.
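To see where that 20-times ceiling comes from, here's a minimal sketch of Amdahl's formula in Python; the function name and the core counts are my own, chosen purely for illustration:

```python
def amdahl_speedup(cores, serial_fraction):
    """Amdahl's Law: overall speed-up on `cores` processors when
    `serial_fraction` of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With five per cent of the program stuck running serially...
for cores in (2, 4, 16, 256, 4096):
    print(f"{cores:>5} cores -> {amdahl_speedup(cores, 0.05):.1f}x")

# ...the speed-up creeps towards, but never reaches, 1 / 0.05 = 20x.
```

However many cores you plug in, the serial five per cent dominates and the result only edges closer to 20.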

However, these restrictions apply only to homogeneous multi-core designs, an approach that could be superseded in the near future. "AMD is now focusing heavily on bringing heterogeneous multi-core solutions to market, as part of our Accelerated Computing initiative," Moore told me. "The title Accelerated Computing comes from the idea that in the future, AMD and the industry plan to increasingly produce microprocessor designs that combine varying mixtures of scalar processing cores, parallel processing cores, and fixed-function accelerators on-chip.

"AMD calls this new category of processor an Accelerated Processing Unit (APU). By creating the optimum mix of these three types of blocks, an APU can be more highly tailored to accelerate the software that matters most to a particular end-user. AMD's first APU will be 'Swift', which is targeted at the notebook space. This APU combines x86 scalar processing cores, a parallel processing core (based on ATi GPU technology) and a universal video decoder (again based on ATi technology) on-chip."

Multi-threading software

Certain types of software are more likely to benefit from the multi-core approach than others. So which applications will benefit most from the new approach, and which existing software can take advantage of an increase in the number of cores?

I asked Mike Taulty, Developer Evangelist at Microsoft UK, what types of application are multi-threaded today. His answer was somewhat unexpected. "On a Windows machine, it would be easier to list applications that are not multi-threaded. Most applications are multi-threaded," he told me.

An application like Word, for example, spends most of its time waiting for a keypress, so there would seem to be no great imperative to use multi-threading. I put this point to Taulty and asked whether, in this case, multi-threading is used as a programming convenience rather than for performance gains.

"Yes," he answered. "Client applications often push work to separate threads to partition the programming of that work. However, client applications like Outlook or Word might make use of secondary threads so that they can be doing things like re-indexing your mail while you're typing a message."

Intel's James Reinders also had some ideas about which applications will lend themselves best to multi-threading on multi-core processors. "Any program which processes a lot of data is generally quite easy to optimise for parallelism. Concurrent processing of data is the easiest to find, and it's usually easy to modify a program to achieve this.

"We call this type of parallelism data-parallelism. Programs which process photos, videos or scientific data tend to exploit their data-parallelism. The other type of parallelism is task-parallelism. This is definitely harder to grasp for almost everyone, but the concept is simple: doing multiple things at once. But figuring out the things in a program which can be done at the same time eludes people all the time."
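To make the distinction concrete, here's a small sketch using Python's concurrent.futures module; the photo filter and the two background tasks are stand-ins of my own, not examples from Intel:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def sharpen(photo):
    """Stand-in for per-item work, e.g. applying a filter to one photo."""
    return photo.upper()

def fetch_mail():
    return "mail fetched"

def check_spelling():
    return "spelling checked"

if __name__ == "__main__":
    photos = ["img_001", "img_002", "img_003", "img_004"]

    # Data-parallelism: the same operation applied to many independent
    # pieces of data, so each item can run on a different core.
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(sharpen, photos)))

    # Task-parallelism: different jobs that happen to be independent
    # of one another, run at the same time.
    with ThreadPoolExecutor() as pool:
        jobs = [pool.submit(fetch_mail), pool.submit(check_spelling)]
        print([job.result() for job in jobs])
```

The first pool spreads identical work across the data; the second runs unrelated jobs side by side, which is exactly the kind of decomposition Reinders says people struggle to spot.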

AMD might have part of the solution. "The need for better parallel programming techniques is real and increasing," Chuck Moore explained. "AMD clearly understands and is taking action to solve the challenges of programming for parallelism.