The hottest PC technology for 2009

The next phase, he suggested, concerns the variability of the production process at very small feature sizes. If the gate oxide is meant to be three atoms thick, one that comes out just two atoms thick behaves hugely differently. This means that on-chip circuitry will have to be used to keep the effects of these physical variations under control.

As an example, Rudy spoke about how to deal with a chip that was designed for a power consumption of 100W but, thanks to this variability, draws anywhere between 85W and 150W. The solution is for the chip to measure its own power consumption and then adjust its own voltage and frequency to keep its power usage within the limits.
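As a rough sketch of that kind of self-regulation – with the power model, step sizes and sensor behaviour all purely illustrative rather than taken from any real part – the feedback loop looks something like this:

```python
# A minimal sketch of the self-regulation described above: the chip samples
# its own power draw and steps voltage/frequency down or up to stay near its
# design budget. The numbers and the power model are purely illustrative.

BUDGET_W = 100.0
MARGIN_W = 5.0

freq_ghz = 3.0
volt = 1.20
measured_w = 150.0            # a "leaky" part from the 85-150W spread

for _ in range(10):           # a few control iterations
    if measured_w > BUDGET_W + MARGIN_W:
        freq_ghz -= 0.1       # too hot: back off one voltage/frequency step
        volt -= 0.02
    elif measured_w < BUDGET_W - MARGIN_W:
        freq_ghz += 0.1       # headroom: claw back some performance
        volt += 0.02
    # crude stand-in for re-reading the on-die power sensor (P ~ f * V^2)
    measured_w = 150.0 * (freq_ghz / 3.0) * (volt / 1.20) ** 2
    print(f"f={freq_ghz:.1f}GHz V={volt:.2f} P={measured_w:.1f}W")
```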

Lauwereins went on to give us a glimpse of an exciting new development – 3D chips. Here, chips are manufactured with copper needles sticking out through the silicon, making contact with the top metallic layer of a chip sitting below. This would provide lots of interconnects between chips.

Since the very first IBM PC, processors have become at least 20,000 times faster, while memory has only become 10 times faster for random access. The reduction in the cell size of memory chips hasn't helped because memory is still connected to a bus that is limited in its width; even with the new triple-channel DDR3, this is only 192 bits.
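A quick back-of-the-envelope check shows where the 192-bit figure comes from and how little headroom it leaves; the DDR3-1066 transfer rate used below is just an illustrative speed grade, not one quoted in the interview:

```python
# Triple-channel DDR3: three independent 64-bit channels.
channels = 3
channel_width_bits = 64
bus_width_bits = channels * channel_width_bits
print(bus_width_bits)                        # 192, matching the figure in the text

# Peak bandwidth = bus width in bytes * transfers per second.
transfers_per_second = 1_066_000_000         # DDR3-1066 (illustrative)
peak_bandwidth_gb_s = bus_width_bits / 8 * transfers_per_second / 1e9
print(round(peak_bandwidth_gb_s, 1))         # ~25.6 GB/s, shared by every core
```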

However, 3D stacking provides thousands of connections between the processor and the memory, with the roadmap showing the number of connections doubling every two years. This technology has the potential to remove the memory bottleneck.

Memory and core futures

The static RAM that's used for cache is also due for an overhaul before too long. Lauwereins suggested that it will become increasingly hard to keep putting more cache on processors. The Core i7 is evidence of this: it has 8MB of L3 cache, a fairly modest increase over the Core 2's maximum of 6MB of L2 cache. But by going 3D, other types of memory can be placed on a separate layer.

DRAM, for example, is very fast but only when lots of consecutive memory locations are accessed – so it could be used in combination with other types of memory to provide fast access for every situation. Today, many different memory types are being evaluated and new technologies are starting to appear. They all have different characteristics from what we are used to, and this could affect the way we build processors.
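A simple way to see why consecutive accesses matter is to model DRAM's row buffer: a read that hits the currently open row pays only the column-access latency, while a scattered read typically pays precharge, row-activate and column-access in turn. The timings below are illustrative DDR3-class values, not figures from the interview:

```python
# Rough model of why DRAM favours consecutive memory locations.
T_CAS_NS = 13.5     # column access (row already open)
T_RCD_NS = 13.5     # row activate
T_RP_NS  = 13.5     # precharge the previously open row

def avg_access_ns(row_hit_rate):
    hit = T_CAS_NS
    miss = T_RP_NS + T_RCD_NS + T_CAS_NS
    return row_hit_rate * hit + (1 - row_hit_rate) * miss

print(avg_access_ns(0.95))   # streaming through consecutive locations: ~14.9 ns
print(avg_access_ns(0.05))   # scattered random accesses: ~39.2 ns
```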

Lauwereins went on to suggest that the increase in the number of cores is a trend that won't continue much further. He thinks that AMD's planned six-core processors and Intel's planned eight-core Core i7 might be about the limit because of the problem of cache coherency. Keeping the caches coherent means making sure that the data in each core's L1 cache is always valid, a job whose cost rises roughly quadratically with the number of cores.
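To see where that quadratic growth comes from, consider a broadcast-style (snooping) scheme in which every core's writes have to be checked against every other core's L1 cache. The counting below illustrates that scaling; it isn't a model of any particular protocol:

```python
# With n cores, each core's writes must be visible to the other n-1 cores,
# so the number of core-to-core interactions grows as n * (n - 1).
for cores in (2, 4, 8, 16, 32):
    interactions = cores * (cores - 1)
    print(f"{cores:2d} cores -> {interactions:4d} pairwise snoop paths")
```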

"To solve this problem," Lauwereins told us, "they'll have to write programs in parallel languages, a skill that can't be mastered overnight." So what about Intel's 80-core processor, which was demonstrated some time ago? "It doesn't have shared memory," he explained, "and it had software specifically written in a parallel language."

Finally, we asked what trends we are going to see in terms of decreasing feature size over the next few years, and why there's still an interest in shrinking features given that the increases in clock speed that once demanded ever-smaller feature sizes have now plateaued. "Today we are at 45nm, we will see 32nm next year, then 22nm and 16nm, and they will just continue unless economics puts an end to it," said Lauwereins.

"Cost is the driving force. Historically, the move to the next process technology halved the cost because the area of silicon halved, but today the cost per square millimetre increases from one process to the next so we no longer get a factor of two saving. This means that economics might eventually bring decreasing feature sizes to an end."


First published in PC Plus, Issue 276