Solid state drives: all you need to know

SSDs are still in their infancy, but when manufacturers get them right they promise to be lightning fast

With processors and graphics chips becoming faster seemingly by the day, the comparatively sluggish progress of mechanical hard disks has become a serious drag.

Sure, hard drives have gotten a lot bigger in recent years, but they're barely any faster. As a result, you might achieve great frame rates in-game, but you'll still be waiting just as long for those tedious level loads.

Enter the solid state drive. By replacing conventional hard disks based on spinning magnetic platters with integrated circuits, SSDs were supposed to be the final piece of the PC performance puzzle.

At last, storage would benefit from the ever smaller, faster and cheaper electronics that enable CPUs and GPUs to pretty much double in performance and all-round prowess every couple of years.

Factor in better reliability, less noise and even reduced power consumption and a shift to solid state technology for storage is quite simply a no-brainer. Unfortunately, it hasn't quite worked out that way. In fact, the early history of SSD technology has been a big, smelly letdown.

Specifically, SSDs have often flattered to deceive, with great out-of-the-box performance rapidly degrading into a laggy, stuttering mess with extended use. Matters have been made worse by confusion over firmware updates and a general lack of transparency about the problems afflicting SSDs and the steps being taken to address them.

The SSD lottery

In short, buying an SSD currently feels like a total lottery. You're not quite sure what you're getting and whether it's going to keep on working properly. With all that in mind, what exactly has been holding solid state storage technology back, what's being done about it and when will it be safe to go solid state?

To understand why SSDs have been slightly sucky, you have to appreciate the foibles of the flash memory that provides the storage. The first problem springs from the odd fact that flash memory wears out with use. Write and erase data from a flash memory cell enough times and it'll eventually become unresponsive.

Typical multi-level cell memory, as used in consumer SSDs, has a life expectancy of around 2,000 to 10,000 write-and-erase cycles. The solution is so-called wear levelling. The idea here is intelligent management of the available cells.

The drive's controller chipset keeps track of cell usage and adjusts write and erase calls with a view to spreading wear evenly. The point is that in an attempt to keep memory cells healthy, commonly used data sets may have to be regularly shunted around the drive.

That in turn translates into disk activity that isn't directly related to getting data in and out of the drive. And that means less performance during periods of peak disk activity.
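
To make that shunting-around a little more concrete, here's a rough Python sketch of the idea, assuming a toy drive with just eight blocks. The class and variable names are purely illustrative and aren't taken from any real controller's firmware.

class ToyController:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks       # wear tracked per physical block
        self.mapping = {}                          # logical block -> physical block
        self.free_blocks = set(range(num_blocks))

    def write(self, logical_block, data):
        old = self.mapping.get(logical_block)
        if old is not None:
            self.free_blocks.add(old)              # old location becomes reusable
        # Pick the least-worn free block so erase cycles are spread evenly
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        self.erase_counts[target] += 1             # reusing a block costs an erase cycle
        self.mapping[logical_block] = target       # the data would be programmed here

drive = ToyController(num_blocks=8)
for _ in range(100):
    drive.write(0, b"the same file, rewritten over and over")
print(drive.erase_counts)                          # wear ends up spread across all eight blocks

Even though the host keeps rewriting the same logical address, the physical writes get spread across every block, which is exactly the sort of behind-the-scenes housekeeping that eats into peak performance.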

Writing data to SSDs

The other major issue involves the mechanics of how data is written and stored in flash memory. Basically, memory cells are organised into blocks, typically 512KB in size. The problem is that data must be written a whole block at a time, even if the total amount is much less than 512KB.

In other words, even when writing a small amount of data, perhaps just a few kilobytes, an entire memory block is reserved. That's just fine when you have lots of spare blocks. But when you don't, it becomes necessary to reuse partially filled blocks. And that requires the contents of a block to be copied to cache before adding the new data and then writing the whole lot back into the block. What a palaver.

If that wasn't bad enough, current SSDs generally don't actually erase blocks when data is deleted from them. Blocks are simply marked as available for writing by the file system. Erasing only happens when the time comes to refill the blocks with data. Put it all together and you have a perfect storm of stuttering disk access.
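
For the curious, here's a rough Python sketch of that write-and-erase dance, assuming a toy block of 512KB split into 4KB pages. Again, the names and structure are illustrative only, not a description of any real drive's firmware.

PAGES_PER_BLOCK = (512 * 1024) // (4 * 1024)       # 512KB block, 4KB pages

class ToyBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK      # None = empty, bytes = live data
        self.stale = False                         # deleted by the file system, not yet erased

    def mark_deleted(self):
        self.stale = True                          # a 'delete' merely flags the block

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK      # flash clears a whole block at a time
        self.stale = False

    def write_page(self, index, data):
        if self.stale:
            self.erase()                           # the deferred erase lands on this write
        if self.pages[index] is not None:
            cached = list(self.pages)              # copy the block out to cache...
            cached[index] = data                   # ...merge in the new data...
            self.erase()                           # ...erase the whole block...
            self.pages = cached                    # ...and write the lot back
        else:
            self.pages[index] = data               # plenty of room: a cheap, simple write

block = ToyBlock()
block.write_page(0, b"a few KB of data")           # cheap: the block was empty
block.write_page(0, b"the same data, updated")     # expensive: copy, erase and rewrite 512KB
block.mark_deleted()                               # 'deleting' just flags the block as stale
block.write_page(1, b"something new entirely")     # the erase bill gets paid here instead

The last line is the crucial one: the cost of erasing a block isn't paid when you delete a file, it's paid later, in the middle of a completely unrelated write. That's where the stutter comes from.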

Imagine requesting lots of small, individual disk writes. Each one might require juggling all kinds of partially filled and marked-for-deletion blocks. We therefore put it to you that it's easy to see why SSD performance goes down the crapper as free capacity dwindles. What, then, is the answer?

Improved wear-levelling algorithms help. Intel's X25-M is a case in point. Early examples of that drive suffered from rapid and rather hideous performance degradation, but Intel has since released new firmware with improved wear levelling that does a very good job of cleaning up performance.

As for the problems relating to write and erase methods as capacity is used up, there are a number of different efforts in various stages of development, some more effective than others (see the "Give your SSD a TRIM" and "Heal the pain" sections on the next page).

But the overall moral is that the race is on and progress is being made. It's just possible that a year from now, all these SSD woes will be but a distant memory.
