Has solid state finally come of age?

We test three of the latest hard disk replacements

Solid state drives

Solid state technology based on flash memory offers enormous advantages for data storage. Well, that's the theory, anyway. The supposed benefits are clear: with no moving parts, solid state drives (SSDs) should deliver a boost in robustness compared with conventional magnetic disks.

Equally, flash memory cells traditionally have the edge over spinning platters when it comes to shunting data around, so performance should also benefit. And then there are the minor matters of reduced power consumption and essentially silent operation.

In other words, configuring your PC with an SSD seems like a no-brainer. Unfortunately, the early history of solid state mass storage has been inauspicious. The first problem is pricing. SSDs are very expensive, particularly given the smaller storage capacity they offer. In an age when image and video files are pushing storage demands into terabyte territory, the sub-100GB capacity of sensibly priced SSDs doesn't cut it.

But the biggest issue has been patchy performance. This is partly a consequence of balancing the competing demands of capacity and speed. The fastest type of flash memory is composed of single-level cells (SLC), so-called because each cell stores just one bit of data.

Multi-level cells (MLC) store more than one bit per cell and therefore deliver much higher storage density. However, with that capacity increase comes a penalty in terms of data writing performance. SSDs aimed at consumers are almost exclusively based on MLC flash and have therefore suffered from conspicuously asymmetrical read and write speeds, with the latter often less than half of the former.
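The capacity trade-off is simple arithmetic. As a rough sketch (the die size below is a made-up figure, not any real chip's specification), doubling the bits stored per cell doubles the capacity of the same silicon:

```python
# Illustrative arithmetic only: how bits-per-cell drives capacity.
# CELLS_PER_DIE is a hypothetical figure, not a real chip's spec.
CELLS_PER_DIE = 64 * 10**9

def capacity_gb(bits_per_cell: int) -> float:
    """Capacity in gigabytes for a die with the given cell type."""
    total_bits = CELLS_PER_DIE * bits_per_cell
    return total_bits / 8 / 10**9  # bits -> bytes -> gigabytes

slc_capacity = capacity_gb(1)  # single-level cell: 1 bit per cell
mlc_capacity = capacity_gb(2)  # multi-level cell: 2 bits per cell

print(slc_capacity, mlc_capacity)  # MLC doubles capacity on the same die
```

The catch, as described above, is that packing multiple voltage levels into one cell makes writes slower and less reliable, which is why MLC drives show those lopsided read/write figures.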

Read TechRadar's OCZ Vertex review


LIMITED LIFE: SSDs currently have a limited life but new techniques are attempting to improve that

Disappointing as that was, by itself it wouldn't have been a deal breaker. After all, half of extremely fast is still fairly quick. No, the thing that has really prevented SSDs from delivering on their promise is inconsistency, not outright pace. The reasons for this are complex.

The first factor is the need for wear-levelling algorithms. Put simply, flash memory wears out with use, eventually causing cells to fail. The number of erase-write cycles before failure varies, but consumer-class MLC flash memory is typically pitched around the 10,000 mark. Given the huge data traffic generated by modern multimedia PCs, that's a major drawback to the technology.

Levelling wear

The solution is wear-levelling, which minimises the load on a cell by distributing data writes across the entire drive, regardless of how full it is. Combined with a dollop of spare cells to provide replacements while retaining capacity, the useful life of an SSD can be vastly extended.

Without wear-levelling, a busy drive may expire in six months. With it, life expectancies rise to five to 10 years. However, wear-levelling also demands that commonly used data must be constantly shunted around the drive, which in turn leads to fragmentation and compromised performance. This is exactly the problem that afflicted initial drives, such as Intel's X25-M.
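The principle can be sketched in a few lines. This toy model (illustrative only, not any manufacturer's actual algorithm) directs each write to the least-worn block, so erase cycles accumulate evenly across the drive instead of hammering the same spots:

```python
# Toy wear-levelling sketch: not any vendor's real algorithm.
# Each write goes to the least-worn block, spreading erase cycles evenly.

class ToyFlash:
    def __init__(self, num_blocks: int, endurance: int = 10_000):
        self.erase_counts = [0] * num_blocks  # erase cycles per block
        self.endurance = endurance  # cycles before a block fails

    def write(self) -> int:
        """Write one block's worth of data to the least-worn block."""
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

drive = ToyFlash(num_blocks=4)
for _ in range(8):
    drive.write()
print(drive.erase_counts)  # wear spread evenly: [2, 2, 2, 2]
```

Without the `min` selection, repeated writes to the same logical address would exhaust one block's 10,000 cycles while the rest sat idle; with it, the whole drive ages in step. The cost is exactly the data-shuffling and fragmentation described above.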

Read TechRadar's Intel X-25M review


EARLY ADOPTION: SSDs have suffered with issues over long term performance

The other major factor dragging SSD performance down over time involves the internal structure and hierarchy of flash memory. Simply put, each flash memory chip is divided into 'blocks', and each block is in turn composed of 'pages'. The specifics vary, but a typical example might be 4kB of data per page and 128 pages per block.

In this scenario, a single block has a capacity of 512kB. This matters because flash memory is written a page at a time but can only be erased a whole block at a time, and a page cannot be overwritten in place without first erasing the block that contains it. Data doesn't always come in perfectly sized chunks, so blocks are often left only partially filled following a write cycle. While the drive is relatively empty, fresh data can simply go to unused blocks and performance doesn't suffer.

However, as the drive fills up, data will eventually be written to partially used blocks. When that happens, the entire block must be copied to the drive's cache memory before being erased and rewritten with a combination of the existing and new data. Needless to say, this process takes much longer than writing to an empty block.
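The penalty is easy to see with some back-of-envelope numbers. In this toy model (the per-operation costs are arbitrary illustrative units, not measured figures), writing four pages into an empty block is cheap, while squeezing the same four pages into a partially used block triggers the full read-erase-rewrite cycle:

```python
# Toy model of the partial-block rewrite penalty.
# Costs below are arbitrary illustrative units, not measured timings.

PAGE_SIZE_KB = 4
PAGES_PER_BLOCK = 128
BLOCK_SIZE_KB = PAGE_SIZE_KB * PAGES_PER_BLOCK  # 512kB per block

PAGE_READ_COST = 1    # cost to read one page into cache
PAGE_WRITE_COST = 2   # cost to program one page
BLOCK_ERASE_COST = 50  # cost to erase an entire block

def write_cost(pages_to_write: int, block_is_empty: bool) -> int:
    """Cost of writing `pages_to_write` pages into one block."""
    if block_is_empty:
        return pages_to_write * PAGE_WRITE_COST
    # Partially used block: read the existing pages into cache,
    # erase the whole block, then rewrite old and new data together.
    return (PAGES_PER_BLOCK * PAGE_READ_COST
            + BLOCK_ERASE_COST
            + PAGES_PER_BLOCK * PAGE_WRITE_COST)

print(BLOCK_SIZE_KB)                        # 512
print(write_cost(4, block_is_empty=True))   # 8 units
print(write_cost(4, block_is_empty=False))  # 434 units -- far slower
```

The exact ratio depends on the drive, but the shape of the problem is the same: once empty blocks run out, every small write can balloon into a 512kB housekeeping job.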

The combination of flawed wear-levelling algorithms and the overhead of erasing and rewriting whole blocks can take a bite out of performance. Taking Intel's X25-M as an example, following several months of intensive use we found write performance had dropped from around 80MB/s to just 30MB/s, while read rates plummeted from 250MB/s to just 60MB/s.

If that sounds bad, the user experience was worse. The drive became increasingly laggy during certain types of workload, such as software installations.

Coming of age?

Since these problems are related to the inherent nature of flash memory, all SSDs will suffer from them to some extent. Thus the wear-levelling algorithms and memory management of an SSD's controller chip are just as important as the flash memory itself.

Producing good SSD controllers is so difficult that only a handful of companies make them. In any case, drive manufacturers now appear to be well aware of these problems, as all three drives here testify. Intel has released a firmware update for the X25-M that appears to address its increasing sluggishness with heavy use.

Likewise, Samsung delayed the shipment of our PB22-J so that it could be updated with new firmware designed to improve the longevity of the drive's performance.

Read TechRadar's Samsung PB22-J review


FIRMWARE UPDATE: The latest firmware release is meant to address some of the concerns raised, but it's not a magic remedy

The key question is this: have we finally reached the point where SSDs are mature enough to earn an unqualified recommendation? The answer is nearly, but not quite. The drives tested here represent the fastest SSDs available, and for the most part they deliver impressive performance. However, indications remain that degradation still occurs with certain workloads.

We also can't help but notice that the drive that came last in many of our tests – Intel's X25-M – also happened to have been subjected to heavy use over several months, while the others were brand-new. That result is surely no coincidence.
