Thanks for the memory: the story of storage

Punched cards and paper tape were the most obvious low-tech solutions for data and program input, and they were used almost universally. In fact, the original Colossus machines at Bletchley Park used paper tape input, clocking the machine from the tape itself to prevent synchronisation problems.

As mainstream computer use exploded, lowly data entry clerks would transfer information from written forms, punching holes into cards and paper tape ready for loading into the computer. This method was so cheap that paper-based storage survived well into the late 1970s. In fact, my first experience of computing at secondary school was punching cards for my O-level Computer Studies assignments and sending them to Manchester University to be loaded into one of its computers.

Punched cards had a notorious drawback: each 80-column card held a single statement, so a finished program was a stack of punched cards. Problems arose if you dropped or knocked over the stack, which then had to be put back in order before it could be fed through the computer's card reader.

Going magnetic

Magnetic tape came into use from the early 1950s as a general data-storage medium. Though it was fast, could store far more data than paper tape and was rewritable, it still only allowed serial access. This meant that if you wanted to insert a record into the data stored on a tape, you generally read the data on one tape drive and wrote it to a tape mounted on another.

At the appropriate point, you inserted the new record into the data stream. Though a large tape library gave computers access to huge amounts of backing storage, what was also required was a form of storage that didn't waste time waiting for an operator to fetch it from the library, and which was also truly random access.
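The copy-and-splice process described above can be sketched in a few lines. This is purely illustrative, assuming fixed-order records sorted by key, with Python lists standing in for the two tape drives:

```python
# Sketch of inserting a record into serially accessed tape data.
# Lists stand in for the source and destination tape drives;
# records are assumed to be stored in key order.

def insert_record(source_tape, new_record, key=lambda r: r):
    """Copy the source tape to a fresh output tape, splicing in
    new_record at the point that keeps the records in key order."""
    output_tape = []
    inserted = False
    for record in source_tape:                 # serial read from drive 1
        if not inserted and key(new_record) < key(record):
            output_tape.append(new_record)     # splice in the new record
            inserted = True
        output_tape.append(record)             # serial write to drive 2
    if not inserted:                           # new record sorts last
        output_tape.append(new_record)
    return output_tape

print(insert_record([10, 20, 40], 30))  # → [10, 20, 30, 40]
```

Note that the entire tape must be copied just to add one record, which is exactly why truly random-access storage was so badly needed.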

The first solution was magnetic drum storage, which became available from the mid-1950s. Inside the unit, each drum – which was coated with iron oxide – rotated several thousand times per minute. A row of fixed read-write heads – one per track – meant information could be read or written quickly and at will as the drum spun beneath them.

Magnetic drums quickly led to the development of the virtual memory that we still see in use in today's operating systems. Computer manufacturers realised that when a program runs, it doesn't need all of its code or working data in RAM all of the time. Because access to data stored on a drum was fast, RAM could be freed up by copying blocks of memory out to the drum and reading them back only when the operating system required them. Suddenly, computers could have huge 'virtual' memories and run programs larger than the physical RAM would normally allow.
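The swap-in/swap-out idea can be captured in a toy sketch. This is a simplification assuming a FIFO eviction policy (real systems use smarter replacement algorithms); the `PagedMemory` class and its names are invented for illustration:

```python
# Toy sketch of demand paging between a small RAM and a drum.
# FIFO eviction is assumed purely for simplicity.

class PagedMemory:
    def __init__(self, ram_frames):
        self.ram_frames = ram_frames   # how many pages fit in RAM
        self.ram = {}                  # page number -> contents
        self.drum = {}                 # pages swapped out to the drum
        self.order = []                # FIFO load order of RAM pages

    def access(self, page):
        if page in self.ram:           # page hit: nothing to do
            return self.ram[page]
        # Page fault: if RAM is full, copy the oldest page out...
        if len(self.ram) >= self.ram_frames:
            victim = self.order.pop(0)
            self.drum[victim] = self.ram.pop(victim)
        # ...then bring the wanted page in (from the drum, or fresh).
        self.ram[page] = self.drum.pop(page, f"data-{page}")
        self.order.append(page)
        return self.ram[page]

mem = PagedMemory(ram_frames=2)
mem.access(0); mem.access(1); mem.access(2)  # page 0 evicted to drum
print(sorted(mem.ram))    # → [1, 2]
print(sorted(mem.drum))   # → [0]
```

The program sees three pages of memory even though only two fit in RAM at once, which is the essence of virtual memory.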

Into the light

The physical properties of materials have been fertile ground for computer scientists looking to increase both storage density and speed of data access. Now scientists are investigating the possibility of using light itself as a storage medium in the not-too-distant future.

The data density offered by optical storage dwarfed that of PC hard disks when the Compact Disc was introduced in 1982. At a time when hard disks still held around 20MB, the first CD-ROMs could store 650MB. Though magnetic disks have since regained the storage crown, optical disks could eclipse them once again. As scientists learn more about how to exploit light's properties, the possibilities for not only storage and communications but also computing itself are becoming increasingly apparent.

Upcoming light-based storage methods easily outstrip the capacity of current hard disks. Holographic techniques, for example, promise optical disks capable of holding 3.9TB. With research into light-based microprocessors advancing every year, we may finally witness the "white heat of technology" talked of in the 1960s. Back then, no one could have predicted where the story of storage has already led us – and who knows where we'll be in 2050.


First published in PC Plus Issue 277