Beyond the touchscreen: interfaces of the future


We're living in interesting times. Tech firms want to take over our TVs, our phones are more powerful than some recent PCs, and we can control games consoles through the medium of dance.

New interfaces are all around us, from touch screens to augmented reality, and the way we interact with technology is being transformed. But which interfaces are genuine leaps forward and which are digital dead ends? What makes a good user interface anyway?

Microsoft Kinect

Forget the ropey speech recognition systems of the 1990s: voice control is back, and this time it works. That's partly because modern systems specialise, partly because hardware is much more powerful, and partly because cloud computing allows remote processing and real-time results.

"The way we interact with technology is becoming more natural, allowing our devices to work on our behalf instead of just at our command," McCarthy says. "You can already envision the world we're imagining through Microsoft research projects like a prototype of an automated receptionist; Microsoft LightSpace, a technology prototype showing the potential of using depth-sensing cameras to naturally interact with any surface; and Project Gustav, a realistic painting prototype that lets artists become immersed in the digital painting experience, just as Kinect helps people become the controller in the gaming experience."

The magic touch

Touch control has been around for a long time. What's different about today's touch technology is that touch has become multi-touch: instead of prodding with a stylus we're pinching and pulling with one, two or ten fingers. That means tablets can be typewriters, pianos, canvases, or anything else we fancy playing with.

Done well, multi-touch removes abstraction - instead of moving a device like a mouse to point at something, you just point at it; instead of clicking on piano keys or trying to remember which keyboard key corresponds to each note, you just play the note.

Fitts's Law

Multi-touch is a good illustration of Fitts's Law, which was proposed by Paul Fitts in 1954. The law states that the time needed to move to a target area is a function of the distance to and the size of the target. In effect, that means big icons are easier to hit than little ones, top-of-screen menus are easier to click on than top-of-window ones and pop-up menus are faster than pull-downs.
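Fitts's Law is usually written as a simple formula relating movement time to the distance and size of the target. The sketch below uses the common Shannon formulation, T = a + b * log2(D/W + 1); the constants a and b are device-dependent, and the values here are purely illustrative.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to hit a target under Fitts's Law
    (Shannon formulation): T = a + b * log2(D/W + 1).

    distance: distance to the centre of the target
    width: size of the target along the axis of motion
    a, b: device-dependent constants (illustrative values only)
    """
    return a + b * math.log2(distance / width + 1)

# A big, nearby target is quicker to hit than a small, distant one.
near_big = fitts_time(distance=100, width=100)  # 1 bit of difficulty
far_small = fitts_time(distance=800, width=16)  # roughly 5.7 bits
print(near_big < far_small)
```

The log term (the "index of difficulty") is why doubling an icon's size helps more than halving the pointer's travel distance by the same ratio hurts: both enter the formula only through the ratio D/W.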

There's more to effective UIs than big icons, of course. Good UIs remove complexity and obstacles, so for example, a mouse-and-windows OS is more intuitive than a command line interface and a multi-touch OS can be more intuitive than a mouse-based one.

Designers can do several things to streamline interfaces. They can use metaphors to make things more obvious - for example, we all know what desktops are, what control panels do and what recycle bins are for - and they can use icons and nested menus to reduce visual clutter. They can use context-sensitive menus and hinting, where the interface offers visual clues such as tooltips, and they can add indicators to icons.

However, if you keep adding features to the underlying system, eventually you'll run out of tweaks. As Microsoft program manager Jensen Harris recalls, by Office 2003 the UI was beginning to feel bloated, "like a suitcase stuffed to the gills with vacation clothes".

No wonder - it was essentially the same UI that Office had in 1989. As Harris explains, "There's a point beyond which menus and toolbars cease to scale well. A flat menu with eight well-organised commands on it works just great; a three-level hierarchical menu containing 35 loosely-related commands can be a bit of a disaster."

Microsoft redesigned the Office UI and the result was the Ribbon, which makes Office less intimidating and its features easier to find. It annoyed power users, though, demonstrating that when it comes to UIs, you can't please everybody.