The advent of touchscreen devices has transformed personal computing, and according to Gartner's Van Baker it "will be one of the most disruptive technologies of the decade". Research director Angela McIntyre elaborates: "The immediacy and simplicity of the multitouch user interface is compelling to users, regardless of their technical proficiency."

The combination of touch input and touch-oriented interfaces like Windows 8, Apple's iOS, Microsoft's Surface or Google's Android is powerful because it does something important: it hides the computer. With traditional desktop operating systems there's a layer of technology between you and what you want to do: the OS is a middleman whose job is to interpret what you want the computer to do.

Touch-based operating systems still do that, but they do it invisibly. Take photos, for example. When you're browsing a photo library, your natural impulse is to point at the one you want to see next. To achieve that on a traditional desktop OS there's a lot of mousing and clicking, but with a touch-based OS you swipe, point or prod.

Tablet-toting tots have become a YouTube cliche, but there's no doubt that when it's done properly, a touch-based OS means computing isn't something you have to learn, it's something you just do.

Hiding the computer is significant for all kinds of reasons. As Angela McIntyre points out, it opens up computing to people who are "less familiar with technology - often people who are less educated and with lower incomes", it's much more accessible for people with disabilities that limit their dexterity, and it's useful in countries where the native language uses non-Western characters - scribbling on a touchscreen is much more intuitive than messing around with a keyboard. "Touch interfaces will enable more people with disabilities to be included in the workforce," McIntyre writes.

IBM's Human Ability and Accessibility Center develops innovations in touch interfaces which, when combined with other assistive technologies, make it possible for a wider range of people to earn an income.

"A key characteristic of emerging devices is 'multi-modal interfaces' that seamlessly link multiple input options, like voice, touch, pressure and gestures," McIntyre writes. She predicts that gesture control will be an increasingly common control mechanism: "Gartner has a Strategic Planning Assumption that by 2016, half of consumers in mature markets will wave more frequently to their digital devices than to their friends."

Microsoft's Kinect is a key technology here, especially now that it's coming to Windows, but even basic camera hardware can track simple gestures like waving or grabbing. Intel reckons speech will be important too, and it's probably right - although we suspect that massively powerful cloud-based recognition will have the edge over local software - and of course, keyboards will be around for a long time too.
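Camera-based wave detection can be surprisingly simple in principle. The toy sketch below is an illustration only, not Kinect's actual algorithm: it treats a "wave" as several horizontal direction reversals of a tracked hand, and assumes some upstream tracker already supplies one x-coordinate per frame.

```python
# Toy wave detector (illustrative only - not Kinect's real algorithm).
# Assumes a hand tracker supplies a stream of x-coordinates, one per frame.

def count_reversals(xs, min_move=10):
    """Count changes of horizontal direction, ignoring jitter below min_move pixels."""
    direction = 0          # -1 = moving left, +1 = moving right, 0 = unknown yet
    reversals = 0
    last = xs[0]
    for x in xs[1:]:
        delta = x - last
        if abs(delta) < min_move:
            continue       # too small to count as deliberate movement
        new_dir = 1 if delta > 0 else -1
        if direction and new_dir != direction:
            reversals += 1
        direction = new_dir
        last = x

    return reversals

def is_wave(xs, min_reversals=3):
    """A wave is several back-and-forth sweeps of the hand."""
    return count_reversals(xs) >= min_reversals

# Hand sweeping left-right-left-right: a wave.
print(is_wave([100, 160, 220, 160, 100, 160, 220, 160, 100]))  # True
# Hand drifting steadily in one direction: not a wave.
print(is_wave([100, 130, 160, 190, 220]))  # False
```

A real system layers depth data, skeletal tracking and heavier noise filtering on top of an idea like this, but even this bare version shows why basic camera hardware is enough for simple gestures.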

"Touch can be more intuitive than a mouse, particularly for content consumption and navigation," Rogers says. "However, people also want to be creative and productive and right now the keyboard is still a better data entry device than a touch screen."

The eyes have it


One of the most intriguing interface ideas is the stuff of science fiction: eye tracking. Eye tracking specialist Tobii Technology already makes a range of devices to help people with mobility or literacy problems use PCs, and it believes the technology is relevant to mainstream users too.

To demonstrate it in action, it has built two unusual bits of kit: an arcade cabinet with a game of Asteroids, and a Windows 8 PC - both controlled by eye movements.

Sara Hyléen is Tobii's corporate marketing manager. She believes eye tracking is more natural than multi-touch on a typical PC. With multi-touch, "you still have to take the intermediary step of finding the mouse pointer and moving it to the place on the screen where you want to click. However, with [eye tracking] you simply don't have to take that extra step. Since you are already looking at the link or item that you want to click on, you just have to give the command."

Tobii's technology is called Gaze, and it's designed to speed things up by working in conjunction with your existing input devices. "You just give any command on the track pad (click, swipe, zoom, and so on) while focusing on the target with your gaze. When zooming you will auto-centre on the right spot, when clicking you will get super-fast response; when swiping you will get the feeling that you are virtually reaching out to the screen without having to lift your arm to touch it."
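The principle is easy to sketch. The hypothetical snippet below is our own illustration, not Tobii's actual SDK: whenever a command arrives from the trackpad, it's simply executed at the current gaze point, so the pointer never has to travel across the screen.

```python
# Illustrative sketch of gaze-assisted pointing (not Tobii's real API).
# A trackpad command is applied at wherever the user is currently looking.

class GazeAssistedPointer:
    def __init__(self, gaze_source):
        self.gaze_source = gaze_source   # callable returning the (x, y) gaze point
        self.pointer = (0, 0)

    def on_command(self, command):
        """Warp the pointer to the gaze point, then run the command there."""
        self.pointer = self.gaze_source()
        return (command, self.pointer)

# Simulated eye tracker: the user is looking at (850, 120).
pointer = GazeAssistedPointer(lambda: (850, 120))
print(pointer.on_command("click"))   # ('click', (850, 120))
```

The point of the design is the one Hyléen makes: the "find the pointer, move the pointer" step disappears, because the target location is read from the eyes at the moment the command is given.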

Eye tracking isn't about replacing the mouse, it's about speeding up other features. For example, you might bring up a screen showing open application and document windows and select the one you want by looking at it.

"The neat thing is that this can be applied not just to open documents, but to any layer in your information structure," Hyléen says. "Files, folders, programs, all the way up to slides in a slideshow presentation, Excel spreadsheets or layers in your photo editing software."

Hyléen suggests that CAD, graphic design, imaging and gaming would be good candidates for eye tracking. If you're thinking it sounds like Kinect for your eyeballs, you'd be right - and like Kinect, you'll need dedicated hardware.

"To get the accuracy and robustness needed for an interactive Gaze interface you have to use a dedicated eye tracking system," Hyléen says. Tobii's hardware uses near-infrared illuminators, image sensors and processing hardware. "A webcam cannot offer this capability."

There's another input device that you might not even recognise as such: simple sensors. "Gesture and voice control are more akin to the way humans communicate, and so also provide options for additional interaction with the device," Rogers says. "Add to the mix sensor-based technologies that allow proactive management of our environments, and the options for the device to become increasingly personalised and contextually aware are also increased. What we're talking about here is personalised computing."

Your PC will be the centre of a wider world, connecting wirelessly to all kinds of devices. It might even keep an eye on your health.


The latest iteration of Bluetooth wireless communication, Bluetooth Smart, introduces a new kind of connected device: smart monitors.

Bluetooth Smart's combination of decent range and ultra-low power consumption makes it well suited to single-purpose devices like heart rate monitors, pedometers and other simple bits of kit, or to connecting domestic appliances for home monitoring and automation.
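To give a flavour of how simple these devices are at the protocol level: a Bluetooth Smart heart rate strap notifies the host using the GATT Heart Rate Measurement characteristic (0x2A37). Its first byte is a flags field, and bit 0 says whether the beats-per-minute value that follows is an 8-bit or a 16-bit little-endian number. A minimal parser might look like this (a sketch only; real payloads can also carry optional energy-expended and RR-interval fields):

```python
# Minimal parser for the Bluetooth GATT Heart Rate Measurement
# characteristic (0x2A37). Sketch only: optional fields (energy expended,
# RR intervals) signalled by other flag bits are ignored here.

def parse_heart_rate(payload: bytes) -> int:
    """Return the heart rate in beats per minute from a 0x2A37 payload."""
    flags = payload[0]
    if flags & 0x01:                          # bit 0 set: 16-bit value
        return int.from_bytes(payload[1:3], "little")
    return payload[1]                         # bit 0 clear: 8-bit value

print(parse_heart_rate(bytes([0x00, 72])))          # 72  (8-bit format)
print(parse_heart_rate(bytes([0x01, 0xA0, 0x00])))  # 160 (16-bit format)
```

A couple of bytes per notification is all a heart rate strap ever sends, which is exactly why the ultra-low-power radio is such a good fit.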

It's important to take automated home promises with a big pinch of salt, as we've been promised such wonders since the 1930s. However, now that the cost of adding sensors and automation to domestic goods is plummeting, we might just see the Android-controlled lightbulbs LightingScience demoed at last summer's Google I/O conference.

The problem with entirely automated homes is incompatible standards. Will your Samsung PC play nice with a Zanussi fridge, Sony TV or Hotpoint cooker, or will we end up with a mess of supposedly smart kit that only talks to a few selected partners?

History suggests the latter - and the rise of multiple, closed ecosystems from the likes of Apple and Amazon suggests that history's right.