The arguments about interfaces might still be going on, but the touchscreen has won.
If Windows 8 isn't flying off the shelves, at least some of the blame belongs to how few PC makers believed Microsoft when it said buyers would want a notebook with a touchscreen. All Haswell Ultrabooks will have touchscreens when they launch in a few months.
Even the Chromebook Pixel has a touchscreen (leaving the Mac as the last holdout). We're hoping some tablet makers pick up Synaptics' touchscreen, which sees when your thumb is on the screen and reflows text around it so you can keep reading.
That's this year though; in the future, we're going to control our systems with a flick of the wrist, a roll of the eye or even by thinking at them - and yes, by shouting a bit as well. The technologies to give us interfaces that feel more natural than mouse, keyboard and touchscreen are on the way.
Just smile and wave
Voice interfaces aren't new; we've been dictating documents and shouting at automated phone systems for years. Until voice-driven Google Glass arrives, Siri in iOS and voice recognition in Kinect for controlling Xbox are about the state of the art.
Intel is trying to move that on by including Nuance's Voice Assistant in the 'perceptual computing' kit it's seeding developers with. We're going to see voice in home entertainment systems as well; Voco showed off voice control for iOS and Android apps that lets you search for the track you want to hear on its multi-room music streaming players. The hard part is getting unusual music names right; the system gets 50 Cent but not Florence + the Machine.
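Part of why unusual names trip these systems up is simple string matching: a speech engine hears "florence and the machine", never the "+" in the official name. A toy sketch of one workaround - normalising symbols before fuzzy-matching against the catalogue - with illustrative names and thresholds of our own choosing, not how Voco actually works:

```python
import difflib

def match_artist(heard, catalogue, cutoff=0.6):
    """Fuzzy-match a transcribed artist name against a music catalogue.

    Speech engines rarely emit symbols, so normalise '+' and '&' to
    'and' and lowercase everything before comparing.
    """
    def norm(name):
        return name.lower().replace("+", "and").replace("&", "and")

    # Map normalised names back to the originals in the catalogue
    normed = {norm(name): name for name in catalogue}
    hits = difflib.get_close_matches(norm(heard), list(normed),
                                     n=1, cutoff=cutoff)
    return normed[hits[0]] if hits else None

catalogue = ["50 Cent", "Florence + the Machine"]
print(match_artist("florence and the machine", catalogue))
# Matches "Florence + the Machine" despite the '+'
```

Real systems lean on phonetic models rather than text distance, but the catalogue-normalisation step is the same idea.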
There are issues with talking to technology, from background noise to strange looks from those around you. How about just looking at it? MIT spinout Affectiva can already measure your emotional reaction to adverts by scanning your micro-expressions through your webcam; you might say you didn't find the Samsung advert mocking iPhone users standing in line funny, but the Affdex system could tell whether you were smiling when you watched it or not even paying attention.
More useful and less creepy is Tobii's gaze-tracking system, which works alongside your mouse and keyboard. Want to zoom in on a map? When we tried out the Tobii prototype at CES this year, instead of painstakingly centring the map on screen before clicking to zoom (and then dragging it back to the right place), the map automatically zoomed in on the area we were looking at.
We were also able to control a PowerPoint presentation, flicking back and forth between slides (and to blow up a few asteroids in a game). Using your eyes doesn't replace the mouse or keyboard; it's the combination that works so well. Think of all the times you forget to click on the document you're looking at and end up typing into the last window you were using.
Tobii will turn its prototype into a peripheral you can buy by the end of 2013, but the developers are sure they can make it much smaller and they're talking to OEMs about building it right into a monitor or even a laptop.
Kinect-style gestures are on their way to laptops as well, although probably not using the pricey 3D infrared cameras in Kinect or Intel's USB 3D camera for developers just yet (although we quite like Intel's idea of using a 3D camera and muscle movements to replace passwords with combined facial and voice recognition).
Those sensors cost about $70, according to Amnon Shenfeld of eyeSight; the CMOS sensors in webcams cost more like $1 – cheap enough to put two of them in a laptop to use for gesture tracking. Like the human eye, having two sensors means you can get an approximation of 3D; good enough for gestures.
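The reason two cheap sensors are enough: once the same feature - a fingertip, say - is matched in both images, its horizontal shift between them (the disparity) gives depth via the standard pinhole-stereo relation Z = f·B/d. A minimal sketch; the focal length and baseline below are illustrative assumptions, not eyeSight's figures:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d.

    focal_px: camera focal length in pixels
    baseline_m: distance between the two sensors in metres
    disparity_px: horizontal shift of the feature between images
    """
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the views")
    return focal_px * baseline_m / disparity_px

# Hypothetical laptop: 700px focal length, sensors 6cm apart,
# a fingertip shifted 42 pixels between the two images:
print(depth_from_disparity(700, 0.06, 42))  # -> 1.0 (metre)
```

Nearby objects produce large disparities and distant ones small ones, so precision falls off with range - fine for hands waving in front of a laptop, which is exactly the gesture use case.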