In terms of interaction with technology, we're just at the beginning. While touch technology is well advanced, Microsoft Kinect shows us that NUI (Natural User Interface) technology is rapidly advancing beyond touch, and machine learning is increasingly being employed to make systems more responsive.
We grabbed a few minutes with Professor Andrew Blake, managing director of Microsoft Research in Cambridge. He specialises in the areas of machine learning and computer vision and was part of the team that worked to develop the technology behind Microsoft Kinect.
"From the Microsoft point of view there are great opportunities here in developing the ways people interact, not only with computers, but with devices that are not perceived as computers but that have embedded computing in them," says Blake.
"You can think about the evolution of interaction with computers. We had line-by-line, then GUI, then touch and the next generation is no-touch… but you can still have a powerful interaction with [the computer]."
YOU ARE THE CONTROLLER: Kinect was developed using research from Cambridge
Machine learning is a key area for Microsoft Research, explains Blake: "All the [Microsoft] labs have machine learning competence because it's such a pervasive technology. Suppose you want to do object recognition. People have got burnt over many years… to describe what a cup is, for example.
"You can come up with a set of rules telling you when you have a cup and when you don't, but it turns out it's far too hard to come up with human coded rules – you need to wheel out machine learning and train by example to make these kinds of systems."
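The contrast Blake draws can be sketched in a few lines of code. This is a toy illustration, not how Kinect actually works: the features (an aspect ratio and a made-up "handle score") and the object labels are invented for the example, and a simple nearest-centroid classifier stands in for the far more sophisticated learning Microsoft Research uses. The point is that no rules about cups are written by hand; the behaviour comes entirely from labelled examples.

```python
# Train-by-example in miniature: instead of hand-coding rules for what
# a cup is, fit a nearest-centroid classifier to labelled examples.

def train_centroids(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean)."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical training data: (aspect_ratio, handle_score) per object.
training = [
    ([1.2, 0.90], "cup"),    ([1.1, 0.80], "cup"),    ([1.3, 0.95], "cup"),
    ([3.0, 0.10], "bottle"), ([2.8, 0.05], "bottle"), ([3.2, 0.20], "bottle"),
]
centroids = train_centroids(training)
print(classify(centroids, [1.15, 0.85]))  # a cup-like object -> "cup"
```

Swapping in different training examples changes what the classifier recognises without touching the code, which is exactly the flexibility hand-coded rules lack.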
Microsoft Research employs 1,000 scientists across the globe, with labs in the USA, UK and India among other locations.
Blake believes that devices such as the iPhone and Kinect made people realise they could "see the potential to interact with a machine in a completely different way".
"Even if Minority Report had never been made I think it would still be clear to people that this is a whole new class of capability.
NEW INTERACTION: Blake and his team worked on the development of Kinect
"Nobody quite knows what creativity will come off the back of this. You already see some of that creativity in the gaming arena because what Microsoft has built on the Xbox is an enabling layer of, if you like, an agnostic technology that doesn't make any kind of strong steer about what games you should build.
"We didn't know when we produced that layer what kind of technology would come out. We went to great pains to make it agnostic to enable the full power of the games industry and they've come out with all kinds of stuff we didn't anticipate."
Gestures in the home
Blake says the same gesture technology used in Kinect could have other uses within the home – but what about inputting large amounts of information?
"That's not how I think of this revolution. You can input a lot of information with a keyboard, of course, and also with speech. I think in a new way it's not so much about a high level of information but about a light level of interaction – you're getting a lot of leverage from the intelligence inside the machine.
"I'd actually like my DVD player to work in a NUI kind of way. The remote control doesn't work for me; I can't see it very well so I'd love it if I could just say 'console wake up' or something and a menu appears on the screen or something else I could gesture [at].
"In fact we can do a lot better than making a remote control appear on the screen. What would be much more fun would be to make intuitive signs such as one you'd use to stop traffic to stop the video and double finger pointing one way to go fast forward and things like that. What we don't know is how reliably we can make those gestures interpretable."
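The interaction Blake imagines can be sketched as a simple dispatch layer. Everything here is hypothetical: the gesture labels assume some recogniser (not shown) has already interpreted the user's hands, and the command names are invented for illustration. The hard problem he flags, reliably producing those labels in the first place, is precisely what the sketch leaves out.

```python
# A hypothetical mapping from recognised gestures to video-player commands,
# in the spirit of Blake's examples: a traffic-stop palm pauses playback,
# a two-finger point scrubs forwards or backwards.

COMMANDS = {
    "palm_out": "pause",            # the sign you'd use to stop traffic
    "two_finger_right": "fast_forward",
    "two_finger_left": "rewind",
}

def handle_gesture(label):
    """Return the player command for a recognised gesture, or None."""
    return COMMANDS.get(label)

print(handle_gesture("palm_out"))  # -> pause
```

The table makes the vocabulary of gestures easy to extend or remap, which matters if, as Blake suggests, nobody yet knows which gestures users will find intuitive.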