AMD's Richard Huddy also thinks that Natal "and its derivatives afterwards" will go far beyond facial recognition and motion recognition. By 2020 you will be able to control things very finely indeed, even at a distance and, he claims, "you will be able to use finger configurations to indicate fine-grain detail, in very much the same way that you can on a keyboard at the moment… I really don't think there is any reason to think that you shouldn't be able to type onto absolutely nothing."
The AMD man also likes to envisage sports games where players can give each other secret signals – very much as you would in a game of rugby or American football – "where you all huddle down and give each other a secret signal, holding up three fingers behind your back, or whatever… I'd like to see that kind of thing doable in a videogame, so that if I am part of a team game then I can just signal in the same way that I do in the real game."
Frown and the game helps you
In addition to that, he hopes to see more games that understand, purely from your facial expression, that you are struggling with a certain puzzle or point in the action.
"There is a recent Indiana Jones game that I have struggled with for hours, trying to crack one puzzle," he candidly admits. "And I just cannot crack it! It's embarrassing to admit, because it is a LEGO game, but it would be fantastic if the computer could just see the frown on my face and realise that I just cannot move beyond this point in the game.
PUZZLING: LEGO Indiana Jones - a surprisingly difficult game!
"So expression recognition will certainly be a big thing in 2020 – where the computer recognises different levels of frustration or elation and pushes you to do the right things to enjoy the game better."
"Very much in the same way that you and I can tell things about a person that we are communicating with, through facial expression, gaming systems ought to soon be able to do the same thing," thinks Huddy.
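The kind of frustration-aware game Huddy imagines could, in its simplest form, be a feedback loop between an expression classifier and the game's difficulty. Here is a minimal illustrative sketch of that loop, assuming a hypothetical classifier that emits labels such as "frustrated" and "elated"; the labels, window size, and thresholds are all invented for illustration, not drawn from any real system.

```python
# Toy sketch of expression-driven difficulty adjustment, as Huddy describes.
# A real system would feed camera frames to a vision model; here we assume a
# hypothetical classifier has already produced a history of expression labels.

def adjust_difficulty(expression_history, current_difficulty):
    """Lower difficulty after sustained frustration, raise it after elation."""
    recent = expression_history[-5:]  # look only at the last few readings
    if recent.count("frustrated") >= 3 and current_difficulty > 1:
        return current_difficulty - 1  # ease the puzzle, or offer a hint
    if recent.count("elated") >= 3:
        return current_difficulty + 1  # player is cruising; push a bit harder
    return current_difficulty

history = ["neutral", "frustrated", "frustrated", "frustrated", "neutral"]
print(adjust_difficulty(history, 3))  # sustained frustration detected -> 2
```

Smoothing over a window of readings, rather than reacting to a single frame, is what keeps a momentary grimace from instantly dumbing the game down.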
In addition to motion control, we will also see massive steps forward in speech recognition and synthesis – technologies that are already improving rapidly.
"I expect us to be having conversations with incredibly realistic NPC voices (with no requirement for massive amounts of voice acting!) and intelligent understanding of what the player is saying," thinks games developer Simon Barratt, MD of Four Door Lemon.
Barratt also reminds us that motion control "is no longer something that occurs in just the living room; personal motion control devices which we carry permanently on our person will soon interact with all the devices we want."
Ultimately, the goal of any interface is to remove the barriers between human desire and controllable object, and some developers are already thinking in terms that go well beyond our current conception of motion control and facial recognition.
One such developer is Daniel Boutros, Creative Director at Adept Games, who thinks that the future holds not merely better, more responsive motion control but also mind control, with "[mind controlled] devices becoming easier to make, and the cost of goods to build such devices (currently only at a 'binary' on/off stage of functionality at the consumer level) coming down rapidly."
He notes that a number of mind-controlled gaming patents already exist, "making it hard for innovators to go in full throttle without fear for more sophisticated devices in the more immediate term." That means we are probably at least another generation away from being able to control our games meaningfully with brain power alone.
"Depth of sophistication is tougher to build, at least at the consumer level of affordable manufacture, though it has existed within the military for some time," adds Boutros.