They use gestures to get more information about any planes they're worried about and wear a headset so they can speak to the system to allocate those aircraft to a specific controller - who might get the information on an iPad, drag the planes around on screen and send the new routes back to the big board.
Instead of just one interface, it's a multimodal system. You could gesture, talk or touch the screen for different tasks. You could even have different gestures for your left and right hands. And the technology isn't the real problem. The difficulty is in the execution, with issues such as working out how to take turns at talking, and perfecting the ergonomics so users don't get backache.
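The turn-taking problem can be made concrete with a small sketch. This is purely illustrative Python, with invented channel names and an assumed half-second turn window, not how any real system does it: once one input channel is active, events from other channels are dropped until its turn expires.

```python
# Hypothetical sketch of turn-taking in a multimodal interface.
# The channel names and TURN_WINDOW value are assumptions for illustration.

TURN_WINDOW = 0.5  # seconds a channel "holds the floor" after its last event

class MultimodalDispatcher:
    def __init__(self):
        self.active_channel = None
        self.turn_started = 0.0
        self.log = []           # accepted (channel, payload) events

    def handle(self, channel, payload, now):
        """Accept an event unless another channel still holds the floor."""
        if (self.active_channel is not None
                and channel != self.active_channel
                and now - self.turn_started < TURN_WINDOW):
            return False        # dropped: wait for the current turn to end
        self.active_channel = channel
        self.turn_started = now
        self.log.append((channel, payload))
        return True
```

In this sketch the same channel can extend its own turn indefinitely; a richer arbiter might prioritise voice over gesture, or merge complementary events instead of dropping them.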
Mixing your methods
Gartner analyst Angela McIntyre says that being able to mix different methods of control when they make sense is important to the success of new interfaces: "Being able to do one thing with your hands, another with your voice and a third with touch. Well, it's the way we normally interact with people and things in our life. You could be typing in a document and say the name of a song you want to hear and have it start playing in the background as you're working."
Wild suggests voice recognition systems could borrow a trick from the movie 2001 and use a camera to detect when you are talking to them: "When I want to talk to someone in a crowded room, I look at them. You could put images on the wall of the room that hide the camera and you hold your gaze on an image for a couple of seconds so the computer knows you're addressing it."
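Wild's hold-your-gaze idea is essentially dwell-time activation, which is easy to sketch. The sample format, dwell time and pixel tolerance below are all assumptions, not details from any real eye tracker:

```python
# Hypothetical sketch of dwell-based activation: the system only starts
# listening once the gaze has rested on the camera's marker image for a
# couple of seconds. Thresholds and data format are assumed.

DWELL_SECONDS = 2.0   # how long the gaze must rest on the marker (assumed)
RADIUS = 50           # pixel tolerance around the marker (assumed)

def is_addressing(samples, marker, dwell=DWELL_SECONDS, radius=RADIUS):
    """samples: list of (x, y, timestamp) gaze points, oldest first.
    Return True once the gaze has stayed within `radius` pixels of
    `marker` continuously for `dwell` seconds."""
    if not samples:
        return False
    end = samples[-1][2]
    for x, y, t in reversed(samples):
        on_marker = (x - marker[0]) ** 2 + (y - marker[1]) ** 2 <= radius ** 2
        if not on_marker:
            # Gaze wandered off; only a dwell completed before that counts.
            return end - t >= dwell
        if end - t >= dwell:
            return True               # held the marker for the full dwell
    return False                      # not enough history yet
```

A production system would also need to smooth out blinks and tracker noise rather than treating every stray sample as the gaze wandering off.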
At CES this year we were impressed by the way Tobii's gaze tracking system uses what you're looking at on screen to scroll the right window or zoom the right part of the map, so maybe the combination of vision and sound is what we need.
Feel the feedback
But in the real world, points out Chris Ullrich of Immersion (the company that designs the haptic feedback used in most phones), "There's a deep-seated human desire to get some kind of physical component to interactions. If you take that away you lose your confidence in how things work. It's not conscious but it makes things feel unsatisfactory. As you increase the physical reality of an experience you increase the pleasure and satisfaction you get from it."
So painting on a tablet screen is fun but it would feel more realistic if the paint that hadn't dried yet still felt 'wet' on screen. The same haptic feedback that buzzes when you type on your phone (or makes you feel that a marble rolling across the screen has fallen down a hole in a game) could do that.
In the longer term, an array of ultrasonic projectors or even a focused pulse of air against your hand could give you that feedback when you're making gestures without a touchscreen. More practical is giving people something to wear around their wrist. It might even be the smartwatch or a fitness tracking device you're already wearing.
The Fitbit Flex uses haptics to tell you when you've taken enough steps to reach your daily goal. Why couldn't it also give you some feedback when your hand is in the right place to grab an icon in a gesture display? In three to five years, Ullrich thinks it will.
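That kind of cue could be as simple as an edge trigger on a hit test: buzz once when the tracked hand crosses into the icon's grab zone, rather than continuously while it stays there. A hypothetical sketch, with coordinates, radius and the buzzer hook all invented for illustration:

```python
# Hypothetical sketch: fire a wrist-worn haptic pulse once when a
# tracked hand enters an on-screen icon's "grab zone". Positions are
# assumed to be 2D screen coordinates from some hand tracker.

class GrabCue:
    def __init__(self, icon_pos, radius=40):
        self.icon_pos = icon_pos
        self.radius = radius
        self.inside = False

    def update(self, hand_pos):
        """Return True exactly once each time the hand enters the zone."""
        dx = hand_pos[0] - self.icon_pos[0]
        dy = hand_pos[1] - self.icon_pos[1]
        now_inside = dx * dx + dy * dy <= self.radius ** 2
        fire = now_inside and not self.inside
        self.inside = now_inside
        return fire  # caller would trigger the wristband's buzzer here
```

The edge trigger matters: a band that vibrated on every frame the hand hovered over the icon would quickly become irritating rather than helpful.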
Make everything a button
Haptic startup Redux Labs is taking a different approach, using a combination of transducers: some that turn screens into speakers, and others (piezoelectric or electromagnetic, depending on the size of the device) that propagate microscopic "bending waves" to deliver the sensation in exactly the right place - so it feels like it's under your finger.
They will be able to make a touch button on a phone or a microwave or a car dashboard feel like a physical button you're pressing or let you feel a scrollbar or scroll wheel you're dragging your finger over. Or you could get a sensation when you slide your finger over a key so you can get your hand in the right place, but not type anything until you press down firmly - like a real button.
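That two-stage key can be sketched with two pressure thresholds: a low one for the locating sensation and a higher one for the actual keystroke. The threshold values and the normalised pressure reading below are assumptions, not Redux Labs' design:

```python
# Hypothetical sketch of a two-stage key: a light touch produces a
# locating sensation, and only a firm press commits the character.
# Pressure is an assumed normalised 0..1 reading from a force sensor.

TOUCH_THRESHOLD = 0.15  # light contact: give haptic guidance (assumed)
PRESS_THRESHOLD = 0.60  # firm press: commit the keystroke (assumed)

def key_event(pressure, was_pressed):
    """Map one pressure sample to ('idle'|'guide'|'type', pressed_state)."""
    if pressure >= PRESS_THRESHOLD:
        # Only type on the transition into a firm press, like a real
        # button, so holding the key down doesn't repeat the character.
        return ("type" if not was_pressed else "guide", True)
    if pressure >= TOUCH_THRESHOLD:
        return ("guide", False)   # finger resting on the key: buzz gently
    return ("idle", False)
```

Triggering only on the transition into a firm press is what gives the key its button-like feel: resting a finger on it is safe, just as it is on a physical keyboard.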