If haptics promise to make the way we control computers more natural and subtle, it is advances in AI and natural language processing that will bring the most impressive informational features.
“We are now working on automotive assistants that learn from the previous behaviour of the driver to direct them to their favourite cuisine when asked ‘find me a takeaway on my way home’,” says Nils Lenke, senior director of corporate research at Nuance Communications.
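The behaviour Lenke describes can be pictured as a simple preference model built from past orders. The sketch below is purely illustrative; the data, names and ranking method are invented here, not Nuance's actual approach.

```python
from collections import Counter

# Invented example data: the driver's past takeaway orders and
# the takeaways currently on the route home.
order_history = ["thai", "pizza", "thai", "sushi", "thai"]
nearby_takeaways = {"Bangkok Kitchen": "thai", "Slice House": "pizza"}

def favourite_cuisines(history):
    """Rank cuisines by how often the driver has ordered them."""
    return [cuisine for cuisine, _ in Counter(history).most_common()]

def recommend(history, takeaways):
    """Suggest the nearby takeaway matching the driver's top-ranked cuisine."""
    for cuisine in favourite_cuisines(history):
        for name, kind in takeaways.items():
            if kind == cuisine:
                return name
    return None

print(recommend(order_history, nearby_takeaways))  # Bangkok Kitchen
```

A real assistant would draw on far richer signals (time of day, location, ratings), but the core idea is the same: past behaviour ranks the options before the driver even finishes the question.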
Armed with some natural language processing they may be, but Alexa et al don't know much – and they certainly don't know you very well, which makes unscripted two-way conversations impossible. But this will eventually change.
"If we really want to communicate with bots in a more meaningful way they have to become smarter, more proactive and learn you as a user, your preferences, your behaviour, and start anticipating and suggesting things," says Unit4's Staven.
"There will be a lot of forgiveness in the beginning if the bot doesn’t always get it right, as long as it improves and learns from your and its own behaviour."
Empathy and meaningful relationships
The next stage is bots that can understand the nuances of human behaviour, such as humour, wit and sarcasm.
“Coupled with AI being given the chance to prove itself to a user, it will mean people beginning to trust AI and bots more,” says Matty Mariansky, co-founder of Doodle, which developed an AI scheduling assistant. Mariansky has noticed people beginning to treat bots as if they were people.
“We recently taught our bot to recognise ‘shut up’ as a command meaning ‘stop these reminders’, but when we told one user to say this to the bot she replied that she felt bad telling a chatbot to shut up when it isn’t his/her fault,” he says.
Empathy for software may seem strange at first, but given the technology, why wouldn’t a designer include this powerful human emotion as part of a new product’s appeal?
While Siri and Alexa are famously faceless, a fleet of new Japanese ‘homebots’ such as Lynx and Yumi put the face first. Essentially these are just user interfaces for a cloud-powered digital assistant that occasionally shows a cute face or expression on a touchscreen while speakers play childish voices, chirps or digital cooing.
Most of them also have cameras installed in their ‘heads’, which use face recognition technology so they can customise services to specific people.
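Once the camera has identified who is in front of the device, personalisation is essentially a lookup from the recognised person to their stored preferences. This sketch assumes the recognition step has already produced a user ID; the profile data and function names are hypothetical.

```python
# Invented per-user profiles keyed by whatever ID the face-recognition
# stage returns; none of this reflects a specific homebot's design.
USER_PROFILES = {
    "akira": {"greeting": "Okaeri, Akira!", "news_topics": ["sport"]},
    "mei": {"greeting": "Hi Mei!", "news_topics": ["music", "weather"]},
}

def personalise(recognised_user_id):
    """Return the services customised for the recognised person,
    falling back to a generic profile for strangers."""
    default = {"greeting": "Hello!", "news_topics": ["headlines"]}
    return USER_PROFILES.get(recognised_user_id, default)

print(personalise("mei")["greeting"])      # Hi Mei!
print(personalise("guest")["greeting"])    # Hello!
```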
Giving a bot its own face and human-like personality is all the rage, but is that a dead-end for the zero user interface?
“Maybe the younger generation would like something with more of a personal touch – if you can configure it yourself – but we need to think beyond Japanese comic-like avatars,” says Staven. “Because that is almost embarrassing.”
Most bots, though, will be used by older people and in the workplace, and a professional, business-like bot does not chirp like a happy cat.
What the zero UI will end up like is anyone's guess, but perhaps the most important feature is that it joins the dots between devices to become a kind of central nervous system.
“Assistants, bots, things, cars and systems will all be connected to the internet, but how will they all cooperate to help their human users?” asks Lenke of Nuance Communications.
Interoperability between many systems is where the future of the user interface lies, because there’s nothing worse than an artificially intelligent argument.