We lie to robots to spare their nonexistent feelings, research finds


Having robots in our lives is an inevitability. We already have artificially intelligent voice assistants on our phones like Cortana, Siri and Google Now. But how will we interact with robots when they look and act like us?

Researchers at University College London and the University of Bristol experimented with humanoid robots to find out how humans instinctively interact with them. Three robots were tasked with making an omelette, and two were programmed to drop an egg:

  • Robot A was non-communicative, but the most efficient and didn't make mistakes.
  • Robot B was also non-communicative, but dropped the egg and apologized for it with a neutral face.
  • Robot C was the most communicative and expressive, showing a sad facial expression when dropping the egg.

Participants responded well to the robots' apologies, and were especially receptive to Robot C's sad facial expression, which reassured them that the robot "knew" it had made a mistake.

A mirror of ourselves

Each robot then asked participants whether they would give it the job of kitchen assistant, but participants could only answer "yes" or "no." This caused quite a bit of anxiety, since they couldn't qualify their answers.

Some participants were reluctant to answer and looked uncomfortable. One assumed Robot B or C would look sad if he answered "no," though the robots weren't programmed to react to either answer. Another participant called the interaction "emotional blackmail," and one even lied to a robot to spare its feelings.

"We would suggest that, having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress," said Adriana Hamacher, the creator of the study.

Most interestingly, the study found that humans preferred a communicative, expressive but error-prone robot to one that completed the task 50% faster. The finding suggests that humans value trust, and are more accepting of a robot that demonstrates emotions like regret and enthusiasm.

Overall, the study showed that humans tend to anthropomorphize robots when they look and react like us.

You can read the research paper in its entirety here [PDF].

Lewis Leong
Lewis Leong is a freelance writer for TechRadar. He has an unhealthy obsession with headphones and can identify cars simply by listening to their exhaust notes.