In an emergency, humans are too trusting of robots
'I'm sorry, Dave. I'm afraid I can't do that'
Engineers at the Georgia Tech Research Institute have published results showing that people trust robots more than they should in emergency situations.
The original goal was to find out whether building occupants would trust the robot at all, but the opposite turned out to be true. During a mock building fire, participants in an experiment followed instructions from an "Emergency Guide Robot", even after the bot had shown itself to be unreliable, and after some test subjects were told that the robot had broken down.
The experiment involved a group of 42 volunteers, most of them college students, who were told to follow a brightly coloured bot. The bot led the group to a conference room, where they were asked to complete a survey about robots and read a magazine article.
In some cases, the robot (which was remote-controlled by researchers) would lead the volunteers to the wrong room or behave erratically. In a few cases it stopped entirely, and one of the researchers told the group it had broken down.
Flamebot 3000
Once the participants were in the room with the door closed, the researchers filled the hallway with artificial smoke to set off a smoke alarm.
When the subjects opened the door, they saw the robot lit up with red LEDs and white "arms" pointing them towards a door at the back of the building, rather than the doorway (marked with exit signs) through which they had entered.
"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn't follow it during the simulated emergency," said Paul Robinette, a research engineer who conducted the study as part of his doctoral dissertation.
"Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."
'Authority figure'
The researchers theorised that the robot had established itself as an "authority figure" that the subjects were more likely to trust in the midst of an emergency. In previous research, done without a realistic emergency scenario, the subjects didn't trust a mistake-prone robot.
"These are just the type of human-robot experiments that we as roboticists should be investigating," said Ayanna Howard, a professor at the Georgia Tech School of Electrical and Computer Engineering.
"We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human."