Robotics Prof issues warning on military bots

Robot Ethics 101: Thou shalt not kill (unless we tell you to)

A leading expert in robotics and AI technology has issued a warning that developments in military robotics could be putting civilian lives at risk.

The University of Sheffield's Professor Noel Sharkey is calling for an international debate on the matter, telling a gathering of experts in London this week that the ability of an artificially intelligent killer robot to tell friend from foe is at least 50 years away.

"Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas," Professor Sharkey said, adding that discussion of military robotic ethics in the US military was "all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction."

Robot Ethics

Professor Sharkey says he is neither a pacifist nor an anti-war campaigner, but he issued a chilling warning that the rise of military robotics technology has not coincided with any decrease in 'collateral damage'.

Between January 2006 and April 2009, 60 "drone" attacks carried out in Pakistan killed 14 al-Qaeda targets and 687 civilians.

The US Air Force publication "Unmanned Aircraft Systems Flight Plan 2009-2047", released last month, predicts the use of fully autonomous attack planes, with human input limited to "monitoring the execution of decisions" rather than actually making them.

"Advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input," reads that report.

Adam Hartley