A lengthy conference aiming to lay down rules on what role lethal autonomous weapons systems can play in future warfare has failed to reach an agreement.
The week-long meeting in Geneva aimed to agree key definitions that would be necessary to word a protocol to restrict the use of so-called 'killer robots'. While the discussion was deeper than previous attempts to draw up regulations, there was no agreement between nations.
Activists, including the Campaign to Stop Killer Robots, are hoping for a complete ban on the practice. That view is shared by some countries, including Cuba and Ecuador.
Oh the humanity
But Britain, which has a national interest in the use of drone technology thanks to BAE Systems, is opposed to such a ban and may use its influence to block a protocol completely. The UK argues that existing humanitarian laws are sufficient to regulate the technology.
A key point of dispute is over whether machines can be programmed to observe the principles of humanitarian law - such as distinction, proportionality and precaution. The UK believes that one day they will be able to, and so research into the technology should not be banned.
Most countries agree that autonomous weapons that operate without "meaningful human control" are unwelcome, but no-one can agree on how that phrase should be defined - let alone what "autonomous" means.
While the conference did move the debate forward on how we'll be waging wars in the future, there's still a long way to go.