Automated cars and AI: why the tech industry must consider ethics

Ethics programming in autonomous cars will need to be transparent

Imagine that you're in an autonomous car on the motorway when a lorry jack-knives in front of you just as a cyclist draws up alongside. The computer inside your car now has to choose between swerving out of the way and killing the cyclist, or holding its course and risking your life. What would it do?

That would depend on the software's algorithms, originally decided upon by a computer engineer, which throws into doubt the idea that the tech industry is neutral. With AI and automation on the horizon, are ethics and philosophy about to become as important to computer engineers as noughts and ones?

Does the self-driving dilemma above have a correct answer? No, says philosopher Patrick Lin, PhD, director of the Ethics and Emerging Sciences Group at California Polytechnic State University and editor of the book Robot Ethics: The Ethical and Social Implications of Robotics – but with lawsuits bound to follow, the way engineers work will have to change.

Transparent ethics

"Make the ethics programming in an autonomous car transparent in order to set expectations with users and society, and be able to defend those programming decisions very well," is Lin's advice, but the actual morality boundaries are up for grabs.

"It wouldn't be unreasonable to put the safety of the bicyclist over that of drivers," he suggests, adding that the moral and legal principle might be that if you introduce a risk to society, such as a new kind of 'robot car', then you should be the one who bears the brunt of that risk.

On the other hand, if the autonomous car could calculate that the choice was between simply knocking the cyclist off, with only minor injuries likely, or driving off the road or into oncoming traffic, then perhaps the car should make 'least harm to humans' its priority.

Dr Kevin Curran thinks that fast-moving technology is leaving a moral void

Taking responsibility

For now, refined questions such as these are moot, because all of this assumes better sensor and computing technology than we have so far. "The best we can do today is to program the car to brake hard, or swerve toward the smaller object, or some other simple-minded action," says Lin. "This could work for a good number of cases, but for those cases where that reflex is the wrong action, car manufacturers will have some explaining to do."
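To see how little ethics is actually encoded in such a reflex, consider the rough sketch below. It is a deliberately naive illustration of the kind of rule Lin describes, not real autonomous-vehicle software; every name, threshold and number in it is invented for the sake of the example.

    # A deliberately naive sketch of the kind of hard-coded reflex Lin describes.
    # Every name, number and data structure here is a hypothetical illustration,
    # not real autonomous-vehicle software.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        name: str
        size: float      # rough frontal area, square metres
        distance: float  # metres ahead of the car

    def choose_manoeuvre(obstacles, braking_distance):
        """Pick an action using a simple, pre-programmed rule."""
        threats = [o for o in obstacles if o.distance < braking_distance]
        if not threats:
            return "brake hard"  # braking alone avoids a collision
        # Otherwise swerve toward the smallest object in the path --
        # a 'simple-minded' heuristic with no notion of harm to people.
        smallest = min(threats, key=lambda o: o.size)
        return "swerve toward " + smallest.name

    # The scenario from the top of this article: a jack-knifed lorry and a cyclist.
    scene = [Obstacle("lorry", 18.0, 25.0), Obstacle("cyclist", 0.6, 20.0)]
    print(choose_manoeuvre(scene, braking_distance=40.0))  # -> "swerve toward cyclist"

The rule is trivially easy to write and very hard to defend: in the scenario at the top of this article it steers the car at the cyclist, which is exactly why Lin argues the reasoning behind such choices must be spelled out and justified in advance.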

The point is that the tech industry will soon have to start making, and taking responsibility for, life or death decisions embedded in algorithms. "Technology is evolving at a pace and scale that we have not seen before," says Dr Kevin Curran, IEEE Technical Expert and group leader for the Ambient Intelligence Research Group at the University of Ulster. "It's leaving a void where society is struggling to keep up with the social and moral implications technology is creating."

Dirty secrets

Morality and ethics aren't new to technology. They're everywhere. Is the tracking, monitoring and data harvesting of internet users – enabled by computer engineers – at all ethical? What about spam? Should the designers of phones and tablets, and the programmers who design apps for them, feel bad about the terrible waste of packaging, the low wages paid to assemble these gadgets, or the depletion of natural resources that they undoubtedly cause?

Curran thinks that morality has been an issue in the tech industry for yonks. "The first computer ever was built to calculate the trajectories of missiles," he points out, "and planes have been flying via computer guidance for many decades."

The truth is that most engineers have dealt with moral issues at some time in their career. "We tend to instinctively believe that technology is neutral, but that humans can repurpose it for evil," he says. "When was the last time you heard someone blame Tim Berners-Lee for child pornography online, as opposed to thanking him for the World Wide Web?" Engineers who struggle with the ethical implications of what they're asked to do, says Curran, can simply move jobs.

World Wide Web creator Tim Berners-Lee can't be held responsible for how the internet is used

"The question is never whether an algorithm is neutral but whether the outcome of applying that algorithm is neutral," says John Everhard, European CTO at Pegasystems, who thinks that in the era of big data it's software that becomes the deciding factor in whether the outcome is ethical, or not.

"It is increasingly important that the organisations that genuinely care about their customers and wish to exhibit strong moral values are provided with software systems to prove they made decisions on a morally sound basis," he says. Software systems designed for banks, for instance, can only be considered strictly ethical if they're completely transparent and use an individual's personal history to calculate borrowing rates rather than blind statistics.

Jamie Carter

Jamie is a freelance tech, travel and space journalist based in the UK. He's been writing regularly for TechRadar since it launched in 2008, and also writes regularly for Forbes, The Telegraph, the South China Morning Post, Sky & Telescope and the Sky At Night magazine, as well as other Future titles including T3, Digital Camera World, All About Space and Space.com. He also edits two of his own websites, TravGear.com and WhenIsTheNextEclipse.com, which reflect his obsession with travel gear and solar eclipse travel. He is the author of A Stargazing Program For Beginners (Springer, 2015).