Automated cars and AI: reasons why the tech industry must consider ethics

There are obviously some technologies and products that are created for completely unethical ends. "Ethics, or a lack thereof, may be found in the design process and intrinsically linked to the creation," says Lin. Think gas chambers, torture devices, missiles and robotic weaponry.

"Computer viruses and malware are 'evil'," says Curran, adding another to the long list. "They have no positive uses whatsoever and are a clear example of a non-ambiguous piece of technology with nothing but evil as its payload."

Some smartphone apps are almost accidentally unethical. A prime example is Facebook's 'Year in Review' feature last year, which pushed a recap of each user's most popular photos of the year, unintentionally forcing some users to confront recent tragedies such as the death of a child.

Into the same category of badly thought-out ideas goes the recent swathe of fitness apps. "Without any conscious sexism, there's an all-too-familiar trend of health monitoring apps that fail to consider tracking a woman's menstrual cycle even though women make up half the world," says Lin, adding that humanoid robots are typically designed to resemble men. "Those might not be unethical in the sense of evil, but they're certainly challenging if you care about social impact and the complicated role of gender."

What about the internet?

But the ultimate unethical, badly thought-through modern creation? It's got to be the internet. Its core architecture makes possible cyberbullying, copyright theft, spam email, and all kinds of other activities that could be defined as unethical. "If ethics weren't an afterthought but part of the internet's design, then perhaps we could have headed off a lot of the problems we face today – not just security vulnerabilities, but also intellectual property issues, cyberbullying, privacy issues, and so on," says Lin.

[Image: a robot floor scrubber. The decisions made by automated products have to be pre-programmed.]

Automated morality: who decides?

An automated car's core ethics need to be well understood, but to be acceptable those ethics need to come from the societies they'll operate in. "The work of programming moral decision-making into a product must include input from broader society, not just in a bubble of a particular company or even Silicon Valley, which may have radically different values than the rest of the world," says Lin.

Artificial intelligence can also be dangerous; last month a woman was 'attacked' by her robot vacuum cleaner and had to call someone to free her. "It's recommended we always have a managed kill switch that has protected code built-in and secured by design, which realises the machine shouldn't continue trying to hoover-up this poor lady," says Neil Thacker, Information Security & Strategy Officer at Websense. "When building and modelling AI behaviour, it remains vital to build in the kill switch to protect against rare anomalies such as this."
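
To make Thacker's point concrete, here is a minimal, purely illustrative sketch of what a software kill switch wrapped around an autonomous device's control loop might look like. The names (SafetySupervisor, FakeVacuum) and the anomaly check are invented assumptions for this example, not any vendor's actual code or API.

```python
# Illustrative sketch only: a hypothetical "kill switch" supervisor around an
# autonomous appliance's control loop. All class and method names are invented
# for this example and do not reflect any real product's firmware.

import threading
import time


class SafetySupervisor:
    """Watches an autonomous device and halts it when an anomaly is flagged."""

    def __init__(self, device):
        self.device = device
        self._kill = threading.Event()

    def trigger_kill_switch(self, reason: str) -> None:
        # Could be invoked by a physical button, a remote command,
        # or an on-board anomaly detector.
        print(f"Kill switch triggered: {reason}")
        self._kill.set()

    def run(self) -> None:
        while not self._kill.is_set():
            status = self.device.step()        # one cycle of normal operation
            if status.get("anomaly"):          # e.g. unexpected resistance or entanglement
                self.trigger_kill_switch(status["anomaly"])
            time.sleep(0.1)
        self.device.stop_all_motors()          # fail safe: power down actuators immediately


class FakeVacuum:
    """Stand-in for a robot vacuum; real firmware would read sensors here."""

    def __init__(self):
        self.ticks = 0

    def step(self):
        self.ticks += 1
        # Simulate detecting an entanglement after a few cycles.
        return {"anomaly": "brush entangled" if self.ticks > 3 else None}

    def stop_all_motors(self):
        print("Motors stopped, device is safe.")


if __name__ == "__main__":
    SafetySupervisor(FakeVacuum()).run()
```

The design choice Thacker describes is the key point: the supervisor and its shut-off path sit outside the behaviour being supervised, so a rare anomaly in the AI's normal operation cannot disable the mechanism that stops it.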

It's a similar story with driverless cars. "Google initially stated they wanted their driverless cars to have no steering wheel or foot controls because they were driverless and accurate, but in the worst-case scenario, where the AI fails or cannot make a decision based on the data, safety remains paramount," says Thacker. "People are not ready to change or trust AI in the next ten years, but we will see future generations challenge this rule."

Incentivising ethics

The incentive to be 'moral' is the threat of litigation. "The motivation to build rigorous and secure systems should be there because it is quite possible that all involved in its design could be held liable if a defect caused or even contributed to a collision," says Curran, who thinks that as computer programmers come to play a bigger part than drivers in how vehicles move, manufacturers will build the cost of litigation and insurance into their vehicles.

Lin thinks that the social impact of any product should be part of the product's launch plan. "Even if the creators don't care about responsibility and ethics, they should care about how these issues might harm their brand, product adoption, and financial bottom line, for instance, if legal troubles arise," he says. "This is particularly true with artificial intelligence."

With automation, artificial intelligence and the Internet of Things on the horizon, the ethics and intentions of the creators and programmers of such devices are becoming more important. Does the tech industry need a code of ethics?

Actually, there already is one – the IEEE Code of Ethics – though what will force companies to take responsibility for pre-programmed ethics and morality is the threat of litigation: people aren't going to buy automated cars, robots or even control apps if they themselves are liable for the machines' decisions. No one is going to want an automated car without knowing what the risks are.

Jamie Carter
