Computer algorithms can be as biased as humans

Job applications are often sorted and ranked by software algorithms, but the resulting lists can be just as biased as if a human had sorted them. Computer scientists think they might have the answer.

A team led by Suresh Venkatasubramanian at the University of Utah has developed a test that determines whether an algorithm is discriminatory. It's based on a legal concept called "disparate impact", which considers a policy biased if it has an adverse effect on any group based on race, religion, gender, sexual orientation, or other protected status.
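The article doesn't spell out how disparate impact is measured, but a common benchmark in US employment law is the "four-fifths rule": if the selection rate for a protected group is less than 80 per cent of the rate for the most-favoured group, that is treated as evidence of disparate impact. A minimal sketch of that calculation (the function name and toy data are illustrative, not from the research):

```python
def disparate_impact_ratio(selected, protected):
    """Ratio of selection rates: protected group vs. everyone else.

    selected  -- list of 0/1 outcomes (1 = hired or shortlisted)
    protected -- list of 0/1 flags (1 = member of the protected group)
    """
    prot_rate = sum(s for s, p in zip(selected, protected) if p) / max(1, sum(protected))
    other_rate = sum(s for s, p in zip(selected, protected) if not p) / max(1, len(protected) - sum(protected))
    return prot_rate / other_rate if other_rate else float("inf")

# Under the four-fifths rule, a ratio below 0.8 suggests disparate impact.
ratio = disparate_impact_ratio([1, 0, 0, 1, 1, 0], [1, 1, 0, 0, 0, 1])
print(f"Disparate impact ratio: {ratio:.2f}")
```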

Venkatasubramanian's test tries to predict a person's race or gender from the data being analysed, even when that information has been hidden. If it can do so successfully, there's a potential bias problem.
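One way to read that test, as a sketch rather than the authors' exact procedure: hide the protected attribute, train a classifier to predict it from the remaining fields, and treat high predictive accuracy as a warning that the attribute is still encoded in the other data. Using scikit-learn and random toy data purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def protected_attribute_leakage(features, protected, threshold=0.6):
    """Estimate how well a hidden protected attribute (e.g. gender)
    can be recovered from the other columns of an applications dataset.

    features  -- 2D array of the non-protected columns
    protected -- 1D array of the hidden attribute (0/1)
    Returns (mean cross-validated accuracy, flag); the flag is True
    when accuracy clears the illustrative threshold.
    """
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, features, protected, cv=5)
    accuracy = scores.mean()
    return accuracy, accuracy > threshold

# Toy example with random data; real inputs would be parsed job applications.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
acc, flagged = protected_attribute_leakage(X, y)
print(f"Recovery accuracy: {acc:.2f}, potential bias flagged: {flagged}")
```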

Proof of Concept

That problem is usually easy to fix, though: the test is used to work out which attributes are creating the discrimination, and the data is then redistributed so the bias disappears, as sketched below.
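The article doesn't describe the repair step in detail. One way such a redistribution can work, and the assumption behind this sketch, is to re-map each numeric feature so its distribution looks the same for every group, removing the feature's ability to reveal group membership while preserving each applicant's rank within their own group:

```python
import numpy as np

def repair_feature(values, groups):
    """Rank-preserving repair of one numeric column.

    Each value is replaced by the pooled value found at the same
    within-group quantile, so the repaired column has approximately
    the same distribution for every group.
    """
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    repaired = np.empty_like(values)
    for g in np.unique(groups):
        mask = groups == g
        # Quantile of each member within its own group (0..1)
        ranks = values[mask].argsort().argsort() / max(1, mask.sum() - 1)
        # Map that quantile onto the pooled distribution
        repaired[mask] = np.quantile(values, ranks)
    return repaired

# Toy example: test scores that differ systematically between two groups.
scores = [55, 60, 65, 70, 80, 85, 90, 95]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(repair_feature(scores, groups))
```

After repair, both groups in the toy example map onto the same set of scores, so the column can no longer be used to tell the groups apart, while higher scorers in each group still rank above lower scorers.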

Venkatasubramanian showed off his technique and explained how it works at the Association for Computing Machinery's 21st SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.

"The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations," said Venkatasubramanian. "It would be ambitious and wonderful if what we did directly fed into better ways of doing hiring practices. But right now it's a proof of concept."

