In a letter delivered to the United States Congress, IBM CEO Arvind Krishna has declared the company will no longer provide any form of general-purpose facial recognition software.
IBM later confirmed it will also halt all research and development associated with the controversial technology, over concerns it can be misused.
The decision, according to Krishna's letter, was motivated by the potential for facial recognition to facilitate mass surveillance, aggravate racial prejudice and result in miscarriages of justice, as well as by the worldwide protests following the death of George Floyd.
Facial recognition software
While facial recognition technology has evolved dramatically in recent years and has the potential to assist in legitimate police investigations, its application has always been contentious.
Concerns about the opportunity for mass surveillance and social scoring are compounded by the issue of AI bias, which could see individuals discriminated against based on their physical attributes and is particularly problematic in the context of law enforcement.
Methods for auditing data sets that underpin AI models (including facial recognition software) for bias remain inconsistent and unregulated, increasing the possibility the technology could serve to further disadvantage minority demographics.
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values,” wrote Krishna.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
In 2018, IBM published a diversity-optimized data set for public use, designed to minimize bias in facial recognition products. But its latest announcement suggests the firm has reevaluated the viability of bias-free facial recognition software.
Krishna is not proposing a blanket abandonment of AI, which he sees as pivotal to the future success of business, but rather reiterates earlier calls for transparency and responsible use.
“Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” he said.
Via The Verge