AI governance is now an even bigger problem, as most governance tools are “limping along”


A report has found that many AI governance tools are ineffective at measuring the fairness and explainability of AI systems due to “faulty fixes”.

Many of these tools, developed by companies such as Microsoft, Google and IBM, are used by governments to determine the fairness and accountability of AI systems.

But the report, released by the World Privacy Forum, claims many such tools are used improperly due to a lack of specific instructions on their use, and an absence of guidelines and requirements for quality assurance.

Faulty tools and a rickety framework

The report reviewed 18 governance tools designed to reduce the risks of AI bias and inaccuracy, and found that the lack of regulation, frameworks and baseline requirements means that many of the tools are being used incorrectly.

“Most of the AI governance tools that are in use today are kind of limping along,” noted Pam Dixon, founder and executive director of the World Privacy Forum. “One big problem is that there’s no established requirements for quality assurance or assessment. There are no instructions as to the context that it is supposed to be used for, or even a conflict of interest notice.”

A number of scholars interviewed for the report criticized AI governance tools that “mention, recommend, or incorporate off-label uses of potentially faulty or ill-suited tools,” which could compromise AI fairness and explainability. The report also noted that several governance tools feature disparate impact benchmarks that only apply in specific contexts.

One such example is the Four-Fifths rule, which is widely recognized in the US employment field as a measure of the fairness of recruitment selection processes. However, a 2019 study found that the rule had been coded into a number of tools used to measure AI fairness in contexts unrelated to employment, without regard for the rule's validity in, or potential impact on, those systems.
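To make that concern concrete, here is a minimal sketch of how the Four-Fifths rule is typically computed when coded into a fairness toolkit. The function names and the hiring figures are hypothetical, for illustration only; the substance is that the 0.8 threshold encodes a US employment-law convention, not a universal fairness criterion.

```python
# Minimal sketch of the Four-Fifths (80%) rule as it is commonly coded
# into fairness toolkits. All names and figures below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag whether each group's selection rate is at least four-fifths
    (0.8) of the highest group's rate -- the conventional threshold for
    evidence of adverse impact in US employment selection."""
    rates = {g: selection_rate(sel, n) for g, (sel, n) in groups.items()}
    highest = max(rates.values())
    return {g: rate / highest >= 0.8 for g, rate in rates.items()}

# Hypothetical hiring data: (number selected, number of applicants)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# {'group_a': True, 'group_b': False}: group_b's rate (0.30) is only
# 62.5% of group_a's (0.48), below the 0.8 threshold.
```

Transplanted into a non-employment setting, the same ratio test still runs, but the 0.8 cutoff loses its legal and empirical grounding, which is precisely the kind of off-label use the report warns about.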

The report found that “standards and guidance for quality assessment and assurance of AI governance tools do not appear to be consistent across the AI ecosystem.” This lack of universal quality assurance means there are significant disparities in how AI governance tools are regulated, and the report stated that more needs to be done “to build an evaluative AI governance tools environment that facilitates validation, transparency, and other measurements.”

The report concluded: “Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems.”

Via VentureBeat


Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cybercriminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.