IBM is hoping to make AI more accountable with a new bias-detecting service.
The computing giant is launching a new platform aimed at shining a light on exactly how AI tools such as its iconic Watson platform make the decisions they do.
The cloud-based Trust and Transparency capabilities aim to explain how AI makes its decisions, in an effort to reduce the number of errors and false decisions made as the technology is adopted by more and more organisations.
This includes potential bias in policing services using AI, or insurance companies looking for more information about a crime scene.
It will also give customers a real-time overview of how the algorithms they use make decisions and which factors influence them, as well as tracking accuracy and performance over time.
“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies,” said David Kenny, SVP of cognitive solutions at IBM.
“It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”
The open-source platform will also help ensure businesses looking to extend their AI operations are compliant with regulations such as GDPR, with the likes of TensorFlow, SparkML, AWS SageMaker, and AzureML able to benefit alongside Watson.
The launch comes as IBM also reveals the findings of a major study into the perception of AI in the workplace.
The study, which surveyed 5,000 C-level executives from across the globe, found that 82 percent of enterprises are now considering or moving ahead with AI adoption.
IT, security and customer service were all named as some of the leading use cases of AI by CEOs, with digital-friendly industries such as financial services set to be the speediest adopters of the new technology.
However, there remain a number of barriers to adopting AI, with 60 percent of respondents saying they feared liability issues, and 63 percent believing they lack the skills to harness AI's potential.