Checks and balances in AI deployment

(Image credit: Geralt / Pixabay)

The Roman poet Juvenal famously posed the question “who will guard the guards themselves?”, with regard to the checks and balances required to control the actions of those in power. Two thousand years later, the same could be asked of artificial intelligence (AI). 

As machines begin to make decisions on our behalf, sometimes without human oversight, who should keep them in check? Will humans do it, or are we engineering the machines to do it well enough themselves? Indeed, earlier this year, the UK Government launched the Centre for Data Ethics and Innovation, an advisory body charged with building a common understanding of how to ensure the safe, ethical and innovative deployment of AI.

We’re a long way from the sentient AI of The Terminator’s Skynet, however. Beyond the adoption of virtual assistants such as Amazon Echo or Google Home, AI’s influence on the human condition has not yet been particularly significant. That said, its importance shouldn’t be downplayed. 

From analysing very large data sets to find patterns in areas such as healthcare, to making rapid trades for financial institutions and media buys for brands, AI is delivering value across a range of industries. And as businesses look to find meaning within an ever-growing volume and variety of data – much of it generated by machines – AI will only become more valuable.

Managing a world of data

The introduction of millions and, eventually, trillions of connected ‘edge’ devices making up the Internet of Things, virtually mapping every detail of the physical world in real time, will generate an amount of data eclipsing anything produced today. These devices, and the yottabytes of data they will produce – each yottabyte equivalent to a trillion terabytes – will enable innovative new technologies such as autonomous vehicles, each of which will itself generate gigabytes of data every day.
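For scale, that parenthetical conversion checks out under decimal (SI) prefixes:

```latex
% 1 TB = 10^{12} bytes and 1 YB = 10^{24} bytes, so:
1~\mathrm{YB} = 10^{24}~\mathrm{bytes}
             = 10^{12} \times \bigl(10^{12}~\mathrm{bytes}\bigr)
             = 10^{12}~\mathrm{TB} \quad \text{(one trillion terabytes)}
```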

With traditional computing unable to cope with such huge volumes of data, AI will be key to managing this new environment. This includes deciding what is and isn’t relevant, what’s alarming, and what should be ignored or deleted.
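As a loose sketch of that kind of triage – with hypothetical field names and thresholds, and rule-based where a deployed system would more likely use a trained model – the decision categories might look like this:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float          # e.g. a temperature reading (hypothetical)
    expected_max: float   # normal operating ceiling for this sensor

def triage(reading: Reading) -> str:
    """Classify a single edge-device reading.

    Returns one of:
      'alarm'    - forward immediately for attention
      'relevant' - keep and upload for later analysis
      'discard'  - ignore or delete to save bandwidth and storage
    """
    if reading.value > reading.expected_max * 1.5:
        return "alarm"       # far outside normal range: flag it now
    if reading.value > reading.expected_max:
        return "relevant"    # mildly abnormal: worth analysing later
    return "discard"         # routine reading: not worth transmitting

# Usage: a routine reading is discarded, an extreme one raises an alarm
print(triage(Reading("turbine-7", value=62.0, expected_max=80.0)))   # discard
print(triage(Reading("turbine-7", value=130.0, expected_max=80.0)))  # alarm
```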

Abiding by Asimov’s Three Laws

The time will soon come when machines become trusted advisors. By truly understanding us, and the wealth of data that surrounds us, AI will be able to help manage and even improve our lifestyles, replacing our poor decisions with more informed ones and making us smarter, healthier and more productive, much as science fiction pioneer Isaac Asimov foresaw.

Asimov believed robots would be integrated into society, constrained and controlled by the Three Laws of Robotics, which he introduced in his 1942 short story, Runaround, and which would make them largely benign – serving and protecting their human masters. 

The majority of today’s robots are likely to be software, relying on complex algorithms, machine learning and AI – a form that Asimov simply couldn’t have imagined. Could his Three Laws still apply, then, more than 75 years after they were devised? And what if they were overruled, either deliberately and maliciously, or as a result of over-reaching ambition?

It’s important that we guard against this possibility, and ensure the checks and balances are in place to prevent it occurring. Much of this, however, comes down to how we choose to use AI in the first place. 

Zachary Jarvinen, Product Marketing Lead for AI and analytics at OpenText


Zachary leads Product Marketing for Artificial Intelligence at OpenText. Prior to this, he ran marketing for a data analytics company that reached #87 on the Inc. 5000, and was part of the Obama Digital Team in 2008. He holds an MBA/MSc from UCLA and the London School of Economics, and is trilingual, with 15 years of experience growing digital products across the globe for both start-ups and Fortune 500 companies.