AI systems are a threat – but not the way Elon Musk claims

Artificial intelligence. Image credit: geralt on Pixabay

In 2017, Elon Musk claimed that AI is one of the greatest threats to the human race. “AI is the rare case where I think we need to be proactive in regulation instead of reactive,” he told students from the Massachusetts Institute of Technology (MIT). “Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilization.”

However, according to Tom Siebel, founder and CEO of C3, there are two much more pressing concerns: privacy and the vulnerability of the Internet of Things.

Siebel is one of the leading names in AI. He began his career as a computer scientist at Oracle and founded his own company, Siebel Systems, in 1993. By 2000 the company had 8,000 employees in 29 countries and $2bn in revenue. Oracle acquired Siebel Systems in 2006, and Siebel founded C3 in 2009.

C3 has spent about 10 years and half a billion dollars building a platform for an AI suite, and clients including the US Air Force, Shell and John Deere use it to develop industrial-scale applications. Its systems help reduce greenhouse gas emissions, predict hardware failures in offshore oil rigs, fighter jets and tractors, and help banks prevent money laundering.

Elon Musk panel discussion

Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell discuss AI and risk at the Effective Altruism Global conference in August 2015. Image credit: Robbie Shade under CC BY 2.0 license

Siebel says that instead of attempting the impossible task of regulating AI algorithms (a proposal he dismisses as a publicity stunt from Musk), we should be focusing on the far more immediate threat AI systems pose to our privacy.

AI for social good

“Let's think about using AI for precision medicine, which will be done at massive scale,” he says. “We might aggregate the healthcare records for the population of the UK, or the population of the United States – pharmacology, radiology, health history, blood count history, all of this data. That’s a big data set. And then in the future, we’ll also have the genome sequence for all these people.”

These systems could be used to predict the onset of disease and to provide the best possible treatment for an individual.

Tom Siebel

Tom Siebel, CEO and founder of C3. Image credit: Ethan Pines, the Forbes Collection

“We can use AI to assist physicians making diagnoses,” says Siebel, “for example, reading radiographs or CAT scans and advising them. But we're looking at all the data – blood chemistry, whatever – and advising on which diseases [they] should be looking for.

We’re seeing that when it comes to personally identifiable data, corporations are not regulating themselves

Tom Siebel, C3

“And then, when one is selected, we’ll again have human-specific or genome-specific treatment protocols. So we’ll be able to predict, with very high levels of precision, adverse drug reactions and who is predisposed towards addiction (to opiates, for example). And efficacy – what is the optimal pharmaceutical product, or combination of pharmaceutical products, to treat this disease?

“And so, for example, if we could, for a population the size of the United States or the UK, identify with high levels of precision who is predisposed to come down with diabetes in the next five years, we can treat those people clinically now rather than treat them in the emergency room in five years. And the social and economic implications of that are staggering.”
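Siebel doesn't describe how such a screening model would be built, but the shape of the problem is familiar: score every person in a cohort for five-year risk, then rank them for early intervention. Below is a minimal sketch of that idea in Python; the feature set, the synthetic cohort and the gradient-boosted model are all illustrative assumptions, not C3's actual platform.

```python
# Illustrative sketch of a population-scale disease-risk model.
# All features and data here are synthetic assumptions, not C3's system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000  # small stand-in for a national-scale cohort

# Hypothetical features pulled from health records.
age = rng.normal(50, 15, n)
bmi = rng.normal(27, 5, n)
hba1c = rng.normal(5.5, 0.8, n)          # blood-sugar marker (%)
family_history = rng.integers(0, 2, n)   # 0/1 flag
X = np.column_stack([age, bmi, hba1c, family_history])

# Synthetic label: "develops diabetes within five years".
risk = (0.04 * (age - 50) + 0.2 * (bmi - 27)
        + 1.5 * (hba1c - 5.5) + family_history)
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank the held-out cohort by predicted five-year risk, so the
# highest-risk patients can be offered clinical intervention now.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))
print("Top-risk patient indices:", np.argsort(scores)[::-1][:5])
```

The privacy problem Siebel raises next follows directly from this design: the ranked risk list is exactly the artefact an insurer or a government would want to get hold of.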

Your privacy at risk

So far, so positive – but there’s another side of the equation. “Now we know who's going to come down with diabetes, and we know who's going to be diagnosed with a terminal illness as well,” Siebel says. “Do you want to know that? I'm not sure I do – but either the government (medical service) knows it, or the insurance aggregator knows it, and what are they going to do with those data? We’re seeing that when it comes to personally identifiable data, corporations are not regulating themselves.” Siebel cites Facebook as the most obvious example.

The idea that we’re going to have government agencies that are going to regulate AI algorithms is just crazy. When does a computer algorithm become AI? Nobody can draw that line

Tom Siebel, C3

“So how will these data be used? Will they be used for prioritizing who gets treatment? Will they be used for setting insurance rates? Who needs to know?”

As Siebel notes, in the United States people who have a pre-existing condition often find it hard (or very expensive) to secure health insurance – and with AI-supported healthcare, things could be even worse.

“Who cares about pre-existing conditions when we know what you're going to be diagnosed with? So the implications of how people deal with these kinds of data are really very troubling.”

Bringing down the grid

Then there’s the Internet of Things, which Siebel says is extremely vulnerable to attack – with potentially catastrophic consequences. 

“I think there are troubling issues associated with how fragile these systems are, like power systems and banking systems,” he says. “If you shut down the power system or the utility system of the UK or the United States, I think something like nine out of ten people in the population die. All supply chains stop.

“Electrical power is the bottom of Maslow's hierarchy of 21st-century civilization. All other systems – whether it's security, food supply, water distribution, defense, financial services – they're all dependent upon it, so if the grid doesn't work there's no milk on the shelf in the grocery store. So these are very troubling issues.”

Taking action

So what's the answer? Siebel says the EU has started to put a dent in these problems with its General Data Protection Regulation (GDPR), but together with national governments, it needs to go a lot further.

“GDPR includes the right to be forgotten, and that’s important, but I think there also needs to be reform of the terms of use where everyone clicks ‘I agree’ – those terms of use are granting great latitude to these data aggregators to use and misuse those data. I think they need to come up with standard terms of use for how they can use that data. If they use it in a different way, they should be in violation of the law; these should be criminal offences, and they should be prosecuted.”

What certainly shouldn’t happen, he argues, is the creation of a government agency to audit AI algorithms. “Elon is one of the smartest people in the information technology industry in the world,” says Siebel, “but with all due respect, a lot of his comments in the last three years do not appear to be that well-grounded.

“The idea that we’re going to have government agencies that are going to regulate AI algorithms is just crazy. When does a computer algorithm become AI? Nobody can draw that line, and if you put some government agency on it, it’s just going to be a big mess. But privacy is something they can protect, and they need to protect it.

“That might fly in the face of First Amendment rights, but if they don’t act, a lot of people are going to be hurt.”

Cat Ellis