Our existence as a species is, in all likelihood, limited.
Whether the downfall of the human race begins as a result of a devastating asteroid impact, a natural pandemic, or an all-out nuclear war, we are facing a number of risks to our future, ranging from the vastly remote to the almost inevitable.
Global catastrophic events like these would, of course, be devastating for our species. Even if nuclear war obliterated 99% of the human race, however, the surviving 1% could feasibly recover, and even thrive years down the line, with no lasting damage to our species' potential.
There are some events, though, that there's no coming back from – no possibility of rebuilding, no recovery for the human race.
These catastrophic events are known as existential risks – in other words, circumstances that would cause human extinction or drastically reduce our potential as a species.
It’s these existential risks that form the basis of ‘The End of The World with Josh Clark’, a new 10-part podcast hosted by Josh Clark, whom you may already know as the host of Stuff You Should Know (which recently became the first podcast to be downloaded one billion times).
The new podcast sees Clark examining the different ways the world as we know it could come to an abrupt end – including a super intelligent AI taking over the world.
Over the course of his research, Clark spoke to experts in existential risk and AI, including Swedish philosopher and founder of the Future of Humanity Institute Nick Bostrom, philosopher and co-founder of the World Transhumanist Association David Pearce, and Oxford University philosopher Sebastian Farquhar.
We spoke to him about the new podcast, and why he, and experts in the field of existential risk, think humanity’s advances in artificial intelligence technology could ultimately lead to our doom.
What is existential risk?
Some might say that there are enormous risks facing humanity right now. Man-made climate change is a prime example which, if left unchecked, could be “horrible for humanity”, Clark tells us. “It could set us back to the Stone Age or earlier”.
Yet even this doesn’t qualify as an existential risk. As Clark explains, "we could conceivably, over the course of tens of thousands of years, rebuild humanity, probably faster than the first time, because we would still have some or all of that accumulated knowledge we didn't have the first time we developed civilization."
With an existential risk, that's not the case. As Clark puts it, "there are no do-overs. That's it for humanity."
It was philosopher Nick Bostrom who first put forward the idea that existential risk should be taken seriously. In a scholarly article published in the Journal of Evolution and Technology, he defines an existential risk as "one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."
Clark explains that, in this scenario "even if we continue on as a species, we would never be able to get back to [humanity's development] at that point in history."
While it can feel overwhelming to consider the ways we could bring about our own demise, the subject becomes more accessible through the lens of Clark's End Of The World podcast series.
When we asked him why he took on such a formidable subject matter, he told us that, “the idea that humans could accidentally wipe ourselves out is just fascinating.”
And perhaps the most fascinating of all the potential existential risks facing humanity today is the one posed by a super intelligent AI taking over the world.
The fundamentals of artificial intelligence
In recent years, humanity has enjoyed a technological boom: the advent of space travel, the birth of the internet, and huge leaps in the field of computing have changed the way we live immeasurably. As technology has become more advanced, a new type of existential risk has come to the fore: a super intelligent AI.
Unravelling how artificial intelligence works is the first step in understanding how it could pose an existential risk to humanity. In the ‘Artificial Intelligence’ episode of the podcast, Clark starts by giving an example of a machine that is programmed to sort red balls from green balls.
The technology that goes into a machine of this apparent simplicity is vastly more complicated than you would imagine.
If programmed correctly, it can excel at sorting red balls from green balls, much like IBM's Deep Blue excels at chess. As impressive as these machines are, however, they can do one thing, and one thing only.
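To see just how narrow that kind of machine is, here's a toy sketch of the ball sorter in code (our own illustration, not from the podcast – the function and field names are invented for the example). It follows a single hand-written rule; there is no learning, and no ability to do anything else:

```python
# A minimal sketch of a narrow, hand-programmed ball sorter.
# It applies one fixed rule and can never do anything but this.

def sort_balls(balls):
    """Split a list of balls into red and green bins by their color label."""
    bins = {"red": [], "green": []}
    for ball in balls:
        bins[ball["color"]].append(ball)
    return bins

batch = [{"id": 1, "color": "red"},
         {"id": 2, "color": "green"},
         {"id": 3, "color": "red"}]

result = sort_balls(batch)
print(len(result["red"]), len(result["green"]))  # → 2 1
```

The "intelligence" here is entirely the programmer's: the machine only ever executes the rule it was given, which is exactly the one-trick quality Clark contrasts with general intelligence.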
Clark explains that “the goal of AI has never been to just build machines that can beat humans at chess”; instead, it is to “build a machine with general intelligence, like a human has."
He continues, “to be good at chess and only chess is to be [a] machine. To be good at chess, good at doing taxes, good at speaking Spanish, and good at picking out apple pie recipes, this begins to approach the ballpark of being human.”
This is the key problem that early AI pioneers encountered in their research – how can the entirety of the human experience be taught to a machine? The answer lies in neural networks.
A neural network is a type of machine learning model loosely patterned on the human brain: an artificial network of connected nodes that, via a learning algorithm, allows a computer to improve by incorporating new data.
A common task for a neural network using deep learning is object recognition. Here the network is presented with a large number of images of a certain type of object, such as cats or street signs.
The network, by analyzing the recurring patterns in the presented images, learns to categorize new images.
Advances in AI
Early artificial intelligence created machines that excelled at one thing, but the recent development of neural networks has allowed the technology to flourish.
By 2006, the internet had become a huge force in developing neural networks, thanks to vast data repositories such as Google Images and YouTube.
It's this recent explosion of data access that has allowed the field of neural networks to fully take off, meaning that the artificially intelligent machine of today no longer needs a human to supervise its training – it can train itself by incorporating and analyzing new data.
Sounds convenient, right? Well, although artificial intelligence works far better thanks to neural nets, the danger is that we don’t fully understand how they work. Clark explains that “we can’t see inside the thought process of our AI”, which could make the people who use AI technology nervous.
A 2017 article by Technology Review described the neural network as a kind of "black box" – in other words, data goes in, the machine’s action comes out, and we have little understanding of the processes in between.
Furthermore, if the use of neural networks means that artificial intelligence can easily self-improve, becoming more intelligent without our input, what’s to stop it from outpacing humans?
As Clark says, "[AI] can self improve, it can learn to code. The seeds for a super intelligent AI are being sown” – and this, according to the likes of Nick Bostrom, poses an existential risk to humanity. In his article on existential risk for the Journal of Evolution and Technology, Bostrom says, "When we create the first super intelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so."
What are the risks posed by a super-intelligent AI?
“Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind.”
This is a quote from British mathematician I. J. Good, and one Clark refers to throughout the podcast and in our conversation, as a way of explaining how a super-intelligent AI could come to exist.
He gives the example of an increasingly intelligent machine that has the ability to write code – it would have the potential to write better versions of itself, with the rate of improvement increasing exponentially as it becomes better at doing just that.
As Clark explains, "eventually you have an AI that is capable of writing an algorithm that exceeds any human's capability of doing that. At that point we enter what Good called the 'intelligence explosion'...and at that point, we are toast."
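Good's intelligence explosion can be caricatured in a few lines of code (a purely illustrative toy model of ours, not a prediction from Clark or Good). If each self-rewrite produces a proportionally better rewriter, the absolute gain per generation keeps growing, and a fixed human baseline is overtaken surprisingly fast:

```python
# Toy model of recursive self-improvement (illustrative only).
# Each generation the AI rewrites itself; because the gain is a
# fraction of its *current* capability, the absolute improvement
# per rewrite keeps growing, and growth compounds exponentially.

ai_capability = 1.0       # arbitrary starting skill level
human_capability = 100.0  # fixed human baseline (100x the AI's start)

generations = 0
while ai_capability <= human_capability:
    ai_capability *= 1.5  # each rewrite yields a 50%-better rewriter
    generations += 1

print(generations)  # → 12 self-rewrites to surpass the human baseline
```

The specific numbers are arbitrary; the point is the shape of the curve – a self-improving system closes a 100-fold gap in roughly a dozen compounding steps, which is the "explosion" Good describes.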
Benevolence is a human trait
So why does this pose an existential risk? Clark asks us to imagine "an AI we created that has become super intelligent beyond our control."
He continues, "If we hadn't already programmed what AI theorists call 'friendliness' into the AI, we would have no reason to think it would act in our best interests."
Right now, artificial intelligence is being used to recommend movies on Netflix, conjure up our social media feeds, and translate our speech via apps like Google Translate.
So, imagine Google Translate became super intelligent thanks to the self-improvement capabilities provided by neural networks. "There's not really any inherent danger from a translator becoming super intelligent, because it would be really great at what it does," says Clark; rather, "the danger would come from if it decided it needs stuff that we (humans) want for its own purposes."
Maybe the super intelligent translation AI decides that, in order to self-improve, it needs to take up more network space, or to destroy the rainforests in order to build more servers.
Clark explains that, in creating this podcast, he looked into research from the likes of Bostrom, who believes we would then "enter into a resource conflict with the most intelligent being in the universe – and we would probably lose that conflict", a sentiment echoed by the likes of Stephen Hawking and Microsoft researcher Eric Horvitz.
In the journal article we mentioned previously, Bostrom provided a hypothetical scenario in which a super intelligent AI could pose an existential risk: "We tell [the AI] to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question."
So, the problem isn't that a super intelligent AI would be inherently evil – there is of course no such concept of good and evil in the world of machine learning. The problem is that an AI that can continually self improve to get better at what it is programmed to do wouldn't care if humans were unhappy with its methods of improving efficiency or accuracy.
As Clark puts it, the existential risk comes from "our failure to program friendliness into an AI that then goes on to become super intelligent."
Solutions to the AI problem
So what can be done? Clark admits that this is a "huge challenge", and that the first step would be to "get researchers to admit that this is an actual real problem" – many, he explains, feel generalized intelligence is so far down the road that it isn't worth planning for as a threat.
Secondly, we would need to "figure out how to program friendliness into AI", which will be an enormously difficult undertaking for AI researchers today and in the future.
One problem that arises from teaching an AI morals and values is deciding whose morals and values it should be taught – they are, of course, not universal.
Even if we can agree on a universal set of values to teach the AI, how would we go about explaining morality to a machine? Clark believes that humans generally "have a tendency not to get our point across very clearly" as it is.
Why should we bother planning for existential risk?
If a super intelligent AI poses such a huge existential risk, why not just stop AI research in its tracks completely? Well, as much as it could represent the end of humanity, it could also be the "last invention we need ever make", as I. J. Good famously said.
Clark tells us that "we're at a point in history where we could create the greatest invention that humankind has ever [made], which is a super-intelligent AI that can take care of humans' every need for eternity.
"The other fork in the road goes towards accidentally inventing a super-intelligent AI that takes over the world, and we become the chimpanzees of the 21st century."
There's a lot we don't know about the route artificial intelligence will take, but Clark makes one thing clear: we absolutely need to begin taking the existential risk it poses seriously, otherwise we may just screw humanity out of ever achieving its true potential.
Main image: Franck V via Unsplash