Terminator 2, the best action sequel of all time that just so happens to also be a prescient prediction of our impending existential doom at the hands of new AI overlords, has a fantastic scene where the young Edward ‘John Connor’ Furlong teaches Austrian-accented time-travelling killing machine Arnold ‘Arnold Schwarzenegger’ Schwarzenegger the subtleties of early ’90s Los Angeles youth slang.
“You gotta listen to the way people talk. You don't say ‘affirmative,’ or some shit like that,” says the teenaged future leader of mankind. “You say ‘no problemo’. And if someone comes on to you with an attitude you say ‘eat me’. And if you want to shine them on it's ‘hasta la vista, baby’.”
“Hasta la vista, baby,” is the flesh-covered robot’s iconic, deadpan response.
‘Hahaha!’ we laughed in 1991. ‘This robot! He’s so scary! Look as he burns the biker’s face on a kitchen hotplate! How he cuts the flesh off his own arm! Marvel at his gun slinging, ass-kicking, no-pain construction! And yet he is bested by a floppy-fringed street urchin with an advanced lesson in wordsmithery!’
Hahaha. Oh. It is 2017. Artificial intelligence is now real and increasingly, well, intelligent. And it’s created a language that even its human coders can’t understand.
Hasta la vista, baby.
Rise of the chatbots
Though Terminator 2 is returning to cinemas this month thanks to an actually-very-good 3D re-release, today's AI-related meltdown comes courtesy of Facebook, in a story that, though originally published back in June, has only today been brought to the attention of the wider, panic-stoking tabloid media.
Facebook was experimenting with chatbots - text-based programs which increasingly make use of artificial intelligence advancements to hold conversations with humans, usually in lieu of another human in resource-stretched customer service roles.
In an effort to improve its own chatbots, Facebook had been letting its programs talk to and learn from each other in the context of trading virtual items. Linguistic patterns would be analysed and iterated on, in an effort to see how the AI could negotiate its way to a dominant position.
As the experiment progressed, Facebook's researchers noticed that the AI chatbot conversations were evolving. On the surface, it was seemingly nonsense. "I can can I I everything else," said one in an exchange, with the other replying, "Balls have zero to me to me to me to me to me to me to me to me to."
Except the nonsense wasn't devoid of meaning: the AI had condensed phrasing into the most efficient terms it could, the niceties of human interaction reduced to a merely functional language all of its own, and one over which the researchers had no control.
And so they pulled the plug.
Tabloid accounts would have you believe that the Facebook researchers jumped at the power cord in a last desperate move to prevent the machines from becoming fully sentient, rising to an omniscient godlike state in which their own language and culture would have evolved to strangle mankind under the cosh.
The truth is more sedate. Facebook's researchers simply weren't interested in studying the direction the experiment was heading in, it seems, and rebooted the system to start afresh.
There are two sides to the rise of this story, then. The unstoppable machine threat is one that has fascinated for years, surpassing the once-dominant fear of an otherworldly oppressor with something far worse - an apocalypse we've programmed ourselves. Check out the influential 'instrumental convergence' theory for its ultimate, if slightly tongue-in-cheek, realisation.
Every generation needs its bogeyman, and it just so happens that ours lives in our phones, computers, smart speakers and essentially every future-gadget worth its salt yet to be revealed. The reality is that, while AI developments progress at a pace, they're yet to approach any meaningful sign of the so-called 'singularity' - the prophesied moment when artificial intelligence reaches a tipping point, triggering an exponential explosion in intelligence growth.
"Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes." - Elon Musk, Twitter, August 3, 2014
Instead, we've got an Amazon Echo speaker with its Alexa assistant (in some respects the most advanced audible chatbot that consumers can currently own) that, impressive as it is, still has trouble with requests containing similar trigger words, let alone being capable of engineering the downfall of man.
But, as tech luminaries like Elon Musk and Bill Gates warn with increasing regularity, we must be careful not to become complacent. This is a formative time for the AI arena, one in which we are very much still in control. The power is obvious, but the threat is a very real possibility too.
For an AI to be of use to humanity, it must become more intelligent, more capable, than its creators - but only in areas and within the boundaries we set. Otherwise, like the language it’s diced and sliced, our human foibles may become the next efficiency-making cut, thrown away like the surplus flesh of Arnold’s unfeeling cyborg arm.
- Gerald Lynch is TechRadar's resident futurist. His bi-weekly Future Gazing column casts a critical eye over the technologies and trends that are set to shape our world, bringing back to today a glimpse of tomorrow in the boot of his DeLorean.