The hype around artificial intelligence (AI) has reached a fever pitch in the past few months, as tech giants such as Google, Microsoft, Facebook and Apple have unveiled new AI technology that could bring it out of the realm of science fiction and into the mainstream.
Amazon's dedicated staff of 1,000 Alexa developers is fending off Google by reportedly teaching its software to recognize your emotional state as it sells you products. Meanwhile, Facebook, IBM, and other tech giants are using AI to study your social media presence and search history so they can advertise goods directly to you.
Meanwhile, Apple announced at this week's WWDC that it's placing a smarter Siri on not just every iOS 10 device, but also macOS and allowing third-party developers access to the Siri SDK. iDevice owners will soon be able to link their apps to Apple's AI so they can order an Uber ride or check their calendar simply by asking Siri, without needing to open an app or even unlock their phones.
In the midst of this apparent AI renaissance, some experts have raised worries about how these intelligences could threaten humanity, or make us obsolete or stupid by comparison; ideas for AI kill switches, cybernetic neural enhancements, and a pre-programmed love of humanity have all been suggested. Others raise privacy concerns about AI studying our data and giving companies too much private, profitable information in the name of personalization.
I spoke with Brown University professor Michael Littman, who studies artificial intelligence and spearheads the Humanity-Centered Robotics Initiative, to wade through the empty speculation and premature promises and determine how our AI future may actually evolve over the next few years - and whether you should be optimistic or concerned.
A battle of chatty assistants
Google CEO Sundar Pichai predicted in recent weeks that we will evolve "from a mobile-first world to an AI-first world," with his company leading the pack. Whether this prediction comes true depends entirely on how quickly tech companies can teach AI the art of conversation.
AI assistants like Siri have always used voice recognition software to understand carefully phrased questions and give a pre-programmed response. But new iterations of home AI platforms will use contextual clues and data from online searches to provide specific, personalized information to any question, no matter how poorly phrased.
For example, if you ask, "Where is Finland?" followed by, "What's its capital?", current-gen AIs immediately forget the prior question and fail to make the connection between "Finland" and "its". Next-gen software will access its language database and memory storage of your past questions to better contextualize your question before responding "Helsinki" - the same way humans remember earlier parts of a conversation.
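The context carryover described above can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's actual system: the country list and canned responses are invented stand-ins for a real assistant's knowledge base.

```python
# Minimal sketch of dialogue context carryover: the assistant remembers
# the last entity mentioned so a follow-up pronoun like "its" can resolve.
# CAPITALS is an illustrative stand-in for a real knowledge base.

CAPITALS = {"Finland": "Helsinki", "France": "Paris"}

class Assistant:
    def __init__(self):
        self.last_entity = None  # memory of the previous turn

    def ask(self, question):
        # Remember any country mentioned so later pronouns can resolve to it.
        for country in CAPITALS:
            if country in question:
                self.last_entity = country
                return f"{country} is a country in Europe."
        # "its" refers back to the remembered entity, as in the Finland example.
        if "capital" in question and self.last_entity:
            return CAPITALS[self.last_entity]
        return "Sorry, I lost the thread."

bot = Assistant()
bot.ask("Where is Finland?")
print(bot.ask("What's its capital?"))  # prints "Helsinki"
```

A stateless assistant would fail the second question exactly as the article describes; the single `last_entity` slot is the toy version of the "memory storage of your past questions."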
Interestingly, the most practical way to teach an AI to speak is with another AI. Google is drawing on DeepMind - its AI research subsidiary - along with data-mining models that analyze your web searches and clickthroughs to discover how you ask questions, in order to teach its Google Assistant to better answer your spoken queries. Considering Google Search processes more than a trillion queries per year, that's an enormous foundation of knowledge for teaching Google Assistant how people typically phrase questions and how to answer them.
Facebook's DeepText AI, another data-mining AI that searches through your Facebook posts and Messenger chats, is already interpreting users' posts and web searches with near-human accuracy in seconds. Both Google and Facebook plan on connecting their respective AI assistants to your data in real time as you ask questions, to better understand your colloquial expressions, remember important contextual information, and improve responses over time based on feedback.
Unfortunately, this may fall short, according to Littman.
He says Google's vision of "reliably engaging in dialogues of more than two or three turns is beyond the state of the art. Companies are racing to get [a breakthrough], but I'm not aware of any existing technology that they can tweak to get close to it."
Littman studies reinforcement learning - a branch of machine learning in which an AI learns from evaluative feedback rather than explicit programming - and the technology hasn't matured to the point where AIs know whether they've interpreted you correctly or answered your question well, which means they can't yet improve to better serve your needs.
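The idea of learning from evaluative feedback can be shown with a toy example - a minimal sketch, assuming an invented two-response choice and a made-up reward signal, nothing like Littman's actual research code. The point is that the agent never sees a labeled "correct" answer, only a score.

```python
import random

# Toy illustration of learning from evaluative feedback: the agent only
# receives a reward signal and gradually shifts toward responses that
# score well. The responses and feedback function are invented.

responses = ["short answer", "detailed answer"]
values = {r: 0.0 for r in responses}   # estimated value of each response
counts = {r: 0 for r in responses}

def feedback(response):
    # Stand-in for a user's thumbs-up/thumbs-down: pretend users prefer detail.
    return 1.0 if response == "detailed answer" else 0.0

# Try each response once so every option has at least one evaluation.
for r in responses:
    counts[r] += 1
    values[r] += (feedback(r) - values[r]) / counts[r]

random.seed(0)
for _ in range(200):
    # Epsilon-greedy: mostly exploit the best-valued response, sometimes explore.
    if random.random() < 0.1:
        choice = random.choice(responses)
    else:
        choice = max(values, key=values.get)
    counts[choice] += 1
    # Incremental average update driven by the reward alone - no right answer given.
    values[choice] += (feedback(choice) - values[choice]) / counts[choice]

print(max(values, key=values.get))  # prints "detailed answer"
```

The hard part Littman identifies is exactly the `feedback` function: a real assistant rarely gets a clean signal telling it whether an answer satisfied you.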
Until a company like Google makes a major breakthrough, natural AI conversation is probably too high a barrier for the near future. But for simple, screen-free searches, the technology will only become more practical and ubiquitous as AIs learn to speak our language.
Household robots ... eventually
Teaching an AI robot to navigate a person's home and assist with chores is at least as challenging as teaching AI to speak naturally. And where Google, IBM, and other tech giants have access to endless amounts of free, text-based data to search for the next breakthrough, data on robotics is comparatively impossible to find online and incredibly expensive to produce.
Professor Stefanie Tellex, Littman's colleague at Brown University, recently launched a large-scale experiment with Baxter household robots. Each of her 300 robots uses cameras and infrared sensors to examine a specific object before picking it up with different grips, as reported by Wired. Once a robot finds the ideal grip for an object of that shape, it transmits the data to the other 299 robots so they can learn collectively.
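The fleet-learning idea behind Tellex's experiment can be sketched as a shared lookup. The object shapes and grip names below are hypothetical labels, and a real system works with raw camera and sensor data, but the structure - one robot's discovery reaching the whole fleet - is the same.

```python
# Sketch of fleet learning: once one robot works out a good grip for an
# object shape, it publishes the result so every robot benefits.
# Shapes and grip names are invented labels for illustration.

class FleetRobot:
    shared_grips = {}  # class-level store: one robot's discovery reaches all

    def learn_grip(self, shape, grip):
        # Publish the discovery to the shared store for the whole fleet.
        FleetRobot.shared_grips[shape] = grip

    def pick_up(self, shape):
        grip = FleetRobot.shared_grips.get(shape)
        return f"using {grip} grip" if grip else "must experiment first"

fleet = [FleetRobot() for _ in range(300)]
print(fleet[0].pick_up("mug"))    # prints "must experiment first"
fleet[0].learn_grip("mug", "handle")
print(fleet[299].pick_up("mug"))  # prints "using handle grip"
```

The bottleneck the article describes is on the learning side: each `learn_grip` entry costs hours of physical trial and error, which is why 300 robots is paltry next to the billions of free text examples online AIs train on.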
"It may be the broadest deployment of identical robotic hardware of all time, but 300 is a pretty paltry number compared with the [for example] half a billion people on Facebook," says Littman of Tellex's program. Facebook's AI can examine 300 posts of complex language data in less than a second, whereas collecting effective data for robot maneuvering might take weeks.
Until a larger investment is made in personal robotics, online-only AI will likely remain the focus of major corporations, where the technology is much more viable - and profitable.
AI advertising and the end of privacy?
If the helper robots science fiction has envisioned for decades are still far from reality, another, less glamorous prediction for the future has already come true: advertising powered by artificial intelligence.
In 2002's Minority Report, Tom Cruise's character is harassed by smart advertisements keyed into his retinal scan that remember previous purchases and make corresponding recommendations.
Now, this is becoming a reality. Weather.com, recently purchased by IBM, plans to sell advertising space powered by the Watson AI. Companies will lease Watson and the ad space to speak directly to consumers and respond to their questions and concerns.
The Wall Street Journal describes how consumers could ask "an allergy medication brand ... about whether the product is appropriate for children or what sort of side effects it might cause," or query a food ad about potential recipes.
IBM is counting on natural conversation and informational dialogue getting people past their usual distaste or automatic dismissal of online ads.
Littman thinks this could be effective, but says it depends on the "naturalness of the dialogue," and adds that if the "salesmanship comes across as blatantly manipulative, people will definitely push back."
This same principle applies to the other, more subtle aspect of AI advertising: companies' efforts to turn social media services into hubs of shopping and personalized advertising, with potentially disturbing repercussions for privacy.
Facebook's DeepText AI bots regularly check every post you make, searching for ways to "turn users into buyers", as AdAge notes. If, for instance, you mention needing to buy Valentine's Day flowers for your partner, a third-party chatbot from 1-800-Flowers will message you through the Messenger app, and you will be able to make a purchase without having to leave the app.
While some might see this as incredibly convenient, others will undoubtedly resent the blatant surveillance and invasion of privacy. And many other companies, led by Google, are keeping close tabs on your personal data, either to improve personalization or make money off of you, depending on your interpretation.
Both IBM and Google are spending hundreds of millions of dollars purchasing your medical data to improve their diagnostic algorithms. Google Home uploads recordings of everything you say to the cloud unless you opt out. And both Google and Amazon freely store your email and purchase data to "improve" their AIs' intelligence.
Ever since the technology for improving recommendations emerged in the 1990s, we've all but accepted that companies should store our data to better personalize our internet experience. But Littman says that if AI bots make this fact uncomfortably plain to users, this could backfire on the companies housing our data.
"As a society, we're in the process of significantly renegotiating the boundaries over privacy," says Littman. "I don't know where it's going to lead. I'd like to think that we can walk the tightrope between using personal data to improve lives and support individuality and using personal data to control people and impose conformity … [But] I worry that society doesn't really know what it wants or the implications of what it might get."
Ultimately, it's up to us, individually and as a society, to accept corporate ownership of our information as the price for personalized services and evolving intelligences, or to reject it.
AI overlords? Not likely
Google's DeepMind team has developed plans for an AI kill switch that could interrupt a system's learning, should it gain sentience or turn against mankind. Some tech pioneers are pushing to give AI emotions so the machines love us rather than kill us. Elon Musk has argued that we must give ourselves cybernetic enhancements to avoid becoming "house cats" to our AI protectors.
Littman isn't impressed or amused by these predictions of an AI dystopian future.
"[Musk] has consistently been blurring the line between AI technology of the kind that we're familiar with and a kind of super-intelligent, willful entity that devises its own high-level strategies to thwart humanity's attempts to keep it under control," he says bluntly. "The latter is pure fantasy and a significant distraction from attempts to develop technology to help people, in my opinion."
We're attributing human emotions and ominous traits to computer programs designed specifically without desires or instinct, when we're not remotely close to that level of sophisticated programming. We have no reason to worry about Skynet or Cylons, but these baseless fears could easily derail the positive potential for the technology.
Despite his optimism for the technology, Littman warns we need strong safety nets in place so that improvements in automation don't create economic disruptions in some industries.
Littman recently attended the second of four AI for Good conferences at the White House, which discussed how AI could prevent medical errors in hospitals, track and thwart poachers, improve city infrastructure, and produce specialized curriculum for students. Government, then, is becoming involved in shaping our AI future.
"My belief is that the White House is responding to the kind of Musk-fueled concerns about AI," argues Littman. "Highlighting the ways AI technology is being used to help people today and in the future seems like a great way to tamp down fears."