Researchers warn that AI could turn self-driving cars, drones into weapons
Emphasis on “could”, not “will”
In our humble opinion, AI should be embraced, not feared. Most people associate the technology with HAL or Skynet, but the potential benefits of AI seem to outweigh fears of our creations gaining sentience and destroying us all.
In truth, AI will only be as benevolent or evil as human intentions make it.
A 101-page report titled The Malicious Use of Artificial Intelligence, from researchers at Yale, Oxford, Cambridge and OpenAI, stresses that the real danger of this technology could come from hackers wielding malicious code to target vulnerabilities in AI-automated systems, giving attackers a greater capacity to cause physical or political harm.
Released Wednesday and first covered by CNBC, the report stresses the “dual-use” nature of AI: automated programs built with the best of intentions can be twisted into harmful technology. What should we fear most? The report offers examples: “[AI] surveillance tools can be used to catch terrorists or oppress ordinary citizens,” and commercial drones designed for grocery delivery could be weaponized.
Another scenario involves self-driving cars. The authors warn that hackers could alter just “a few pixels” of how an AI perceives a stop sign, a change small enough that humans might not notice it, but one that could cause an entire fleet of cars running one company’s software to ignore traffic laws.
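The report itself doesn’t include code, but if you’re curious how changing “a few pixels” can fool a machine-learning model, the snippet below is a minimal, hypothetical sketch in Python: a toy linear “stop sign detector” built with numpy, using invented weights, an invented image and an invented threshold, that flips its answer after a per-pixel change of just 0.02 on a 0–1 brightness scale. It illustrates the general idea of an adversarial perturbation, not how any real self-driving system works.

```python
# Toy sketch of an adversarial perturbation. Everything here is made up for
# illustration; no real perception model or vehicle software is involved.
import numpy as np

rng = np.random.default_rng(0)

# A fake 8x8 grayscale "image" and a fake linear classifier:
# score > 0 means "stop sign", score <= 0 means "no stop sign".
image = rng.uniform(0.4, 0.6, size=(8, 8))
weights = rng.normal(0.0, 1.0, size=(8, 8))
bias = 0.5 - float((weights * image).sum())  # chosen so the clean image scores +0.5

def score(x):
    return float((weights * x).sum() + bias)

# Fast-gradient-style attack: nudge every pixel a tiny step against the
# gradient of the score (for a linear model the gradient is just `weights`).
epsilon = 0.02  # maximum per-pixel change, far smaller than a person would notice
adversarial = np.clip(image - epsilon * np.sign(weights), 0.0, 1.0)

for name, x in [("clean", image), ("adversarial", adversarial)]:
    label = "stop sign" if score(x) > 0 else "no stop sign"
    print(f"{name:>11} image: score {score(x):+.3f} -> {label}")
print(f"largest pixel change: {np.abs(adversarial - image).max():.3f}")
```

Running the sketch shows the classifier confidently reporting a stop sign on the clean image and missing it entirely on the perturbed one, even though no single pixel moved by more than 0.02.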
The authors organized potential security threats into digital, physical and political categories:
- AI is already used to study patched code vulnerabilities and extrapolate what new ones bots could exploit; in the future, AI could give bots human-like browsing habits, making denial-of-service attacks far harder to defend against
- Automation could “endow low-skill individuals with previously high-skill attack capabilities”; in theory, a single person could control a “mini-swarm” of drones using a coordinating AI script
- Political repercussions could include “deepfake” technology being used to smear a political leader’s reputation, or AI automating the disinformation and trolling campaigns Russia currently uses to influence elections.
Anticipating worst-case scenarios
These examples, as frightening as they are, mostly remain hypothetical, and the report isn’t meant to imply that artificial intelligences should be banned as inherently dangerous.
Instead, the report’s authors stress that certain action items should be put into motion by governments and businesses now, while the industry is still nascent.
Policymakers, they say, should start studying the technology and work with experts in the field to effectively regulate the creation and use of AI. AI developers, meanwhile, need to self-regulate, always anticipating the worst possible implications of their technology and warning policymakers about them in advance.
The report also urges AI developers to team up with security experts in related fields like cybersecurity, and to see whether the principles that keep those technologies safe could also be used to safeguard AI.
The full report goes into far more detail than we can summarize here. But the gist is that AI is a powerful tool, and as with any new technology, governments and the stakeholders in its development have to study it to make sure it isn’t exploited for nefarious purposes.
- For all of the positive applications and implications of AI, check out our AI Week Hub to see how AI is improving our lives.