Smart cars could be hijacked via AI hacking

Self-driving cars could be hijacked by attackers able to manipulate the Artificial Intelligence (AI) systems that power the vehicles, new research has claimed.

A report from McAfee says that a vehicle's AI system can be manipulated, possibly impacting the future and safety of autonomous vehicles.

This process, which McAfee calls "model hacking", can be used to deceive the AI-driven driver-assistance systems in vehicles that are available to buy today, including models from Tesla.

Model hacking

McAfee’s Advanced Threat Research (ATR) and Advanced Analytics Teams were able to use "minuscule modifications" to create a black-box targeted attack on the Mobileye EyeQ3 camera system found in many modern vehicles, including certain Tesla models. 

This attack allowed the researchers to manipulate the AI technology within a Tesla Model S fitted with Hardware Pack 1, causing it to misclassify a 35 mph speed limit sign and autonomously speed up to 85 mph.

Model hacking works by attacking the algorithms that go into defining rules for an AI system, such as the one found in autonomous vehicles. By negatively influencing or "poisoning" the training set used to create these AI models, hackers can affect nearly every aspect of how a software platform recognises and interacts with the world around it.
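A toy sketch can illustrate the poisoning idea. The example below is hypothetical and not McAfee's actual method: a trivial one-dimensional "nearest-mean" classifier learns a decision boundary from labelled samples, and an attacker who injects a handful of mislabelled training points shifts that boundary, changing how previously clear-cut inputs are classified. All names and values are invented for illustration.

```python
# Hypothetical sketch of training-set "poisoning" (not McAfee's attack):
# a trivial 1-D classifier places its boundary halfway between class means,
# so mislabelled injected points drag the boundary toward the attacker's goal.

def train_threshold(samples):
    """Learn a 1-D decision boundary halfway between the two class means."""
    lo = [x for x, label in samples if label == 0]
    hi = [x for x, label in samples if label == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def classify(threshold, x):
    """Class 0 below the boundary, class 1 at or above it."""
    return 0 if x < threshold else 1

# Clean training data: class 0 clusters low, class 1 clusters high.
clean = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]

# Attacker injects large values deliberately mislabelled as class 0,
# pulling the class-0 mean upward and shifting the learned boundary.
poisoned = clean + [(11, 0), (12, 0), (13, 0)]
```

With the clean data the boundary sits at 5.0 and an input of 6 falls in class 1; after poisoning the boundary moves to 7.5 and the same input flips to class 0, without the attacker ever touching the deployed model directly.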

"Simply by perturbating – changing the magnitudes of a few features (such as pixels for images), zeros to ones/ones to zeros, or removing a few features – the attacker can wreak havoc in security operations with disastrous effects," McAfee researchers Steve Povolny and Celeste Fralick wrote in a blog post describing the attack.

McAfee says that while there has been no documented report of model hacking in the wild yet, the interest in the field is growing, meaning hackers may have their own interests piqued soon. And with the number of vehicles with autonomous driving capabilities set to reach nearly 750,000 by 2023 according to Gartner research, the need to spot such new threats before they evolve is paramount.

"The good news is that much like classic software vulnerabilities, model hacking is possible to defend against, and the industry is taking advantage of this rare opportunity to address the threat before it becomes of real value to the adversary," the researchers noted.