AI malware, Gemini lures and more: Google reveals how hackers are actually using AI


  • GTIG finds threat actors are cloning mature AI models using distillation attacks
  • Sophisticated malware can use AI to manipulate code in real time to avoid detection
  • State-sponsored groups are creating highly convincing phishing kits and social engineering campaigns

If you’ve used any modern AI tools, you’ll know they can be a great help in reducing the tedium of mundane and burdensome tasks.

Well, it turns out threat actors feel the same way, as the latest Google Threat Intelligence Group AI Threat Tracker report has found that attackers are using AI more than ever.

From probing how AI models reason in order to clone them, to integrating AI into attack chains to bypass traditional network-based detection, GTIG has outlined some of the most pressing threats. Here's what it found.


How threat actors use AI in attacks

For starters, GTIG found threat actors are increasingly using ‘distillation attacks’ to quickly clone large language models for their own purposes. Attackers fire a huge volume of prompts at the target LLM to learn how it reasons about queries, then use its responses to train a model of their own.

Attackers can then use their own model to avoid paying for the legitimate service, study the distilled model to understand how the original LLM is built, or probe it for exploits that may also work against the legitimate service.

Illustration of model extraction attacks

(Image credit: Google Threat Intelligence Group)
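The prompt-flood-then-train loop GTIG describes can be sketched in miniature. This is a toy illustration, not code from the report: the "teacher" below stands in for a commercial LLM API, and the "student" is a trivial word-vote table rather than a fine-tuned neural network.

```python
from collections import Counter

def teacher_model(prompt: str) -> str:
    # Stand-in for the commercial LLM being cloned (illustrative only).
    return "positive" if "good" in prompt else "negative"

# Step 1: flood the teacher with prompts and record how it responds.
prompts = ([f"Is product {i} good or bad?" for i in range(50)] +
           [f"Review {i}: bad experience" for i in range(50)])
training_pairs = [(p, teacher_model(p)) for p in prompts]

# Step 2: train a "student" on the teacher's outputs. In a real distillation
# attack this would be a neural network fine-tuned on thousands of
# prompt/response pairs; here it is just a table of word/label counts.
word_labels: dict = {}
for prompt, label in training_pairs:
    for word in prompt.lower().split():
        word_labels.setdefault(word, Counter())[label] += 1

def student_model(prompt: str) -> str:
    # The clone answers by voting with the labels it saw alongside each word.
    votes = Counter()
    for word in prompt.lower().split():
        votes.update(word_labels.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"
```

Once the student mimics the teacher closely enough, the attacker never needs to query (or pay for) the legitimate service again.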

AI is also being used to support intelligence gathering and social engineering campaigns. Both Iranian and North Korean state-sponsored groups have utilized AI tools in this way, with the former using AI to gather information on business relationships in order to create a pretext for contact, and the latter using AI to amalgamate intelligence to help plan attacks.

GTIG has also spotted a rise in AI usage for creating highly convincing phishing kits for mass distribution in order to harvest credentials.

Moreover, some threat actors are integrating AI models into malware to allow it to adapt and avoid detection. One example, tracked as HONESTCUE, dodged network-based detection and static analysis by using Gemini to rewrite and execute code during an attack.

HONESTCUE malware

(Image credit: Google Threat Intelligence Group)

But not all threat actors are alike. GTIG has also noted serious demand for custom AI tools built for attackers, with specific calls for tools capable of writing malware code. For now, attackers rely on distillation attacks to create custom models to use offensively.

But if such tools were to become widely available and easy to distribute, it is likely that threat actors would quickly adopt malicious AI into attack vectors to improve the performance of malware, phishing, and social engineering campaigns.

To defend against AI-augmented malware, many security vendors are deploying AI tools of their own. Rather than relying on static analysis, these tools analyze potential threats in real time, recognizing the behavior of AI-augmented malware.

AI is also being employed to scan emails and messages in order to spot phishing in real time at a scale that would require thousands of hours of human work.
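A trained classifier does this scanning in practice; as a simplified, rule-based stand-in, the sketch below shows the kind of signals such a system weighs. The patterns and weights are invented for illustration, not drawn from any real product.

```python
import re

# Illustrative phishing signals and weights; a real AI filter would learn
# thousands of these from labeled mail rather than hand-code them.
PHISHING_SIGNALS = {
    r"verify your (account|password)": 0.4,
    r"urgent(ly)?": 0.2,
    r"click (here|the link) (immediately|now)": 0.3,
    r"https?://\d+\.\d+\.\d+\.\d+": 0.5,  # links to raw IP addresses
}

def phishing_score(message: str) -> float:
    # Sum the weights of matched signals, capped at 1.0.
    score = sum(weight for pattern, weight in PHISHING_SIGNALS.items()
                if re.search(pattern, message, re.IGNORECASE))
    return min(score, 1.0)

def is_suspicious(message: str, threshold: float = 0.5) -> bool:
    return phishing_score(message) >= threshold
```

The advantage of the learned version is scale: it can score millions of messages per day against far subtler signals than any hand-written rule set.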

Moreover, Google is actively monitoring Gemini for potentially malicious usage, and has deployed Big Sleep, a tool that hunts for software vulnerabilities, and CodeMender, a tool that helps patch them.

Benedict Collins
Senior Writer, Security

Benedict is a Senior Security Writer at TechRadar Pro, where he has specialized in covering the intersection of geopolitics, cyber-warfare, and business security.

Benedict provides detailed analysis on state-sponsored threat actors, APT groups, and the protection of critical national infrastructure, with his reporting bridging the gap between technical threat intelligence and B2B security strategy.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the University of Buckingham Centre for Security and Intelligence Studies (BUCSIS), with his specialization providing him with a robust academic framework for deconstructing complex international conflicts and intelligence operations, and the ability to translate intricate security data into actionable insights.
