AI malware, Gemini lures and more: Google reveals how hackers are actually using AI
Threat actors are using AI in new, creative, and dangerous ways, Google warns
- GTIG finds threat actors are cloning mature AI models using distillation attacks
- Sophisticated malware can use AI to manipulate code in real time to avoid detection
- State-sponsored groups are creating highly convincing phishing kits and social engineering campaigns
If you’ve used any modern AI tools, you’ll know they can be a great help in reducing the tedium of mundane and burdensome tasks.
Well, it turns out threat actors feel the same way, as the latest Google Threat Intelligence Group AI Threat Tracker report has found that attackers are using AI more than ever.
From probing how AI models reason in order to clone them, to integrating AI into attack chains to bypass traditional network-based detection, GTIG has outlined some of the most pressing threats - here's what it found.
How threat actors use AI in attacks
For starters, GTIG found threat actors are increasingly using ‘distillation attacks’ to quickly clone large language models for their own purposes. Attackers fire a huge volume of prompts at the target LLM to observe how it reasons about queries, then use the responses to train a model of their own.
A distilled clone lets attackers avoid paying for the legitimate service, analyze how the original LLM is built, and probe their own copy for weaknesses that may also be exploitable in the legitimate service.
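The core mechanics of distillation can be illustrated harmlessly at toy scale: a "student" model learns to imitate a "teacher" purely from the teacher's input/output pairs, without ever seeing the teacher's weights. The sketch below uses a tiny linear classifier as the teacher; all names and numbers are illustrative assumptions, and real attacks against LLM APIs operate at a vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# The teacher's internals are hidden from the attacker, who can only
# observe responses to queries (as with a commercial LLM API).
W_teacher = rng.normal(size=(4, 3))
def query_teacher(x):
    return softmax(x @ W_teacher)  # soft outputs reveal how it "reasons"

# Step 1: fire a large volume of queries at the teacher and log responses.
X = rng.normal(size=(5000, 4))
Y = query_teacher(X)

# Step 2: train a student to mimic those responses
# (gradient descent on cross-entropy against the teacher's soft labels).
W_student = np.zeros((4, 3))
for _ in range(300):
    P = softmax(X @ W_student)
    W_student -= 0.5 * (X.T @ (P - Y)) / len(X)

# The student now reproduces the teacher's behavior on unseen inputs.
X_test = rng.normal(size=(100, 4))
agreement = np.mean(
    query_teacher(X_test).argmax(1) == softmax(X_test @ W_student).argmax(1)
)
print(f"student/teacher agreement: {agreement:.0%}")
```

The soft probability outputs matter: they leak far more about the teacher's decision boundaries than hard labels would, which is why high query volume against a responsive model is the signature of this attack.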
AI is also being used to support intelligence gathering and social engineering campaigns. Both Iranian and North Korean state-sponsored groups have utilized AI tools in this way, with the former using AI to gather information on business relationships in order to create a pretext for contact, and the latter using AI to amalgamate intelligence to help plan attacks.
GTIG has also spotted a rise in AI usage for creating highly convincing phishing kits for mass-distribution in order to harvest credentials.
Moreover, some threat actors are integrating AI models into malware so it can adapt to avoid detection. One example, tracked as HONESTCUE, dodged network-based detection and static analysis by using Gemini to rewrite and execute code during an attack.
But not all threat actors are alike. GTIG has also noted that there is a serious demand for custom AI tools built for attackers, with specific calls for tools capable of writing code for malware. For now, attackers are reliant on using distillation attacks to create custom models to use offensively.
But if such tools were to become widely available and easy to distribute, it is likely that threat actors would quickly adopt malicious AI into attack vectors to improve the performance of malware, phishing, and social engineering campaigns.
In order to defend against AI-augmented malware, many security solutions are deploying their own AI tools to fight back. Rather than relying on static analysis, AI can be used to analyze potential threats in real time to recognize the behavior of AI-augmented malware.
AI is also being employed to scan emails and messages in order to spot phishing in real time at a scale that would require thousands of hours of human work.
Moreover, Google is actively hunting for potentially malicious AI usage in Gemini, and has deployed one tool to discover software vulnerabilities (Big Sleep) and another to help patch them (CodeMender).


Benedict has been with TechRadar Pro for over two years, and has specialized in writing about cybersecurity, threat intelligence, and B2B security solutions. His coverage explores the critical areas of national security, including state-sponsored threat actors, APT groups, critical infrastructure, and social engineering.
Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the Centre for Security and Intelligence Studies at the University of Buckingham, providing him with a strong academic foundation for his reporting on geopolitics, threat intelligence, and cyber-warfare.
Prior to his postgraduate studies, Benedict earned a BA in Politics with Journalism, providing him with the skills to translate complex political and security issues into comprehensible copy.