State-sponsored hackers are having a blast with LLMs — Microsoft and OpenAI warn new tactics could cause more damage than ever before

(Image credit: Shutterstock)

Hackers are increasingly turning to LLMs and AI tools to refine their tactics, techniques and procedures (TTP) in their campaigns, new reports have warned.

A new research paper released by Microsoft in collaboration with OpenAI has revealed how threat actors are using the latest technical innovations to keep defenders on their toes.


AI refines hackers' edge

State-backed hackers have been abusing LLMs' built-in language support to refine their targeting of foreign adversaries and to appear more legitimate when conducting social engineering campaigns. This language processing allows them to establish seemingly genuine professional relationships with their victims.

The report also notes that hackers have been observed performing intelligence gathering with LLMs, researching the industries and locations in which their victims live and work, as well as their personal relationships.

In one example, Microsoft and OpenAI observed Forest Blizzard, a group linked to Russian GRU Unit 26165, using LLMs to gather highly specific information on how satellites operate and communicate. The group has also been observed using AI to refine its scripting abilities, most likely to automate its technical operations or make them more efficient.

North Korea-linked group Emerald Sleet has been observed using LLMs to learn how to exploit publicly reported critical software vulnerabilities, generate content for spearphishing campaigns, and identify organizations that gather information on North Korean nuclear and defense capabilities.

In all of these cases, Microsoft and OpenAI identified and disabled the accounts used by these threat actors, with Microsoft stating, “AI technologies will continue to evolve and be studied by various threat actors.

“Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community.”


Benedict Collins
Senior Writer, Security

Benedict is a Senior Security Writer at TechRadar Pro, where he has specialized in covering the intersection of geopolitics, cyber-warfare, and business security.

Benedict provides detailed analysis on state-sponsored threat actors, APT groups, and the protection of critical national infrastructure, with his reporting bridging the gap between technical threat intelligence and B2B security strategy.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the University of Buckingham Centre for Security and Intelligence Studies (BUCSIS), with his specialization providing him with a robust academic framework for deconstructing complex international conflicts and intelligence operations, and the ability to translate intricate security data into actionable insights.