State-sponsored hackers are having a blast with LLMs — Microsoft and OpenAI warn new tactics could cause more damage than ever before


Hackers are increasingly turning to LLMs and AI tools to refine their tactics, techniques, and procedures (TTPs) in their campaigns, new reports have warned.

A new research paper released by Microsoft in collaboration with OpenAI has revealed how threat actors are using the latest technical innovations to keep defenders on their toes.

Microsoft and OpenAI have detected and disrupted attacks from Russian, North Korean, Iranian, and Chinese state-backed threat actors who have been using LLMs to refine their hacking playbooks.

AI refines hackers' edge

State-backed hackers have been abusing LLMs' built-in language capabilities to refine how they target foreign adversaries and to make their social engineering campaigns appear more legitimate. This language processing helps them establish seemingly genuine professional relationships with their victims.

The report also says the companies have observed hackers performing intelligence gathering, using LLMs to collect information about the industries and locations in which their victims live and work, as well as to learn more about their personal relationships.

In one example, Microsoft and OpenAI observed the Russian GRU Unit 26165-linked Forest Blizzard group using LLMs to research, in very specific detail, how satellites operate and communicate. The group has also been observed using AI to refine its scripting abilities, most likely to automate or increase the efficiency of its technical operations.

North Korea-linked group Emerald Sleet has been observed using LLMs to learn how to exploit publicly reported critical software vulnerabilities, generate content for spear-phishing campaigns, and identify organizations that gather information on North Korean nuclear and defense capabilities.

In all of these cases, Microsoft and OpenAI identified and disabled all the accounts used by these threat actors, with Microsoft stating, “AI technologies will continue to evolve and be studied by various threat actors. 

“Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community.”

Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cybercriminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.