Hackers are using GPT-4 to generate malicious code in real time - here's what we know


  • MalTerminal uses GPT-4 to generate ransomware or reverse shell code at runtime
  • LLM-enabled malware evades detection by creating malicious logic only during execution
  • Researchers found no evidence of deployment; likely a proof-of-concept or testing tool

Cybersecurity researchers from SentinelOne have uncovered a new piece of malware, dubbed MalTerminal, which uses OpenAI’s GPT-4 to generate malicious code in real time.

The researchers claim MalTerminal represents a significant change in how threat actors create and deploy malicious code, noting, "the incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft."

“With the ability to generate malicious logic and commands at runtime, LLM-enabled malware introduces new challenges for defenders.”

A new category of malware

The discovery means the cybersecurity community has an entirely new malware category to fight against: LLM-enabled malware, or malware that embeds large language models directly into its functionality.

In essence, MalTerminal is a malware generator. When an attacker launches it, the tool asks whether they want to create a ransomware encryptor or a reverse shell. The corresponding prompt is then sent to GPT-4, which responds with Python code tailored to the chosen payload.

SentinelOne said the malicious code doesn’t exist in the malware file until runtime; instead, it is generated dynamically. This makes detection by traditional security tools much more difficult, since there is no static malicious code to scan.

The researchers identified the GPT-4 integration after discovering Python scripts and a Windows executable containing hardcoded API keys and prompt structures.

Since the OpenAI API endpoint it relied on was deprecated in late 2023, SentinelOne concluded that MalTerminal must predate that change, which would make it the earliest known example of LLM-enabled malware.

Fortunately, there is no evidence that the malware was ever deployed in the wild, so it may simply have been a proof-of-concept or a red teaming tool. SentinelOne believes MalTerminal is a sign of things to come, and urged the cybersecurity community to prepare accordingly:

“Although the use of LLM-enabled malware is still limited and largely experimental, this early stage of development gives defenders an opportunity to learn from attackers’ mistakes and adjust their approaches accordingly,” the report adds.

“We expect adversaries to adapt their strategies, and we hope further research can build on the work we have presented here.”

Via The Hacker News


