LLM services are being hit by hackers looking to sell stolen access
Access to cloud-hosted LLMs can be expensive, so hackers are going another route
Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently begun stealing, and selling, login credentials for the tools.
Cybersecurity researchers at the Sysdig Threat Research Team recently spotted one such campaign, dubbing it LLMjacking.
In its report, Sysdig said it observed a threat actor abusing a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. This flaw allowed them to access the network and scan it for Amazon Web Services (AWS) credentials for LLM services.
New methods of abuse
"Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers," the researchers explained in the report. "In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted."
The researchers were able to discover the tools the attackers used to generate the requests that invoked the models. Among them was a Python script that checked credentials for ten AI services, determining which ones were usable. The services included AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
They also discovered that the attackers didn’t run any legitimate LLM queries during the verification stage, instead doing “just enough” to find out what the credentials were capable of and what quotas applied to them.
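Sysdig has not published the attackers' script, but the "just enough" approach they describe can be illustrated with a minimal sketch: probing a cheap, read-only metadata endpoint (here, OpenAI's public model-listing endpoint) rather than running a billable completion. The function name and the injectable `opener` parameter are hypothetical conveniences for this sketch; only the endpoint URL and the Bearer-token header are real OpenAI API conventions.

```python
# Hedged sketch: validate an API key without invoking any model,
# in the spirit of the "just enough" checks the researchers describe.
import urllib.request
from urllib.error import HTTPError


def check_openai_key(api_key: str, opener=urllib.request.urlopen) -> bool:
    """Return True if the key is accepted, without running a billable query.

    GET /v1/models is a read-only listing, so no tokens are consumed.
    `opener` is injectable purely so the logic can be exercised offline.
    """
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with opener(req) as resp:
            return resp.status == 200
    except HTTPError:
        # 401/403 responses mean the key is invalid or lacks permission
        return False
```

A real multi-service checker would repeat this pattern per provider, since each of the ten services Sysdig lists has its own authentication scheme and metadata endpoints.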
Reporting on the findings, The Hacker News notes they are evidence that hackers are finding new ways to weaponize LLMs beyond the usual prompt injection and model poisoning attacks: monetizing access to the models while the bill gets mailed to the victim.
The bill, the researchers stressed, could be substantial, running up to $46,000 a day for LLM use.
"The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it," the researchers added. "By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations."
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.