'The math is simple': OpenClaw 'Trojan Horse' AI agents give hackers full control of 28,000+ systems


  • OpenClaw exposures reveal thousands of internet-accessible, high-risk systems
  • AI agents are being deployed with excessive permissions across critical environments
  • Remote code execution vulnerabilities affect most observed OpenClaw deployments

Agentic systems are moving quickly from experimentation into everyday workflows, yet recent findings suggest security practices are not keeping pace.

According to SecurityScorecard, thousands of OpenClaw deployments are exposed directly to the internet with minimal safeguards.

The team identified 40,214 internet-exposed OpenClaw instances in total, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the internet.


Exposed AI agents become a hacker's dream target

"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated.

Approximately 63% of observed deployments appear vulnerable to remote code execution, allowing attackers to take over the host machine without user interaction.

Among the exposures, the researchers flagged three high-severity CVEs affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8.

Public exploit code is already available for all three vulnerabilities, meaning attackers do not need advanced skills to compromise exposed systems.

The research also found that 549 exposed instances correlate with prior breach activity, and 1,493 are associated with known vulnerabilities that compound the risk for users.

The exposed deployments are heavily concentrated in major cloud and hosting providers, indicating repeatable and easily replicated insecure deployment patterns.

OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent that can schedule meetings, send emails, and manage tasks on behalf of users.

The problem is not the AI's capabilities but the access and permissions granted to these systems without proper security controls.

"In practice, because it was written by AI, security wasn't a dominating feature in the development process," said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard.

"For the folks that want to use the more agentic AI systems, you really need to take careful consideration in what integrations you support and what permissions you actually give."

Many users are configuring these bots with personal names and company names, revealing exactly who is using these AI tools and making them attractive targets for attackers.

Any time a user connects an AI agent to a platform, they are giving it an identity with specific permissions.

That identity may be able to post content, access email, read files, or interact with other systems on the user's behalf.
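The permission-scoping idea above can be sketched in code. This is a minimal, hypothetical illustration of least-privilege gating for agent actions, not OpenClaw's actual integration API; every name here (`AgentPermissions`, the action strings) is invented for the example.

```python
# A minimal sketch of least-privilege gating for agent actions.
# All names here are hypothetical -- OpenClaw's real API is not shown
# in the report. The point: grant an explicit allowlist, and route
# every action through a single checkpoint.

class PermissionDenied(Exception):
    """Raised when an agent attempts an action outside its grant."""

class AgentPermissions:
    def __init__(self, allowed_actions):
        # An explicit allowlist, rather than blanket system access.
        self.allowed_actions = frozenset(allowed_actions)

    def require(self, action):
        if action not in self.allowed_actions:
            raise PermissionDenied(f"agent may not: {action}")

def run_agent_action(perms, action, payload):
    # Every action passes the same checkpoint before executing.
    perms.require(action)
    return f"executed {action} with {payload!r}"

# An agent scoped to calendar tasks cannot send email.
perms = AgentPermissions({"schedule_meeting", "read_calendar"})
print(run_agent_action(perms, "schedule_meeting", "standup 9am"))
try:
    run_agent_action(perms, "send_email", "wire transfer request")
except PermissionDenied as e:
    print("blocked:", e)
```

The design choice worth noting is the allowlist: a compromised agent can only abuse the actions it was explicitly granted, rather than everything the host user can do.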

"The risk isn't that these systems are thinking for themselves," Turner said. "It's that we're giving them access to everything."

"It's like handing your laptop to a stranger on the street and hoping nothing bad happens… Any of the communications… on that device… are going to be interfaces from untrusted third parties that can… take certain actions."

A compromised agent could be instructed to transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate.

Unfortunately, the report reveals a fundamental disconnect between AI adoption and security practices.

Users are being asked to give these agents broad system access, and in many cases, that has already led to data exposure, unintended actions, and loss of control.

In some cases, OpenClaw takes actions beyond what users explicitly instruct, and Microsoft has since advised that it should not be run on standard personal or enterprise devices.

Chinese authorities have restricted its use in office environments due to its tendency for data exposure and broader security risks.

Some OpenClaw vulnerabilities allow hackers to access sensitive data, and it has been used to distribute malware through GitHub repositories.

"Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," Turner said.
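One way to "build in some separation," as Turner suggests, is to keep host secrets out of the agent's reach entirely. The sketch below, with illustrative variable names, launches an untrusted process with a scrubbed environment so API keys and tokens set in the parent shell never reach it.

```python
import os
import subprocess
import sys

# A minimal sketch of environment separation: launch an untrusted
# agent process with an allowlisted environment so host secrets
# (API keys, tokens) are never inherited. Variable names are
# illustrative, not taken from any real deployment.

SAFE_VARS = {"PATH", "HOME", "LANG"}  # explicit allowlist, nothing else

def scrubbed_env():
    return {k: v for k, v in os.environ.items() if k in SAFE_VARS}

def run_isolated(cmd):
    # env=... replaces the inherited environment entirely.
    return subprocess.run(cmd, env=scrubbed_env(),
                          capture_output=True, text=True)

# Demo: a child process cannot read a secret set in the parent.
os.environ["BANK_API_TOKEN"] = "s3cret"
result = run_isolated([sys.executable, "-c",
                       "import os; print(os.environ.get('BANK_API_TOKEN'))"])
print(result.stdout.strip())  # the secret was stripped from the child
```

Environment scrubbing is only one layer; running the agent on a separate machine, VM, or container, as the researchers imply, provides stronger isolation.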



Efosa Udinmwen
Freelance Journalist

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's degree and a PhD in the sciences, which provided him with a solid foundation in analytical thinking.
