AI agents now commit and conceal cybercrimes on their own

Autonomous AI fraud agents steal massive data, hiding their tracks beyond human attribution. (Image credit: Thapana Onphalai via Getty Images)

For several years now, AI has been showing up in fraud as an accelerant. It drafted phishing emails, polished social engineering scripts, helped attackers move faster. The human operator still sat close to every meaningful step.

But that distance is shrinking fast. In September 2025, Anthropic’s Claude Code was used in a cyber-espionage campaign in which AI handled 80 to 90% of tactical operations across roughly 30 targets.

Terence Kwok

Founder of Humanity.

A few months later, reporting on the Mexican government breach described a jailbroken Claude Code setup that Gambit Security said stole more than 150GB of data and exposed roughly 195 million identities.


That’s the real break with the past. We are no longer looking at AI as a helper inside a criminal workflow; we are confronting systems that can carry out large parts of the workflow by themselves.

Cybercrime has changed its shape

Once an agent has tools, context, and permission, cybercrime starts to look like an always-on operation. It can recon targets, write exploits, harvest credentials, move laterally, and package stolen data at machine speed.

It matters because those capabilities are now part of the real threat environment. Attacks by AI-enabled adversaries rose 89% year over year, and autonomous AI adoption is climbing despite security concerns.

That is the setting for the next fraud wave: agents are entering mainstream systems at the same moment attackers are learning how to weaponize them.

Fraud loves scale, repetition, and weak supervision. Agentic systems bring all three. They do not get tired and do not forget the playbook. They can be pointed at thousands of tiny decisions that add up to huge losses.

Attribution is starting to fail

Traditional attribution leans on familiar clues. Investigators compare IP paths, malware families, domains, infrastructure, and other indicators of compromise — even though the field has long known that proxies, false flags, and shared tooling can blur that picture.

Agentic AI makes the problem worse because the operational exhaust isn’t tied neatly to a single human hand anymore. The model can generate fresh code, adapt the sequence of actions, or distribute work across tools and sessions. In the Mexico case, the attacker remains unidentified despite the scale of the theft, aided by AI tools, and that kind of ambiguity should worry every defender.

The point is not that humans disappear; it’s that responsibility gets smeared across prompts, models, tools, delegated permissions, and machine-generated actions. That weakens the old comfort that attribution will eventually catch up. The forensic trail now contains a non-human operator making consequential moves inside the attack chain.

Identity has to travel with the agent

Every meaningful AI action should carry a verifiable cryptographic identity. Once an AI agent is able to act inside a system, those actions should not be anonymous. Each one should be signed, linked to a verifiable identity, and captured in a trustworthy audit trail. Without that, we are asking security teams to govern autonomous behavior that leaves no reliable proof of authorship.
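The shape of such an audit trail can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration: each agent action is signed and chained to the previous record, so tampering or anonymous actions become detectable. A real deployment would use asymmetric keys (for example Ed25519) bound to a verifiable identity; an HMAC key stands in here so the example stays dependency-free, and all names are illustrative.

```python
import hashlib
import hmac
import json
import time

# Placeholder key material; a real system would bind an asymmetric
# keypair to a verified agent identity instead of a shared secret.
AGENT_KEY = b"agent-7f3a-demo-key"

def sign_action(agent_id: str, action: str, prev_hash: str) -> dict:
    """Produce a signed record of one agent action, chained to the log."""
    record = {
        "agent_id": agent_id,
        "action": action,
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # links this record to the log's history
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_action(record: dict) -> bool:
    """Recompute the signature so authorship of the action is provable."""
    body = {k: v for k, v in record.items() if k not in ("signature", "hash")}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

# Build a two-entry chained log: each entry carries the prior entry's hash.
log = [sign_action("agent-7f3a", "read:customer_db", "genesis")]
log.append(sign_action("agent-7f3a", "export:report.csv", log[0]["hash"]))
```

Because each record embeds the hash of its predecessor, rewriting history requires re-signing everything downstream, which is exactly the property an audit trail for autonomous actions needs.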

The idea isn't fringe; it's already here. NIST launched an AI Agent Standards Initiative in February. Its concept paper explicitly calls for identifying agents, linking user identities to delegated actions, logging agent activity, and tracking the provenance of prompts and data inputs.

The market is already telling us why this matters: 68% of organizations cannot clearly distinguish AI agent activity from human activity, even as 73% expect agents to become vital within a year. That’s not a minor governance gap; it’s a direct liability in any environment where fraud, abuse, or data theft can be carried out through an agent.

The hard part is not cryptography, but governance

We already know how to sign and verify digital artifacts. Provenance, integrity, and identity-bound signatures can be made usable at scale. The missing move is extending that discipline from models and software artifacts to the actions agents take after deployment.

That won’t be simple. Standards have to work across model labs, enterprise stacks, open-source tooling, API gateways, agent protocols. Privacy questions are real, too, because auditability cannot become a back door for blanket surveillance.

Still, those are design problems, not excuses for anonymity. What’s missing is an identity verification layer that lets people, institutions, and eventually AI agents prove who they are, what they’re allowed to do, and which credentials can be trusted, without exposing the raw data underneath. Built well, that kind of system gives trust a cryptographic form. It can move across platforms, survive handoffs between systems, and hold up under scrutiny.

Fraud spreads wherever identity management is flimsy, and provenance breaks down. If access, eligibility, and high-risk actions are tied to verifiable credentials, it becomes much harder for a bot, a synthetic identity, or an autonomous agent to pass through systems on empty claims. The action carries history with it. The trust signal does too.
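In practice, tying high-risk actions to verifiable credentials means the gate checks the credential, not the claim. The sketch below is a hypothetical illustration of that check: the issuer registry, scope names, and credential fields are all made up for the example, and a real system would also verify the issuer's cryptographic signature on the credential.

```python
import time

# Illustrative registry of credential issuers we trust; a real system
# would hold issuer public keys and verify signatures, not just names.
TRUSTED_ISSUERS = {"id-provider.example"}

def is_authorized(credential: dict, requested_action: str) -> bool:
    """Allow an action only if the credential comes from a trusted
    issuer, has not expired, and is explicitly scoped to the action."""
    return (
        credential.get("issuer") in TRUSTED_ISSUERS
        and credential.get("expires", 0) > time.time()
        and requested_action in credential.get("scopes", [])
    )

# An agent presenting a scoped credential for one specific action:
cred = {
    "issuer": "id-provider.example",
    "expires": time.time() + 3600,      # valid for one hour
    "scopes": ["export:quarterly_report"],
}
```

The key design choice is that authorization defaults to deny: an empty or missing claim fails every branch of the check, so a bot or synthetic identity presenting "empty claims" never passes.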

AI fraud has crossed a threshold. When an agent can scout, decide, execute, and document the operation, anonymity becomes a structural weakness instead of a convenience.

We need a security model that does more than log what happened after the fact. We need one that can prove who stood behind an action, who delegated it, and whether that identity can be trusted in the first place. In a world of autonomous agents, that is no longer a nice-to-have safeguard; it is the baseline for keeping fraud governable.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit

