AI agents are the new unmanaged endpoints
AI agents are the new unmanaged enterprise security threat
Remember the late 2000s, when personal smartphones started showing up in enterprise environments? Employees were connecting personal devices to corporate Wi-Fi, syncing work email to iPhones and accessing internal systems from hardware the security team had never touched and had no authority over.
Chief Data Strategy Officer at Forcepoint.
Security teams were caught entirely flat-footed. There was no policy. There was no governance. And by the time organizations started scrambling to respond, the devices were already everywhere.
I've spent more than 20 years in this industry, watching the same pattern repeat itself in different forms, across different technologies, with depressing reliability. Shadow IT. Cloud sprawl. Now AI agents.
The script is identical. Unfortunately, the stakes this time are considerably higher.
The 82-to-1 problem
Non-human identities now outnumber human users in enterprise environments by a ratio of 82 to 1, according to Rubrik Zero Labs' Identity Crisis Report published in 2025.
That means that for every employee your IT team has carefully provisioned, there are 82 machine identities operating across your environment. And unlike employees, most of those machine identities were never reviewed by security or attached to an accountable human being who can be questioned when something goes wrong.
When an organization deploys an AI agent, it doesn't create a single identity. It creates a cascade. One per tool the agent connects to, one per API it calls, one per data source it reads from. Those identities accumulate faster than any governance process built for human users can track.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
Many organizations I speak to can’t answer the most basic questions about them: Who authorized them? What permissions do they have? What sensitive data can they reach? What are they doing right now?
The honest answer, more often than not, is that nobody knows.
Why your existing controls won't save you
Agentic AI is a meaningful step beyond the generative AI tools that security teams are still working to govern. Rather than responding to a single prompt, an agent reasons, plans and acts.
It does this across multiple tools, data sources and application integrations, with minimal human involvement at each step. You give the agent a goal, not an instruction, and it determines how to get there.
That autonomy is the key differentiator that makes these systems valuable. It is also precisely what makes the security gap so dangerous.
When a human makes a decision about data — sending a file, querying a database, exporting a report — there is at least some cognitive checkpoint, however imperfect. Agents remove that checkpoint entirely.
They operate at machine speed. The static, rule-based controls organizations have spent years tuning for human behavior were not designed for an entity that can interact with an API, read a cloud storage bucket, summarize email threads and push output to an external service in the time it takes a human to open a browser tab.
The old "block or allow" binary does not work when an AI agent is making hundreds of data decisions per minute. And there is a harder problem underneath that one: prompt injection.
A malicious instruction hidden inside a webpage, document or email that an agent retrieves can cause that agent to execute unauthorized actions, because it interprets the instruction as legitimate. The agent has been manipulated. It doesn't know it. And no alert fired, because nothing triggered a rule.
What a credible response actually looks like
The organizations that navigate this well will be the ones that treat it like the BYOD problem they eventually solved — by establishing governance frameworks before the scale becomes unmanageable.
Three things need to happen.
First, inventory. You cannot govern what you cannot see. Building a complete, accurate picture of the agents operating in your environment — what they're authorized to do, what data they can touch, who provisioned them — is the foundational step. This sounds obvious. Almost nobody has done it.
Second, identity policy has to cover non-human actors explicitly. Access governance frameworks written for humans do not automatically extend to agents. A machine identity operating at admin-level privileges warrants the same scrutiny as a privileged human user. Policy needs to reflect that, not assume it does already.
Third, enforcement needs to adapt dynamically. Because agent behavior changes in real time, static rules written in advance cannot keep pace. Controls need to respond to what an agent is actually doing in context, not just whether its credentials checked out at login.
The window is shorter than it might look
Organizations that build agent governance frameworks now, while adoption is still early and the environment is still mappable, will have a structural advantage.
Those that wait for a significant incident will find themselves doing what security teams always end up doing after an unmanaged technology proliferates without oversight: retrofitting controls onto systems that were never designed to be governed.
We learned that with BYOD and again with cloud. We're going to learn it one more time with AI agents. The only real question is whether we start before or after something goes wrong.
We've featured the best business VPN.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
Chief Data Strategy Officer at Forcepoint.
You must confirm your public display name before commenting
Please logout and then login again, you will then be prompted to enter your display name.