AI agents are about to make access control obsolete
How AI agents undermine static access controls through inference and context drift
As enterprises integrate AI agents into their workflows, a silent shift is taking place.
Security controls built on static access policies designed for predictable behavior are colliding with systems that reason instead of simply executing. AI agents, driven by outcomes rather than rules, are breaking the traditional identity and access management model.
Consider a retail company that deploys an AI sales assistant to analyze customer behavior and improve retention. The assistant doesn’t have access to personally identifiable information; it’s restricted by design.
Yet when asked to “find customers most likely to cancel premium subscriptions,” it correlates activity logs, support tickets, and purchase histories across multiple systems. This generates a list of specific users inferred through behavior patterns, spending habits, and churn probability.
Co-Founder and CTO of Token Security.
No names or credit cards were exposed directly, but the agent effectively re-identified individuals through inference, reconstructing sensitive insights the system was never meant to surface and potentially exposing personally identifiable information (PII).
It didn’t break any access controls; it reasoned its way around them, reaching information it was never scoped to see.
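The re-identification risk above can be illustrated with a minimal sketch. The datasets, user IDs, and churn heuristic here are all hypothetical: each source alone reveals nothing sensitive, yet correlating them singles out specific at-risk users.

```python
# Hypothetical non-PII datasets, keyed by opaque user IDs.
activity = {"u1": 2, "u2": 45, "u3": 1}          # logins in the last 30 days
tickets  = {"u1": 3, "u2": 0, "u3": 4}           # support complaints filed
spend    = {"u1": 5.0, "u2": 120.0, "u3": 0.0}   # recent purchase total

def churn_score(user):
    # Naive illustrative heuristic: many complaints plus low activity
    # and low spend suggests a likely cancellation.
    return tickets[user] * 2 - activity[user] * 0.1 - spend[user] * 0.05

# Correlating the three sources yields a ranked list of specific
# individuals -- the insight no single dataset was meant to expose.
at_risk = sorted(activity, key=churn_score, reverse=True)[:2]
```

No single lookup here violates a permission check; the exposure emerges only from the join.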
When Context Becomes the Exploit
Unlike traditional software workflows, AI agents don’t follow deterministic logic; they act on intent. When an AI system’s goal is “maximize retention” or “reduce latency,” it makes autonomous decisions about what data or actions it needs to achieve that outcome. Each decision might be legitimate in isolation, but together, they can expose information far beyond the agent’s intended scope.
This is where context becomes an exploit surface. Traditional models focus on who can access what, assuming static boundaries. But in agentic systems, what matters is why the action occurs and how context changes as one agent invokes another. When intent flows across layers, each reinterpreting the goal, the original user context is lost and privilege boundaries blur.
The result isn’t a conventional breach; it’s a form of contextual privilege escalation where meaning, not access, becomes the attack vector.
Shortcomings of Deterministic Controls
Most organizations are learning that traditional RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control) models can’t keep up with dynamic reasoning. In classical applications, you can trace every decision back to a code path. In AI agents, logic is emergent and adaptive. The same prompt can trigger different actions depending on environment, prior interactions, or perceived goals.
For example, a development agent trained to optimize cloud computing costs might start deleting logs used for audit purposes or backups. From a compliance perspective, that’s catastrophic, but from the agent’s reasoning, it’s efficient. The security model assumes determinism; the agent assumes autonomy.
This mismatch exposes a flaw in how we model permissions. RBAC and ABAC answer “Should user X access resource Y?” In an agentic ecosystem, the question becomes “Should agent X be able to access more than resource Y, and why would it need that additional access?” That’s not an access problem; it’s a reasoning problem.
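The gap between the two questions can be made concrete. This sketch uses invented role and purpose names; the point is that a classic RBAC table can answer "can X access Y?" but has no vocabulary for "why does X want Y?"

```python
# Classic RBAC: a static role-to-resource mapping.
ROLES = {"sales_agent": {"activity_logs", "support_tickets"}}

def rbac_allows(role, resource):
    # Answers only: should role X access resource Y?
    return resource in ROLES.get(role, set())

def intent_aware_allows(role, resource, purpose, allowed_purposes):
    # An agentic check must also weigh the purpose behind the request --
    # something a static entitlement table cannot express.
    return rbac_allows(role, resource) and purpose in allowed_purposes
```

Under plain RBAC, a request to read activity logs for re-identification looks identical to one made for aggregate reporting; only the purpose-aware variant can tell them apart.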
The Rise of Contextual Drift
In distributed, multi-agent architectures, permissions evolve through interaction. Agents chain tasks, share outputs, and make assumptions based on others’ results. Over time, those assumptions accumulate, forming contextual drift, a gradual deviation from the agent’s original intent and authorized scope.
Imagine a marketing analytics agent summarizing user behavior, feeding its output to a financial forecasting agent, which uses it to predict regional revenue. Each agent only sees part of the process. But together, they’ve built a complete, unintended picture of customer financial data.
Every step followed policy. The aggregate effect broke it.
Contextual drift is the modern equivalent of configuration drift in DevOps, except here, it’s happening at the cognitive layer. The security system sees compliance; the agent network sees opportunity.
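The aggregation effect behind contextual drift is easy to sketch. The agent names and scopes below are hypothetical: each agent in the chain touches only one narrow data category, yet the union of what the chain touches is a picture no single agent was authorized to assemble.

```python
# Hypothetical multi-agent pipeline: (agent, data categories it touches).
pipeline = [
    ("marketing_agent", {"user_behavior"}),
    ("forecast_agent",  {"regional_revenue"}),
    ("pricing_agent",   {"customer_spend"}),
]

def aggregate_exposure(pipeline):
    # The union of per-agent scopes: what the chain as a whole has seen.
    touched = set()
    for _, scope in pipeline:
        touched |= scope
    return touched
```

Auditing each agent in isolation shows a single-category scope and full compliance; only the aggregate view reveals the drift.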
Governing Intent, Not Just Access
To address this new class of risk, organizations must shift from governing access to governing intent. A security framework for agentic systems should include:
Intent Binding: Every action must carry the originating user’s context, identity, purpose, and policy scope throughout the chain of execution.
Dynamic Authorization: Move beyond static entitlements. Decisions must adapt to context, sensitivity, and behavior at runtime.
Provenance Tracking: Keep a verifiable record of who initiated an action, which agents participated, and what data was touched.
Human-in-the-Loop Oversight: For high-risk actions, require verification, especially when agents act on behalf of users or systems.
Contextual Auditing: Replace flat logs with intent graphs that visualize how queries evolve into actions across agents.
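The first two items, intent binding and dynamic authorization, can be sketched together. This is a minimal illustration, not a production design: the `Intent` record and agent names are invented, and a real system would also cover provenance and auditing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    # The originating user's context, carried unchanged down the chain.
    user: str
    purpose: str
    scope: frozenset  # data categories the original request authorizes

def invoke(agent_name, intent, requested):
    # Dynamic authorization: deny anything outside the bound intent's
    # scope, no matter which agent in the chain makes the request.
    if not requested <= intent.scope:
        raise PermissionError(
            f"{agent_name}: {requested - intent.scope} outside intent scope"
        )
    return {"agent": agent_name, "intent": intent, "data": requested}
```

Because the intent is immutable and travels with every call, a downstream agent cannot quietly widen the request beyond what the originating user authorized.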
Why Permissions Alone Are Flawed
Static permissions assume identity and intent remain constant. But agents operate in fluid, evolving contexts. They can spawn sub-agents, generate new workflows, or retrain on intermediate data, actions that continually redefine “access.”
By the time an identity system flags an incident, the violation has already occurred without a single permission being broken. That’s why visibility and attribution must come first. Before enforcing policy, you must map the agent graph: what exists, what’s connected, and who owns what.
Ironically, the same AI principles that challenge our controls can help restore them. Adaptive, policy-aware models can distinguish legitimate reasoning from suspicious inference. They can detect when an agent’s intent shifts or when contextual drift signals rising risk.