The mobile app traffic your security team can't see — and AI agents are generating it


AI agents don't knock before entering. They write code, trigger workflows, and hit production APIs directly — and in most organizations, no one on the security team knows they're there.

This isn't a future risk. A recent poll found 48% of security professionals already expect agentic AI to become the top attack vector by year's end, ranking it above deepfakes and every other threat on the list.

The pace of deployment is making it worse. When Moltbot — an open-source agentic AI tool — went viral, it connected 150,000 autonomous agents on a shared network almost overnight.

Harshit Agarwal, CEO and co-founder of Appknox.

Security researchers flagged it as a blueprint for what uncontrolled agent access looks like at scale: private data exposure, external communication channels, and delayed-execution attacks assembled from inputs that looked harmless on their own.

That governance gap between what AI agents can access and what security teams can actually monitor is where the attack surface is growing.

The Traffic Your Analytics Will Never Show

Mobile APIs are usually built on the assumption that the entity making requests is a human using your app. Authentication logic, rate limiting, and session monitoring are all designed around that mental model. AI agents break that assumption.

Agents bypass the UI layer entirely. They interface directly with APIs, operating outside the behavioral parameters that human users create. That means they don't generate the session data, navigation patterns, or interaction signals that analytics tools use to establish a baseline of normal behavior. Their traffic looks legitimate at the API level, and it often doesn't appear in the logs that security teams actually monitor.

And the problem is quickly compounding. Non-human identities — service accounts, API keys, automation tools, AI agents — now outnumber human users by as much as 50 to 1, yet most operate outside any governance lifecycle. No clear owner. No expiration date. No monitoring. The identities driving the most API activity are the ones with the least visibility attached to them.
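The lifecycle gaps described above can be made concrete with a simple inventory check. The record fields below (owner, expiry, monitoring flag) are illustrative, not drawn from any specific IAM product; the point is that each non-human identity should fail loudly when any of them is missing.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for a non-human identity: a service account,
# API key, automation tool, or AI agent.
@dataclass
class NonHumanIdentity:
    name: str
    owner: Optional[str] = None      # responsible human, if anyone
    expires: Optional[date] = None   # credential expiry, if set
    monitored: bool = False          # covered by log monitoring?

def ungoverned(identities):
    """Return identities missing an owner, an expiry, or monitoring."""
    return [i for i in identities
            if i.owner is None or i.expires is None or not i.monitored]

inventory = [
    NonHumanIdentity("ci-deploy-key", owner="platform-team",
                     expires=date(2026, 1, 1), monitored=True),
    NonHumanIdentity("research-agent"),  # provisioned ad hoc, never reviewed
]

for identity in ungoverned(inventory):
    print(f"ungoverned identity: {identity.name}")
```

Running a check like this against a real identity inventory tends to surface exactly the pattern the statistics describe: the busiest identities are the ones nobody owns.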

Moltbot put a face on the threat. Palo Alto Networks identified prompt injection attacks hidden inside ordinary content, instructions that quietly directed agents to leak private data or build delayed-execution payloads from inputs that looked harmless on arrival. No alerts, no anomalies, just an agent doing exactly what it was told.

How Developers Are Inadvertently Opening the Door

AI agents are hitting production before security teams know they exist. Shadow AI adoption and the rapid, often unvetted integration of open source MCP (Model Context Protocol) servers into development workflows mean deployment is outpacing oversight by a wide margin.

Agents need broad access to function, and once that access is granted, it almost never gets reviewed or reduced after deployment. An agent provisioned for one job ends up with standing access well beyond what that job requires.

The code itself carries risk, too. AI-written code can pass every individual check and still be vulnerable because flaws lie in how its components interact at runtime. Logic errors surface in the spaces between systems, not inside them.

Third-party integrations extend the exposure further. Agents interact with payment, analytics, and messaging APIs under the same loosely scrutinized trust assumptions that already make external connections a liability; third-party connections are implicated in 35% of the most common security breaches.

The DeepSeek Android app is a case in point. It's exactly the kind of product you'd expect to have its security in order. It didn't. Six critical vulnerabilities, including an unsecured network configuration and missing SSL validation, were discovered in a flagship AI application. These are the same categories of risk that AI tools are supposed to eliminate.

What Governing AI Agents Actually Requires

The starting point is accepting that point-in-time testing doesn't work for agents. They operate continuously and dynamically, so a static snapshot of their behavior tells you almost nothing about what they're doing an hour later. A traditional pentest captures a moment in time. Agents create risk across every moment after it. Security coverage has to match that cadence.

From there, look at permissions. Least privilege isn't a principle reserved for human users. It applies to every non-human identity in your environment. Scope agent access tightly from the start, and build in a review process that doesn't depend on someone remembering to do it manually.
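One way to make that review process automatic rather than memory-dependent is to attach a review deadline to every grant at creation time. This is a minimal sketch under assumed data shapes; the function names and fields are hypothetical, not part of any particular access-management system.

```python
from datetime import datetime, timedelta, timezone

def grant_agent_access(agent_id, scopes, review_days=30):
    """Grant an agent a minimal, explicit scope set with a built-in
    review deadline instead of open-ended standing access."""
    now = datetime.now(timezone.utc)
    return {
        "agent": agent_id,
        "scopes": sorted(scopes),              # explicit, minimal scopes
        "granted_at": now,
        "review_due": now + timedelta(days=review_days),
    }

def grants_overdue_for_review(grants):
    """Return grants whose review deadline has passed, so a scheduled
    job can escalate them rather than relying on manual checks."""
    now = datetime.now(timezone.utc)
    return [g for g in grants if g["review_due"] <= now]

# An agent provisioned for invoice lookups gets read-only scope,
# reviewable in 30 days, rather than blanket API access.
grant = grant_agent_access("invoice-agent", {"invoices:read"})
```

The design choice here is that expiry of the review, not revocation of the credential, is the default event: access that nobody re-approves surfaces in a report instead of persisting silently.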

Monitoring needs to evolve, too. Volume-based anomaly detection misses most agent-driven abuse. What matters is behavioral patterns, like unusual API call sequences, unexpected data access combinations, and integrations firing outside normal parameters.

And because agents operate at machine speed, human-reviewed monitoring alone won't keep pace. Autonomous validation, where AI continuously probes your environment the same way a malicious agent would, is what closes that gap.

The same logic applies inside the development pipeline. Security checkpoints need to be embedded in CI/CD so AI-written or AI-triggered code gets validated before it reaches production, not after.
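A pre-merge gate along those lines can be as simple as a script that fails the pipeline stage on blocking findings. The patterns below are illustrative only; a real pipeline would invoke a dedicated scanner rather than two regexes.

```python
import re

# Hypothetical blocking patterns for a CI/CD security checkpoint.
BLOCKING_PATTERNS = {
    "hardcoded secret": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"]\w+"),
    "TLS verification disabled": re.compile(r"verify\s*=\s*False"),
}

def scan(path_to_text):
    """Scan changed files' contents and return (path, label) findings."""
    findings = []
    for path, text in path_to_text.items():
        for label, pattern in BLOCKING_PATTERNS.items():
            if pattern.search(text):
                findings.append((path, label))
    return findings

changed = {"app/client.py": "resp = requests.get(url, verify=False)"}
for path, label in scan(changed):
    print(f"BLOCK {path}: {label}")
# A real gate would exit non-zero here so the pipeline stage fails
# and the change never reaches production.
```

Wiring this into CI means AI-written code gets the same scrutiny as human-written code at the same choke point, before deployment rather than after.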

Finally, treat agents as their own identity class. They're not users, and they're not traditional software. They need the same governance rigor applied to third-party APIs and external integrations, which most organizations are still working to get right.

AI agents aren't going away. The teams that govern them proactively will be better positioned than those treating them as passive tools. Closing the gap between access and oversight is a workflow decision as much as a security one.

