From cloud to Agentic AI: Why security must evolve faster than innovation


Every major technology shift follows a familiar pattern. The promise is clear, adoption accelerates, competitive pressure builds, and security lags behind.

We saw it with the public cloud. A broad, ill-defined concept that meant different things to different organizations, the cloud created both opportunity and anxiety as adoption grew.

Martyn Ditchburn

CTO in Residence at Zscaler.

Today, artificial intelligence is following the same trajectory, only faster, broader, and with far higher stakes. AI is not one technology. It is a wave-based evolution, and misunderstanding those waves is one of the greatest risks businesses face right now.

The Three Waves of AI: Why They Matter for Security

The first wave of AI focused on predictive analytics: data lakes, large-scale pattern recognition, and machine learning operating largely in the background. For many organizations, this adoption happened quietly, without board-level scrutiny. From a security perspective, these systems were primarily a data protection problem: ensuring sensitive information was not leaked or misused.

The second wave, generative AI, changed everything. When tools capable of producing human-like text, code, and imagery entered the public domain, AI became a mainstream conversation overnight. Yet this visibility came at a cost. Generative AI was bundled into a single, overly broad concept of “AI,” masking critical differences in risk profiles and security controls.

Security teams responded predictably by focusing on what was most visible. According to a recent report published by Zscaler, titled ‘The Ripple Effect: A Hallmark of Resilient Cybersecurity’, seven in ten organizations admit they have limited visibility into employees’ use of shadow AI, and 56% believe sensitive data is likely already being exposed through unsanctioned AI tools.

The default response has been to apply tactical controls, often retrofitting existing tools rather than rethinking security from first principles. But it is the third wave, agentic AI, that fundamentally changes the threat landscape.

Agentic AI: When Systems Act, Not Just Assist

Agentic AI systems don’t just analyze or generate content; they act. They connect directly to business software systems, make decisions, and trigger workflows. Increasingly, they do so semi-autonomously, with limited human oversight. This is not a theoretical future.

The survey shows that 42% of organizations are already testing agentic AI, and 34% have deployed it in some form. Critically, however, half of those deployments lack firm governance or security guardrails. This is where traditional security thinking breaks down.

Predictive and generative AI are fundamentally data exchange problems. Agentic AI is a behavioral and systems integrity problem. Once AI agents are allowed to interact with ERP software platforms, financial systems, logistics workflows, or customer environments, the blast radius of compromise expands dramatically.

The parallels with earlier internet evolution are striking. Static websites gave way to dynamic, database-driven applications. Suddenly, SQL injection became a dominant threat. Automation exposed new attack paths. Each architectural shift introduced risks security teams were not yet equipped to manage. Agentic AI represents a similar inflection point.

The Blind Spot: Internal Control vs. External Reality

One of the most concerning findings in the Ripple Effect research is not a lack of investment but misplaced confidence. Nine out of ten organizations increased cyber resilience spending in the past year, and 96% updated their resilience strategy in response to external pressures.

Yet, 61% admit those strategies remain too inward-looking. In other words, organizations believe they are secure because they control what happens inside their own walls, while overlooking the expanding ecosystem of partners, platforms, and AI-driven supply chains beyond them.

This blind spot is especially dangerous as agentic AI begins to operate across organizational boundaries. Today’s “internal” AI quickly becomes tomorrow’s interconnected supply-chain automation. Retail, logistics, and manufacturing will likely lead this shift as companies pursue sustainability goals, just-in-time production, and AI-optimized fulfillment.

The moment agentic systems start handing work off between organizations, the attack surface multiplies. Security failures will no longer be isolated incidents. They will ripple outward.

Defending Against Evolving AI Threats: A Shift in Mindset

Defending against AI-driven threats does not require abandoning existing security principles, but it does demand evolving them. Many of the guardrails required to secure agentic AI are evolutions of the controls already used to manage human users. The primary difference is the speed, scale, and sustained nature of agent activity.

In practice, AI agents should be treated like human users from a security perspective, governed by Zero Trust controls. That means issuing identities, defining least-privilege access, establishing behavioral baselines, and continuously monitoring for anomalies. If an agent suddenly starts interacting with systems outside its defined purpose, that deviation should be as visible, and as actionable, as suspicious human behavior.
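The controls described above can be illustrated in miniature. The following is a hypothetical sketch, not any vendor's implementation: each agent gets an identity with a least-privilege allowlist, every access request is logged against it, and denied requests surface as anomalies for review. All names (AgentIdentity, invoice-bot-01, system labels) are invented for illustration.

```python
# Sketch (hypothetical names): an AI agent as a first-class identity
# with least-privilege access and a simple behavioral audit trail.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_systems: frozenset          # least-privilege allowlist
    access_log: list = field(default_factory=list)

    def request_access(self, system: str) -> bool:
        # Default-deny: only systems on the allowlist are permitted.
        permitted = system in self.allowed_systems
        self.access_log.append((system, permitted))
        return permitted

    def anomalies(self) -> list:
        # Any denied request is a deviation from the agent's defined
        # purpose and should be as actionable as suspicious human behavior.
        return [system for system, ok in self.access_log if not ok]

invoice_bot = AgentIdentity("invoice-bot-01", frozenset({"erp", "invoicing"}))
invoice_bot.request_access("invoicing")   # within scope: allowed
invoice_bot.request_access("payroll")     # outside scope: denied and logged
```

In a real deployment the allowlist and log would live in an identity provider and a SIEM respectively; the point is that the agent's scope is declared up front and every deviation is recorded.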

Segmentation becomes critical, not as an abstract architectural ideal, but as a practical way to limit blast radius. Without it, compromised agents can move laterally at machine speed. And perhaps most importantly, organizations must stop treating AI security as a bolt-on. More than half (52%) of IT leaders say their current security systems cannot defend against today’s advanced threats.

If organizations are struggling against current threats, how are they expected to handle emerging ones like agentic AI and quantum computing?

From Reactive Security to Resilience by Design

The core lesson from both cloud adoption and AI evolution is this: reactive security does not scale. The pace of innovation now consistently outstrips governance, legislation, and procurement cycles. Waiting for frameworks to mature or for incidents to force action is no longer viable. Resilience must be designed in from the outset, not retrofitted after disruption.

This means shifting focus from point solutions to architectural agility. Organizations must build security models that adapt as AI capabilities evolve, rather than breaking every time they do. AI is not slowing down. Agentic systems will only become more capable, connected, and autonomous. Organizations that continue to see AI security as a niche or future problem will repeat the mistakes of the cloud era.

This time, however, consequences will spread faster and further. The question is no longer whether AI will reshape the threat landscape. It already has. The real question is whether businesses are prepared to defend against it before the ripple effects reach them.



