Agentic AI: cybersecurity’s friend or foe?


The convergence of advanced machine learning, automation and generative AI is rapidly changing the cyber threat landscape as we know it.

From rule-based systems to machine learning and large language models (LLMs), AI continues to expand its capabilities and influence at every step.

Now, we are witnessing the emergence of a new era in cybersecurity driven by the development of Agentic AI: systems that can learn, make decisions and act autonomously with little human intervention.

Unlike generative AI, which reacts to inputs to create outputs, Agentic AI can not only understand and execute but also infer and adapt independently, keeping processes running to achieve objectives quickly.

Unfortunately, its speed and potential to scale have also made it a highly desirable technology for adversaries - not just a new tool, but a teammate.

Karin Lagziel

Director of Client Leadership, North America at Sygnia Consulting.

AI’s ability to process text, images, and audio simultaneously - now combined with real-time intelligent decision-making - will only further expand the attack surface, introducing new risks around access control, data leakage, and unintended exposure of sensitive information.

For businesses, the impact goes beyond financial loss to eroded trust, reputational damage, and regulatory consequences.

Without clear frameworks or a deep organizational understanding of how these agents handle and store data, even well-intentioned deployments can lead to serious security gaps.

Organizations must take heed and find ways to leverage Agentic AI in equal measure, or risk losing the ability to operate and defend their most valuable assets.

Are we looking at the new adversary?

We are seeing a fast-emerging wave of Agentic AI and AI agents, acting independently, collaborating with other AI agents to execute fraud at scale, and even running cyber-attacks:

1. A Shift Toward Autonomous Exploitation: While no illicit ecosystem is fully AI-dominated yet, that shift is rapidly accelerating.

Autonomous agents are increasingly capable of interfacing with real-world systems like email platforms, databases, and cryptocurrency wallets, acting independently and optimizing for financial gain, data access, or influence operations.

2. Multi-Agent Fraud Ecosystems: AI agents are already capable of automating phishing, impersonation campaigns, zero-day social engineering attacks and scam content by scraping public data and creating links between entities using social graph modeling. Looking forward, the concept of multi-agent fraud ecosystems is emerging as a highly plausible evolution.

One AI agent gathers intelligence and crafts phishing content, another agent executes the credential theft or account compromise, and then a third agent launders the funds. These AI agents are designed to mimic human behavior so precisely that they can bypass biometric risk scoring and slip past fraud detection systems undetected.

3. AI-Enhanced Credential Stuffing is Emerging: Computer Using Agents (CUAs), such as those built on platforms like OpenAI Operator, are now capable of mimicking human behavior (e.g. typing patterns, mouse movements, timing) to better evade fraud detection and CAPTCHA systems.

4. AI-Augmented Fraud Kits and the Rise of PhaaS/FaaS with MFA Bypass Capabilities: Adversaries are increasingly offering Phishing-as-a-Service (PhaaS) and Fraud-as-a-Service (FaaS) kits that include capabilities to bypass multi-factor authentication (MFA) using advanced adversary-in-the-middle (AiTM) techniques. Kits like EvilProxy, Tycoon 2FA, and Mamba are easy to purchase and are often hosted and sold on Telegram and dark web marketplaces.
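One way defenders push back on human-mimicking agents is by looking for statistical tells in input behavior. The sketch below is purely illustrative - the threshold and the idea of using a coefficient of variation on keystroke gaps are hypothetical simplifications, not any vendor's actual detection logic, and modern CUAs deliberately randomize timing to defeat exactly this kind of naive check:

```python
import statistics

def looks_automated(keystroke_gaps_ms, min_cv=0.25):
    """Flag a session whose inter-keystroke gaps are suspiciously uniform.

    Human typing is bursty, so the coefficient of variation
    (stdev / mean) of the gaps is typically well above this
    hypothetical threshold; naively scripted input is near-constant.
    """
    if len(keystroke_gaps_ms) < 5:
        return False  # too little signal to judge either way
    mean = statistics.mean(keystroke_gaps_ms)
    cv = statistics.stdev(keystroke_gaps_ms) / mean
    return cv < min_cv

# A bot replaying keys every ~100 ms vs. bursty human timing
bot_session = [100, 101, 99, 100, 100, 101]
human_session = [80, 240, 95, 400, 60, 180]
```

Real fraud detection systems combine many such signals (mouse dynamics, device fingerprints, session history) in learned models; a single timing heuristic like this is trivially evaded, which is precisely why AI agents that randomize their behavior are so dangerous.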

Research indicates that one in four CISOs has experienced an AI-generated attack in the past year alone, and the emergence of Agentic AI is only set to elevate threat actors in executing mass-scale attacks for exponential profitability.

How organizations need to flip the script with Agentic AI

Agentic AI doesn’t have to be the ‘enemy’. Security teams can also leverage Agentic AI to close the gap between threat actors and defenders while offloading some of their most burdensome tasks.

As they operate proactively and predictively, AI agents will eventually have the ability to anticipate threat actor behavior, launch decoys, and isolate potential threats before they reach an organization's most valuable assets.

When used as an extension of the in-house security team, AI agents can be used for threat-hunting, detecting vulnerabilities and anomalies in behavioral patterns faster and more cost-effectively - reducing days and hours to minutes and seconds.

Since Agentic AI has the capabilities to learn, adapt and predict, agents can be deployed to change defense strategies in real-time and even support ‘red teaming’ exercises to seek potential vulnerabilities within the organization, before a breach occurs, to help improve overall security posture.

For example, with its speed, an AI agent could be deployed as a security analyst for specific tasks - scanning hundreds of vulnerabilities in a single day, staying current on the latest threats from police and government breach reports, and making changes to reinforce cyber protection.

Not only would the agent reduce fatigue by automating monitoring, triage, and response, but it would also empower the rest of the team to focus on more strategic business objectives, such as R&D.

Agentic AI could also support compliance and regulation – ensuring an organization keeps up to date with requirements and flagging where it risks falling out of line with the law.

Implementing Agentic AI safely and strategically

While AI adoption in security operation centers is accelerating, with 67% of organizations deploying AI agents within the year, securing AI agents (37%) and governing employee use of AI tools (36%) have emerged as top concerns among CISOs.

With AI agents working autonomously and deeply embedded into IT systems, the guardrails for securing Agentic AI are yet to mature, and there is a natural risk that they could become the primary target for threat actors.

Misalignment and reasoning flaws could also result in AI agents misconstruing information and independently taking the wrong actions. It is no surprise that human-in-the-loop management and ‘sandboxing’ - isolating processes or programs in controlled environments - will be fundamental to Agentic AI implementations.
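To make the human-in-the-loop idea concrete, here is a minimal sketch of an approval gate between an agent's proposed actions and their execution. The action names, risk tiers, and return strings are all hypothetical illustrations, not taken from any specific product:

```python
# Hypothetical risk tiers: actions the agent may never take unilaterally.
HIGH_RISK = {"isolate_host", "delete_data", "rotate_credentials"}

def execute(action, approver=None):
    """Run low-risk actions autonomously; escalate high-risk ones.

    `approver` stands in for a human reviewer: a callable that
    returns True to approve. Without approval, a high-risk action
    is queued for review rather than executed.
    """
    if action in HIGH_RISK:
        if approver is None or not approver(action):
            return f"queued for human review: {action}"
    return f"executed: {action}"
```

The design choice here is that the default path for a high-risk action is to stop and escalate: the agent stays fast on routine work, but a human defines - and can veto - the actions with real blast radius.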

Cautious approach

At Sygnia, we recommend taking a cautious and controlled approach to Agentic AI – used as an enhancement rather than a replacement of security expertise. Adopting a hybrid approach where security teams are regularly vetting and setting parameters of control will be key to successful deployments:

• Take a risk-based strategy – start with a gradual adoption of AI agents in low-risk scenarios and environments. This way, teams can observe the responses of the AI agent before allowing full autonomy and enhance detection capabilities using behavioral analytics and proactive deception.

• Define clear and concise roles and governance – every AI agent should have a clear set of responsibilities, including rules of use, escalation protocols, and ethical boundaries to adhere to. Frameworks should include operational and regulatory accountability.

• Ensure an infrastructure of high-quality data – AI agents must be provided with accurate, current and structured data while maintaining employee and customer privacy and reducing potential bias.

• Create human-machine integration – teams must collaborate with AI agents to define the human intervention points and ensure documentation and understanding of AI decisions to gain full transparency and the option to override actions. Combine human analysts with AI agents to form adaptive, collaborative defense units.

• Build multi-agent collaborative frameworks – Defenders can design and implement frameworks where one agent generates outcomes, and another provides constructive feedback to create a feedback loop that improves overall performance and resilience.
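The generator-and-critic pattern in the last bullet can be sketched in a few lines. In this illustrative toy, both "agents" are plain functions standing in for real LLM-driven agents, and the detection rule, scoring, and data are all hypothetical:

```python
def generate(threshold):
    # Generator agent: proposes a candidate detection rule.
    return {"rule": "failed_logins", "threshold": threshold}

def critique(rule, attack_bursts, benign_bursts):
    """Critic agent: reward catching attacks, penalize false positives."""
    caught = sum(b >= rule["threshold"] for b in attack_bursts)
    false_pos = sum(b >= rule["threshold"] for b in benign_bursts)
    return caught - false_pos

def refine(attack_bursts, benign_bursts, candidates=range(1, 20)):
    # The feedback loop: score every proposal, keep the best rule.
    return max((generate(t) for t in candidates),
               key=lambda r: critique(r, attack_bursts, benign_bursts))
```

In a real deployment the critic's feedback would shape the generator's next proposal over many rounds, but the core loop - propose, score, select - is the same.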

Cybersecurity revolution

Cybersecurity is set to be revolutionized by Agentic AI in more ways than one. According to Gartner, by 2027, it will reduce the time to exploit compromised accounts by as much as 50% - but on the flipside, this also shows its potential to better support security teams in reducing the breach exposure time at the same rate.

With the combined experience of in-house or outsourced security and incident response teams, strategic frameworks and approaches to Agentic AI implementation, we could well see cybersecurity delivering our most intelligent and proactive defenses yet.

How we navigate the use of Agentic AI will require rigorous design and continuous testing, thoughtful roadmaps to integration and constant vetting. CISOs must begin redefining their cyber defense strategies to be resilient against the emerging threats of tomorrow.

One thing is clear: In the face of uncertainty, we now have a transformative power that can change the rules of cybersecurity as we know it, but only if we lean into its strengths.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

