How AI's evolution is redefining risks
AI for productivity, defense, and as an attack surface
AI tools have long been a double-edged sword, used by attackers and defenders alike.
Recently, however, a third edge has emerged: as AI becomes increasingly embedded within organizations as a tool, it is also an attack surface, one that cybercriminals will look to exploit and that organizations must strive to protect.
Principal Advisory Consultant at Orange Cyberdefense.
At first glance, it may appear that this has tipped the AI scales in favor of attackers. AI has industrialized the cybercrime landscape, making attacks both more efficient and easier to scale.
And now it is no longer just a weapon but a new attack vector. However, this same efficiency can be harnessed to power defenses against cyberattacks, helping to protect organizations.
A new frontier of AI-enhanced attacks
While AI offers immense potential for innovation, it has also been adopted as a powerful tool by cybercriminals to execute more sophisticated attacks. Threat actors like Storm-0817, for instance, actively use AI to assist in malware development and social media scraping.
Groups like the Black Basta collective have also used AI to craft emails in multiple languages, thereby expanding their global reach. OpenAI recently disrupted dozens of malicious operations that were misusing its models for malware creation, phishing, and disinformation.
While most cybercriminal groups still seem to be using AI as more of an assistive tool at this stage, a future of fully automated cyber attacks is growing increasingly possible.
In November of last year, Anthropic disrupted the first reported AI-orchestrated cyber espionage campaign, during which its agentic AI tool Claude Code was manipulated to conduct automated reconnaissance and intrusion attempts against global targets.
It is highly likely that we will see more attacks like this in the coming months, as attackers gain skill and confidence in using AI.
Two edges becomes three
The third edge represents a shift in AI, away from being just a weapon or a shield and toward becoming a handle that attackers can grip to steer an organization's own IT infrastructure against itself, whether by exploiting the plugins that connect AI tools to enterprise data or by 'hijacking' an AI assistant outright. As agentic AI increasingly becomes the norm, we will see this more and more.
This can be seen in the 2025 compromise of the “Drift” AI module linked to Salesloft, which resulted in the theft of Salesforce data from several hundred organizations, including multiple security vendors.
Another example is the recent “EchoLeak” campaign against Microsoft 365 Copilot, which revealed how a carefully crafted email could deliver malicious instructions to an embedded AI assistant, leading to silent data exfiltration.
Finally, this third edge to AI has also been sharpened by the growing problem of Shadow AI, where employees use unauthorized AI tools, creating a ‘leaky bucket’ where sensitive corporate information is sometimes fed into public models.
AI’s neutrality: defense vs offense
Crucially, organizations must not shy away from AI simply because it is an attack vector. AI as a technology offers significant efficiency benefits to organizations across sectors, and so the answer isn’t to avoid it but to protect AI tools and systems properly.
The best way to balance AI risk with business potential is to take a security-first, human-centric approach: put people in control while using AI to support decision-making. This 'Secure AI' approach means a system that is transparent, explainable, aligned with regulations, and tailored to each organization's unique needs and ambitions.
The silver lining is AI’s own neutrality; the very same algorithms that power sophisticated cyber attacks can also be used to support modern defense systems. For instance, AI can streamline threat detection, incident response, and risk management.
Where traditional detection methods fall short, defensive AI can identify 'beaconing' behavior (malware calling home to its command-and-control server at regular intervals) through pattern recognition. Anomalies are raised to security teams through real-time notifications, enabling prompt investigation and response.
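To make the idea concrete, here is a minimal sketch of the statistical intuition behind beaconing detection: automated callbacks tend to arrive at near-constant intervals, while human-driven traffic is irregular. The 0.1 threshold and the minimum event count are illustrative assumptions, not values drawn from any real product.

```python
from statistics import mean, stdev

def beaconing_score(timestamps, min_events=6):
    """Coefficient of variation of inter-arrival times between connections.

    Values near 0 suggest machine-like, regular callbacks; human-driven
    traffic tends to be far more irregular. Returns None when there are
    too few events to judge.
    """
    if len(timestamps) < min_events:
        return None
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        return 0.0
    return stdev(gaps) / avg

def looks_like_beacon(timestamps, threshold=0.1):
    """Flag a connection series whose timing is suspiciously regular."""
    score = beaconing_score(timestamps)
    return score is not None and score < threshold

# Callbacks roughly every 60 seconds with small jitter: likely a beacon.
regular = [0, 60, 121, 180, 241, 300, 360]
# Bursty, human-like browsing: irregular gaps, not flagged.
bursty = [0, 3, 4, 95, 97, 400, 402]
```

Real detection systems layer many more signals (payload sizes, destination reputation, jitter modeling) on top of timing regularity; this only illustrates the pattern-recognition core.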
Overall, this supports teams with the more routine elements of system security, including documenting security intelligence and event information, and analyzing potentially harmful emails and malicious files.
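As a toy illustration of that routine email triage, the sketch below applies two simple heuristics: known phishing phrases and links that point at raw IP addresses. The phrase list, function name, and flagging rules are assumptions for illustration; production tools use far richer feature sets and trained models.

```python
import re

# Illustrative heuristics only; these phrases and rules are assumptions.
SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent action required",
    "password expired",
)
# Links to a bare IP address rather than a domain are a classic phishing tell.
IP_URL = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}")

def triage_email(subject, body):
    """Return the list of reasons an email was flagged (empty = no flags)."""
    reasons = []
    text = (subject + " " + body).lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in text:
            reasons.append("suspicious phrase: " + phrase)
    if IP_URL.search(body):
        reasons.append("link to a raw IP address")
    return reasons
```

An email titled "Urgent action required" asking the reader to "verify your account" at `http://192.168.1.5/login` would trip all three rules, while an ordinary message returns an empty list and passes through untouched.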
Machine learning can also be used in autonomous threat detection and response programs.
The myth of the golden ticket
AI, like any tool, is prone to misuse and can be poisoned or hijacked; the intention of the user largely dictates the risk-reward ratio. It is no 'golden ticket' in cybersecurity.
Defenders must not only understand and be trained in testing AI systems and their security, but also remain in decision-making positions, executing what AI cannot adequately do on its own.
In an era of industrialized cybercrime, success won’t be found in the AI buzz but in how well we blunt the third edge before it is turned against us.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit