The AI Triple Threat: mitigating the dangers of AI adoption with identity security
AI accelerates cyber risk and demands identity-first defense

A succession of recent high-profile breaches has shown that the UK remains vulnerable to ever-more advanced cyber threats. This exposure is intensifying as artificial intelligence becomes increasingly embedded in everyday business operations. AI tools have become essential for organizations seeking to deliver value and maintain competitiveness. Yet their benefits also bring risks that far too many organizations have yet to fully mitigate.
CyberArk’s latest research identifies AI as a complex “triple threat”. It is being leveraged as an attack vector, deployed in defense, and, perhaps most worryingly, it is creating significant new security gaps. In light of this evolving threat landscape, organizations must position identity security at the heart of their AI strategies if they wish to build future resilience.
EMEA Technical Director, CyberArk.
AI: Same threats, new problems
AI has raised the bar for traditional attack methods. Phishing, which remains the most common entry point for identity breaches, has evolved beyond poorly worded emails to sophisticated scams that use AI-generated deepfakes, cloned voices and authentic-looking messages.
Nearly 70% of UK organizations fell victim to successful phishing attacks last year, with more than a third reporting multiple incidents. This shows that even robust training and technical safeguards can be circumvented when attackers use AI to mimic trusted contacts and exploit human psychology.
It is no longer enough to assume that conventional perimeter defenses can stop such threats. Organizations must adapt by layering in stronger identity verification processes and building a culture where suspicious activity is flagged and investigated without hesitation.
Using AI in defense
While AI is strengthening attackers’ capabilities, it is also transforming how defenders operate. Nearly nine in ten UK organizations now use AI and large language models to monitor network behavior, identify emerging threats and automate repetitive tasks that previously consumed hours of manual effort. In many security operations centers, AI has become an essential force multiplier that allows small teams to handle a vast and growing workload.
Almost half of organizations expect AI to be the biggest driver of cybersecurity spending in the coming year. This reflects a growing recognition that human analysts alone cannot keep up with the scale and speed of modern attacks. However, AI-powered defense must be deployed responsibly.
Over-reliance without sufficient human oversight can lead to blind spots and false confidence. Security teams must ensure AI tools are trained on high-quality data, tested rigorously, and reviewed regularly to avoid drift or unexpected bias.
AI is broadening the scope of attacks
The third element of the triple threat is the rapid growth in machine identities and AI agents. As employees embrace new AI tools to boost productivity, the number of non-human accounts accessing critical data has surged, now outnumbering human users by a ratio of 100 to one.
Many of these machine identities have elevated privileges but operate with minimal governance. Weak credentials, shared secrets and inconsistent lifecycle management create opportunities for attackers to compromise systems with little resistance.
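The difference between a long-lived shared secret and a short-lived, per-identity credential can be sketched in a few lines. This is a minimal illustration under assumed names (`issue_token`, `verify_token`, the demo signing key), not a description of any particular vendor's product:

```python
import hashlib
import hmac
import time

# Hypothetical signing key; a real system would fetch this from a managed vault.
SIGNING_KEY = b"demo-signing-key"

def issue_token(machine_id: str, ttl_seconds: int = 300, now: float = None) -> str:
    """Mint a short-lived credential bound to a single machine identity."""
    now = time.time() if now is None else now
    payload = f"{machine_id}:{int(now + ttl_seconds)}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str, now: float = None) -> bool:
    """Reject credentials that are tampered with or past their expiry."""
    now = time.time() if now is None else now
    machine_id, expiry, sig = token.rsplit(":", 2)
    payload = f"{machine_id}:{expiry}"
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return now < int(expiry)

token = issue_token("build-agent-42", ttl_seconds=300, now=1000.0)
print(verify_token(token, now=1100.0))  # True: still inside the 5-minute window
print(verify_token(token, now=2000.0))  # False: expired, access denied
```

Because each credential names one identity and expires on its own, a leaked token has a bounded blast radius, unlike a shared secret that remains valid until someone remembers to rotate it.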
Shadow AI is compounding this challenge. Research indicates that over a third of employees admit to using unauthorized AI applications, often to automate tasks or generate content quickly. While the productivity gains are real, the security consequences are significant. Unapproved tools can process confidential data without proper safeguards, leaving organizations exposed to data leaks, regulatory non-compliance and reputational damage.
Addressing this risk
Doing so requires more than technical controls alone. Organizations should establish clear policies on acceptable AI use, educate staff on the risks of bypassing security, and provide approved, secure alternatives that meet business needs without creating hidden vulnerabilities.
Positioning identity security at the heart of digital strategy
Securing AI-driven enterprises requires embedding identity security at every layer of an organization's digital strategy. That means ensuring real-time visibility of all identities (human, machine or AI agent), applying least privilege consistently, and continuously monitoring for unusual access behavior that may signal a breach.
Forward-thinking organizations are already updating their access and identity management frameworks to meet AI's distinct demands. This entails adopting just-in-time access for machine identities, monitoring for privilege escalation, and treating all AI agents with the same scrutiny as human accounts.
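The just-in-time principle mentioned above can be illustrated with a small sketch: elevated privileges are granted only for a bounded window and lapse automatically, so standing access never accumulates. The `JitAccessStore` class and its method names are illustrative assumptions, not a reference to any specific product:

```python
import time

class JitAccessStore:
    """Grants elevated privileges with a time-to-live; no standing access."""

    def __init__(self):
        # Maps (identity, privilege) to an expiry timestamp.
        self._grants = {}

    def grant(self, identity: str, privilege: str, ttl_seconds: int, now: float = None) -> None:
        """Approve a privilege for a bounded window only."""
        now = time.time() if now is None else now
        self._grants[(identity, privilege)] = now + ttl_seconds

    def is_allowed(self, identity: str, privilege: str, now: float = None) -> bool:
        """Check access; expired grants are lazily revoked on lookup."""
        now = time.time() if now is None else now
        expiry = self._grants.get((identity, privilege))
        if expiry is None or now >= expiry:
            self._grants.pop((identity, privilege), None)
            return False
        return True

store = JitAccessStore()
store.grant("deploy-bot", "prod:write", ttl_seconds=600, now=0.0)
print(store.is_allowed("deploy-bot", "prod:write", now=300.0))  # True: inside window
print(store.is_allowed("deploy-bot", "prod:write", now=700.0))  # False: expired
print(store.is_allowed("ai-agent-7", "prod:write", now=300.0))  # False: never granted
```

The same check applies uniformly to human accounts, machine identities and AI agents, which is the scrutiny the paragraph above calls for.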
AI offers tremendous value for organizations that embrace it responsibly, but without robust identity security, that value can swiftly become a liability. The businesses that thrive will be those that recognize resilience isn't optional: it's the foundation for long-term growth.
At a time when businesses and their adversaries are both empowered by AI, one principle stands firm: securing AI begins and ends with securing identity.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro