How AI is supercharging social engineering - and what businesses can do about it
AI transforms social engineering and exposes human vulnerability
Artificial intelligence is transforming how cybercriminals manipulate human behavior.
The once tell-tale signs of a phishing email, such as awkward phrasing, generic greetings, and clumsy formatting, are being replaced by polished, contextually aware messages crafted by large language models.
Deepfake technology can now clone a CEO’s voice and generate a convincing video message within minutes, a technique already being used to defraud organizations of tens of millions of dollars.
In LevelBlue’s Social Engineering and the Human Element report, 59% of organizations say it has become harder for employees to distinguish between real and fake interactions.
Yet only one in five have implemented a comprehensive strategy to educate staff, and just 32% have worked with external cybersecurity training experts over the past year.
Security Research Manager at LevelBlue SpiderLabs.
Meanwhile, adversaries are increasingly blending AI-driven social engineering with supply chain compromises, credential theft, and automated reconnaissance.
Together, these vectors are turning social engineering from a people problem into a systemic business risk.
A growing gap
This gap between awareness and action is widening. While technical controls continue to evolve, human behavior remains the single most exploited vulnerability.
After all, it’s easier to patch IT systems than it is to patch humans. Attackers have learned that it’s often easier to trick a person than to hack a system, and AI gives them the speed and precision to do both.
AI’s new tactical edge
Dynamic vector switching: Threat actors can begin with a benign email, measure engagement (opens, clicks), then pivot within the same thread to deliver a voice or video payload. This agility renders static awareness training less effective.
Persona creation at scale: Using aggregated data from social media and breach dumps, adversaries can build credible digital personas, complete with names, roles, and tone of voice, and use them to infiltrate organizations.
Deepfake escalation: AI-generated audio or video can be inserted mid-conversation: “Sorry, I left my phone in another room - call me on this line,” or “Here’s the updated wire-transfer instruction.” The familiarity of a known voice or face may make employees drop their guard.
Adversarial prompting and prompt chaining: Attackers refine generative AI prompts iteratively: “Make it sound more formal,” or “Include a line about quarterly performance.” Each iteration makes the message more believable and targeted.
These techniques blur what “normal” looks like in digital communication. Even experienced security professionals can find it hard to draw the line between authentic and artificial.
Why the human is still the hinge
Technical defenses such as email filters, zero-trust architecture, and anomaly detection remain essential, but AI-enabled attacks exploit judgment, not code. Every social engineering campaign ultimately relies on a human decision to click, share, approve, or authorize.
Resilient organizations understand that true security involves both locking systems and building judgment into workflows. So, how do you achieve that balance?
1. Executive engagement and AI awareness
AI-driven social engineering should be treated as a business-critical threat. Executives, engineering leaders, and DevOps teams all need visibility of how AI could target APIs, customer journeys, or internal processes.
When the board embeds AI risk into governance alongside scalability and compliance, investment in people rises to match investment in technology.
2. Simulate the AI attack chain
Annual phishing tests no longer reflect today’s threat landscape. Modern red-team exercises should replicate AI-enhanced attacks - chaining together emails, voice prompts, and deepfakes within the same simulation.
Track data points such as when users notice anomalies and how they respond to escalating deception. This helps identify where training or process reinforcement is needed most.
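As a rough sketch of how that tracking might work (all class and function names here are hypothetical, not part of any particular tooling), each user's outcome at each stage of a chained simulation can be recorded and then aggregated to find where deception succeeds most often:

```python
from dataclasses import dataclass, field
from collections import Counter

# Stages of a chained red-team simulation: email lure, then voice, then deepfake video.
STAGES = ("email", "voice", "video")

@dataclass
class SimulationRun:
    """Telemetry for one user across a chained simulation."""
    user: str
    # Maps stage name -> True if the user reported the lure, False if they fell for it.
    outcomes: dict = field(default_factory=dict)

    def record(self, stage: str, reported: bool) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.outcomes[stage] = reported

def weakest_stage(runs: list[SimulationRun]) -> str:
    """Return the stage where the fewest users spotted the deception."""
    reported = Counter({s: 0 for s in STAGES})
    for run in runs:
        for stage, spotted in run.outcomes.items():
            reported[stage] += int(spotted)
    return min(STAGES, key=lambda s: reported[s])
```

In this sketch, if most users catch the email but almost none challenge the deepfake video, `weakest_stage` points training investment at the video stage.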
3. Layer AI detection with human filters
Organizations should combine AI-powered detection engines, such as deepfake detection, voice-anomaly analysis, and behavioral analytics, with structured human verification.
Suspicious content should trigger challenge-response checks or out-of-band confirmations. AI may catch anomalies, but humans provide context and intent. Together, they create a closed loop of defense.
4. External benchmarking and evolutionary training
Threat actors innovate constantly, and defenses must do the same. Partnering with cybersecurity experts for periodic “red-team-as-a-service” assessments helps identify blind spots and update training based on emerging AI tactics.
Continuous, modular learning, refreshed quarterly with live threat data, ensures teams stay aligned with the latest techniques rather than last year’s playbook.
Building human resilience in the age of AI
Generative AI has blurred the boundary between authentic and artificial, but it has also reaffirmed the importance of human judgment. Technology can surface anomalies but only people can decide whether to trust, verify, or act.
The organizations that will stay ahead are those that recognize this interplay: combining AI-driven defenses with a culture that encourages curiosity, verification, and critical thinking.
Cybersecurity is more than a race against technology; it’s a race to strengthen the human element at its core.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro