AI tools are making social engineering attacks even more convincing, and I fear that this is only the beginning
Wallace and Gromit meet deepfake deception in this sharp take on AI-driven scams

Nick Park’s Wallace and Gromit were brought crashing into the 21st century in December 2024 with their latest adventure, Vengeance Most Fowl. The film challenges our growing dependence on smart technology through the story of a robotic garden gnome, built by Wallace to support his gardening business, which is then hacked by the Kubrick-esque Feathers McGraw for his own nefarious purposes.
One of the more interesting but less-discussed moments in the film shows Gromit cautiously entering his house and being greeted by what he thinks is Wallace’s reassuring voice, only to be confronted with Feathers and the robotic gnome.
Technology’s ability to mimic linguistic patterns, to clone a person’s voice, and to understand and respond to questions has developed dramatically in the last few years.
This has not gone unnoticed by the world’s criminals and scammers, with the result that social engineering attacks are not only on the rise but are more sophisticated and targeted than ever.
What are social engineering attacks?
Cybercriminal social engineering manipulates a target by creating a false narrative that exploits the victim’s vulnerability (whether that is their willingness to trust people, their financial worries or their emotional insecurity). The result is that the victim unwittingly but willingly hands over money and/or information to the perpetrator.
Most social engineering scams consist of the following stages: (1) making contact with the victim (“the means”), (2) building a false narrative, usually with a sense of urgency or a time limit (“the lie”), and (3) persuading the target to take the suggested action, such as transferring money or providing personal details (“the ask”).
Usually, stage 2 (the lie) is where most people spot the scam for what it is, as it is difficult to build and sustain a convincing narrative without slipping up eventually. We have all received text messages, emails or social media messages from people purporting to be our friends, long-lost relations in countries we have never visited, or our banks, asking us to provide personal information, passwords or money.
Historically, such communications were easy to spot, as they bore the hallmarks of a scam: generic greetings and signatures, spelling mistakes, poor or unusual grammar and syntax, inconsistent formatting or suspicious addresses.
Liar, liar, pants on…f-AI-re?
However, the rapidly growing sophistication of generative AI tools means that it is increasingly easy for criminals to craft and sustain plausible false narratives to ensnare their victims: the “lie”, or stage 2, of the social engineering scam. Companies and law enforcement agencies are scrambling to stay ahead of the technological advances and are working hard to predict developments which will be used for social engineering.
One potential use case for generative AI in this area is a dynamic lie system, which would automatically contact and interact with potential victims to earn their trust before moving to stage 3 (the ask). This would be particularly useful for “advance-fee” or “419” scams, which promise the victim a large share of a huge sum of money in return for a small upfront payment that the fraudster claims is needed to release the larger sum.
An AI-based dynamic lie system could automate the first wave of scam emails to discern whether potential victims are likely to “take the bait”. Once the system identifies an engaged individual who appears persuaded by the communication, it can then hand control to a human operator to finish the job.
Another development which has already gained traction is the use of AI to clone human speech and audio to carry out advanced types of voice phishing attacks, known as “vishing”. In the United States, the Federal Trade Commission has warned about scammers using AI voice cloning technology to impersonate family members and con victims into transferring money on the pretext of a family emergency.
Current technologies allow voices to be cloned in a matter of seconds, and there is no doubt that, with advancements in deep learning, these tools will only become more sophisticated. It would appear that this form of social engineering is here to stay.
Do androids dream of electric scams?
“If there’s one job that generative AI can’t steal, it’s con artist.” So said Stephanie Carruthers, Global Lead of Cyber Range and Chief People Hacker at IBM, in 2022. Fast-forward three years and Carruthers has changed her position. Our concerns about AI are no longer limited to its impact on the workforce; they now extend to AI-based bots which can craft social engineering attacks tailored to specific targets. As Carruthers notes, “with very few prompts, an AI model can write a phishing message meant just for me. That’s terrifying.”
Currently, threat actors are using AI like an office intern or trainee, speeding up the basic tasks required to carry out social engineering attacks. Carruthers and her team ran experiments and found that generative AI can write an effective phishing email in five minutes. For a team of humans to write a comparable message takes about 16 hours, with deep research on targets accounting for much of that time.
Furthermore, generative AI can churn out ever more tailored attacks without needing a break and, crucially, without a conscience. Philip K. Dick noted that for his human protagonist, Rick Deckard, “owning and maintaining a fraud had a way of gradually demoralizing one”, but in an increasingly digital criminal underworld, maintaining a fraud has never been easier.
Amelia Clegg, Barrister, BCL Solicitors, and Megan Curzon, Associate, BCL Solicitors.