Sponsored by Trend Micro
5 worrying ways AI is being used by cybercriminals to target millions of victims
It's always better to be prepared

New technology like artificial intelligence, especially generative AI, is helping criminals pursue a range of cybercrimes faster and more efficiently.
This rapid evolution of AI, and technology in general, is enabling new kinds of crime while also making old ones more effective and destructive. Text generation coupled with realistic AI-generated images, voice cloning, and video production are among the key capabilities criminals now use to exploit their victims.
Moreover, these advances in AI expose everyone to potential risks, as this newly generated content is increasingly indistinguishable from human-created material.
Children and the elderly are especially vulnerable to exploitation. Meanwhile, the cybersecurity industry is gearing up for a challenging year, since the threat could extend to critical infrastructure and systems, impacting a wide range of users.
Addressing these challenges will require a comprehensive approach that includes cross-sector collaboration, public awareness, and relying on AI for defense, coupled with cybersecurity best practices.
Below, we look into five worrying ways cybercriminals are using AI to target millions of victims, to inform users and, hopefully, help protect them from malicious actors.
Trend Micro Premium Security Suite plus ScamCheck
Powered by AI, Trend Micro Premium Security Suite with Trend Micro ScamCheck provides complete device security, identity protection, and scam prevention for up to 10 devices.
It works on Windows, Mac, Android, iOS, and Chromebook, so you can secure all of your and your family's devices whether you are at home or on the go.
The security suite includes Maximum Security with 24/7 support, Mobile Security, ID Protection, ID Theft Restoration, a Password Manager, Trend Micro ScamCheck, and a secure VPN for protection on public WiFi.
1. Phishing and social engineering
Phishing attacks have been on the rise ever since ChatGPT was introduced, and are the most common threat powered by AI tools.
In essence, a phishing attack is an attempt by a malicious actor to dupe a user into granting them entry to a targeted system, which the attacker then compromises as fully as possible.
The danger of AI tools in phishing and social engineering attempts is that AI makes them more realistic and personalized. Furthermore, because AI automates most of the process, the scale of the attacks can and often does increase exponentially, all aimed at the weakest link in cybersecurity: the human.
Besides creating more believable messages, attackers can now take advantage of deepfake technology to create images, videos, and even audio that mimic a trusted person (a CEO, CTO, or other higher-up in the company).
This further extends to the creation of fake social media profiles, which can serve as a rapport-building tool, making the entire approach believable and hard to detect.
For instance, in February 2024, an office worker transferred $25 million after being duped by such an attack, and just recently, a woman in France was scammed out of $850,000 through a similar approach.
Both of these cases showcase how believable the attacks can be and how fallible the human factor is in cybersecurity.
2. One-Time Password (OTP) bots
Two-factor authentication (2FA) is a security feature that has become pretty widespread, and most of us use it in some form.
Its popularity has unfortunately led to the development of numerous attempts to hack or bypass it. Most implementations of 2FA rely on one-time passwords (OTPs) sent to the user via text message, voice call, email, push notification, or a chat message from a website.
To obtain this password, attackers often deploy complex multi-stage attacks, with OTP bots being a relatively new and efficient tool for the job.
An OTP bot is essentially a software program that intercepts OTPs by relying on social engineering. The attack usually unfolds through a combination of steps, but in its most basic form it consists of six:
- Credential theft – The attacker gets hold of the victim’s login details, often through phishing, data breaches, or leaked credentials.
- OTP request – When attempting to log in, the attacker triggers an OTP request, which is sent to the victim’s phone as a security measure.
- Deceptive call – The victim receives an automated call from a bot posing as a legitimate source. Following a carefully crafted script, the bot tricks them into thinking they need to provide the OTP.
- Code submission – Without realizing the scam, the victim enters the OTP on their phone while still on the call, believing they are following a standard verification process.
- Attacker interception – The attacker, monitoring everything through a backend dashboard or a Telegram bot, instantly receives the OTP in real time.
- Account takeover – With the stolen OTP, the attacker successfully logs into the victim’s account, bypassing security measures and gaining full control.
Some ways to protect yourself from these attacks: avoid clicking on any links you receive in messages, never enter your OTP no matter how convincing the call may sound, double-check the spelling and HTTPS of the website, and use a security solution that blocks phishing pages.
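To make the "check the spelling and HTTPS" advice concrete, here is a minimal, illustrative Python sketch (not part of any Trend Micro product) that flags links whose scheme is not HTTPS or whose domain closely resembles, but does not match, a site you actually use. The trusted-domain list and the similarity threshold are assumptions for the example only; a dedicated anti-phishing tool does far more.

```python
# Minimal sketch: flag suspicious links before you click them.
# The TRUSTED_DOMAINS list and the 0.8 threshold are illustrative assumptions,
# not a replacement for a dedicated anti-phishing product.
from urllib.parse import urlparse
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"mybank.com", "paypal.com", "google.com"}  # sites you actually use

def check_link(url: str) -> list[str]:
    warnings = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    # 1. A legitimate login or payment page should use HTTPS.
    if parsed.scheme != "https":
        warnings.append("Link does not use HTTPS.")

    # 2. An exact match with a trusted domain is fine; a near-miss is suspicious
    #    (e.g. 'paypa1.com' closely resembles 'paypal.com').
    if host not in TRUSTED_DOMAINS:
        for trusted in TRUSTED_DOMAINS:
            if SequenceMatcher(None, host, trusted).ratio() > 0.8:
                warnings.append(f"'{host}' looks like '{trusted}' but is not it.")
    return warnings

print(check_link("http://paypa1.com/login"))
# ['Link does not use HTTPS.', "'paypa1.com' looks like 'paypal.com' but is not it."]
```

Lookalike domains such as "paypa1.com" score high on similarity to the real domain without matching it exactly, which is precisely the trick phishing pages rely on.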
3. Sophisticated malware
AI tools like ChatGPT (and others) can produce functional code in an instant, which can be used for legitimate tasks but also for building malicious programs.
This means that attackers can generate dozens of variations of malware to try and avoid traditional detection that relies on fixed malware signatures or predictable patterns that such programs exhibit.
An additional complication is that AI now allows attackers to further enhance malware with evasion techniques such as polymorphism, which lets malware constantly change its appearance to avoid detection, and metamorphism, which goes further by rewriting the code itself while retaining its functionality.
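To see why fixed signatures struggle against such mutation, consider this toy Python illustration (the "samples" are harmless made-up byte strings, not real malware): a scanner that matches file hashes misses a sample the moment a single byte changes, which is exactly what polymorphic code does with every copy.

```python
# Toy illustration of why hash-based signatures fail against code that mutates.
# The "samples" here are harmless made-up byte strings, not real malware.
import hashlib

# A signature database might store hashes of known-bad files.
known_bad_hashes = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def signature_scan(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known-bad signature."""
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

original = b"EVIL_PAYLOAD_v1"
mutated  = b"EVIL_PAYLOAD_v1 "  # one added byte: same behaviour in a real attack,
                                # but a completely different hash

print(signature_scan(original))  # True  -> caught
print(signature_scan(mutated))   # False -> slips past a purely hash-based check
```

This is why modern defenses increasingly focus on what a program does rather than what its bytes look like, which is where the AI-powered EDR systems mentioned below come in.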
How far the attackers have gone is best illustrated by WormGPT and FraudGPT, two AI services created by malicious actors and trained on large amounts of malware-related data, equipping them to generate convincing phishing emails and custom malware.
Luckily for users, such threats have also driven the adoption of Endpoint Detection and Response (EDR) systems, AI-powered security solutions designed to help combat these malicious exploits.
4. AI hacking
Beyond hackers using AI tools to enhance their own activities, the wave of new AI-powered hardware being introduced opens up new avenues for attack.
The recent Inside the Mind of a Hacker 2024 report showed that 81% of hardware hackers had encountered a novel vulnerability they had never seen before, with 64% believing there are more vulnerabilities now than before.
AI has supercharged hardware hacking, making attacks like fault injection, side-channel exploits, and firmware tampering more efficient and accessible.
Machine learning algorithms can analyze power consumption and electromagnetic emissions to crack encryption faster (side-channel attack), while AI-driven automation helps attackers refine fault injection techniques with pinpoint accuracy.
Meanwhile, generative AI makes it easier to craft sophisticated malware that can manipulate firmware undetected. All of these attacks are probably just scratching the surface of AI-assisted hacking, which will surely develop at a dizzying pace.
5. Threat democratization
Perhaps the biggest threat from AI comes from the fact that everyone has access to these tools, meaning that even a low-skilled (no-code) person can make use of them.
While this is great for the layperson looking to lighten their workload with AI or perform research faster, it also means that anyone with access to ChatGPT can potentially create a malicious program or a phishing email.
In the past, to perform such attacks, you would have to spend years learning and honing your skills, trying multiple approaches, and having a deep understanding of computer systems, networking, and security.
With AI, you can bypass most of these hurdles by having it generate code or content based on your descriptive inputs. Of course, it cannot replace a seasoned hacker, but it does offer an additional, malicious avenue to those looking to experiment with scams.
Conclusion
The rise in cybercrime spurred by the recent developments in AI is a reminder that technology, though revolutionary, is a double-edged sword.
As AI tools evolve and become more powerful and democratized, cybercriminals will find new ways to utilize them to manipulate users for malicious purposes, from scaling their phishing attempts to bypassing security and even making it easier for the less technically inclined to exploit others.
On the other hand, the same technology that is used by hackers empowers ethical users and companies to fortify their defenses. Security firms are already leveraging AI-powered detection systems, anomaly tracking, and automated threat response to stay ahead of these evolving threats.
We would add that, beyond these measures, the greatest strength lies in empowering the weakest link in the cybersecurity chain (the human) by raising awareness and knowledge of common exploits.
In the end, as AI continues to shape and reshape the digital landscape, both businesses and individuals will have to adapt, stay curious, and remain vigilant to mitigate risks. The battle between cybercriminals and security experts is only intensifying, and in this AI-powered era, staying informed is the first step toward staying protected.
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.