Vulnerability exploitation: The dangers of the open LLM model boom


For a software vendor, telling the world about its latest security vulnerability is always a delicate balancing act. Customers need information quickly, starting with the flaw's severity rating and whether it allows remote exploitation. But they are not the only people listening, which is why care must be taken with the information disclosed. Criminals, too, pay close attention to public alerts, looking for any clue that might help them create a working exploit before the vulnerability is patched.

This is cybersecurity's quiet war, fought every day across dozens of vulnerability disclosures. Attackers want to understand and write exploits for flaws as quickly as possible, while defenders want to prioritize, mitigate and patch them just as fast. Even if the attackers triumph every now and then, good patching routines and threat detection keep the bad guys out most of the time.

Pascal Geenens

Director for Threat Intelligence at Radware.

The dangers of local models

The bad news is that, thanks to developments in AI, this is changing. We are still in the early days of offensive AI techniques and tools, but they are already having a disruptive effect across multiple threat types. Unfortunately, that includes using local or offline generative pre-trained transformer (GPT) models to accelerate and automate exploit creation.

Since DeepSeek released its open, resource-friendly, yet highly competitive and capable model, we are standing at the advent of a potential open-model boom. This movement brings new and evolving risks: criminals can adapt open pre-trained models, easily downloadable across the Internet, and run them locally on modest PCs with GPUs.
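To make that barrier to entry concrete, the sketch below shows roughly how little Python it takes to run an open model on a single consumer GPU using the Hugging Face transformers library. The model identifier is illustrative, and this is a minimal sketch rather than anything specialized; the point is simply that once the weights are on local disk, no hosted service sits between the user and the model.

# A minimal sketch of running an open model locally with Hugging Face
# transformers. The model identifier is illustrative; device_map="auto"
# also assumes the accelerate package is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # illustrative open checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Once downloaded, generation runs entirely offline: no hosted API,
# no usage-policy enforcement, no logging.
prompt = "Explain the difference between a one-day and a zero-day vulnerability."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))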

Operating without the guardrails typically found in their commercial online counterparts, local spinoffs can then be created and fine-tuned on data collected from malicious software research and underground forums. What you end up with are specialized crime AI platforms that can be offered as a subscription service or as the backend of an AI agent system for automating attack campaigns. These weaponized platforms can be designed specifically to make writing malware, or creating exploits based on vulnerability disclosures, a more automated and therefore much faster process.

This modus operandi won't succeed every time, but for criminals success is always a percentages game. Across possibly hundreds of threat actors, exploits could be generated at a scale that dramatically increases the likelihood of eventually producing one that works.

The threat here isn't theoretical. Black hat AI models such as FraudGPT and WolfGPT have been around since 2023, and in April 2024 researchers demonstrated that a single LLM agent backed by GPT-4 could exploit one-day vulnerabilities. Today, an organization might still assume it has 24-48 hours to mitigate or patch a significant vulnerability before the risk of exploits in the wild begins to rise. The advent of local pre-trained models coupled with AI agents for automation is transforming this. Instead of days to patch, organizations are looking at minutes.

Fighting AI with AI

This much is certain: no organization can patch its systems in minutes, at least not using today's processes based on manual decision-making. But let's not panic. Vulnerability exploits written by AI are just the latest incarnation of an unceasing threat evolution. The answer is the same as it always has been: the defenders must evolve, too.

Just as attackers can use AI agents to create exploits quickly, defenders can deploy the same technology to process new vulnerability alerts in real time and rapidly implement whatever security mitigations are required. In many ways, this is the perfect example of how today's defenses could soon become a battle of our AI versus their AI.
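As a rough illustration of what that defensive loop could look like, here is a minimal Python sketch that polls the public NVD CVE feed, triages new disclosures by CVSS score, and hands critical ones to an automated mitigation hook. The apply_mitigation() function and the 9.0 threshold are assumptions standing in for an organization's own tooling; a real system would add rate limiting, richer context about the environment it defends, and an agentic step that drafts the actual mitigation.

# A minimal sketch of an automated vulnerability-triage loop. The NVD
# endpoint is real; apply_mitigation() and the threshold are hypothetical
# placeholders for an organization's own response tooling.
import datetime
import time

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"
CRITICAL_THRESHOLD = 9.0  # illustrative cut-off for "act now"

def apply_mitigation(cve_id: str, score: float) -> None:
    # Hypothetical hook: in practice this might open a ticket, push a
    # WAF rule, or trigger an agentic workflow that plans the mitigation.
    print(f"[MITIGATE] {cve_id} (CVSS {score}) queued for automated response")

def poll_once(window_minutes: int = 15) -> None:
    # Ask NVD for CVEs published in the last polling window.
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(minutes=window_minutes)
    params = {
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
        score = metrics[0]["cvssData"]["baseScore"] if metrics else 0.0
        if score >= CRITICAL_THRESHOLD:
            apply_mitigation(cve["id"], score)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(15 * 60)  # re-poll every 15 minutes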

If attackers have the advantage of time and volume, defenders have the benefit of knowledge. Agentic AI tuned to understand the environment it is defending will always know more about the network it is protecting than the AI probing it. Meanwhile, attacks targeting exploits are not necessarily getting more sophisticated, merely faster and more frequent. It is the speed at which attackers can throw exploits at defenders that is dangerous, not the quality of those exploits. If defenders can match them on this metric, all is not lost.

What we shouldn't do is become alarmed. The fact that attackers look for vulnerabilities is not new, and AI is just the latest technology in a long line that can be put to malicious use. But this capability cuts both ways. Defending against AI-developed exploits will be challenging, but developments such as agentic AI automation will also be our friend.


This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Pascal Geenens

Pascal Geenens is a security researcher and evangelist at Radware.

As Global Threat Intelligence Director for Radware, Pascal provides thought leadership on today's security threat landscape. He brings over two decades of experience in many aspects of information technology and holds a degree in engineering from the Free University of Brussels, specializing in electronics and parallel computing. As lead of the Radware Security Research team, Pascal actively researches IoT malware. He discovered the BrickerBot, JenX and Demonbot botnets, did extensive research on Hajime and the Hadoop YARN attack surface, and closely follows new developments and threats in the IoT space as well as applications of AI in cybersecurity and hacking.
