AI tools have made vulnerability exploitation faster and easier


For many years, security teams used the same basic approach to assess vulnerability risk. They looked at two main factors: how much damage a vulnerability could cause, and how likely it was to be exploited. Industry frameworks like CVSS supported this by scoring impact alongside exploitability factors such as attack complexity.

These scores worked well for a long time because exploit development was difficult. Attackers needed advanced skills, deep technical knowledge, and time.

Ronald Lewis

Director of Cybersecurity Governance at Black Duck.

Security risk managers now need to rethink what “likelihood” really means. The way CVSS estimates likelihood is based on assumptions that no longer match today’s threat environment.

AI-assisted coding tools have changed how quickly and easily vulnerabilities can be turned into real attacks. As a result, many risk scores underestimate how fast attackers can move.

When skill slowed attackers down

In the past, exploiting a vulnerability was not easy. Attackers had to understand how operating systems worked, how memory was handled, and how applications behaved under stress. Even when a vulnerability was publicly documented, writing a working exploit could take weeks. Many attackers did not have the skill or patience to do it.

This skill barrier mattered. It slowed exploitation and limited who could take advantage of a flaw. Even serious vulnerabilities were not always exploited right away. Security teams often had time to patch systems or apply workarounds before attackers could act.

Risk models quietly depended on this delay. Metrics like attack complexity and proof-of-concept availability became useful signals. If a vulnerability was complex and no exploit existed, teams assumed there was time to respond.

What AI changed

Today, vulnerability disclosures still explain how attacks work. They list entry points, required conditions, and expected results. What has changed is how easy it is to act on that information.

AI-assisted coding tools can now turn written descriptions into working exploit code. A person no longer needs deep technical skills to get started. Someone can describe what they want to do, and AI can help generate code, fix errors, and test variations.

Tasks that once took weeks can now take hours or even minutes. Testing and refining exploits is faster and easier. The effort that once slowed attackers down has largely disappeared.

AI is not creating new vulnerabilities. It is removing the human effort that used to limit exploitation.

Why CVSS likelihood no longer tells the full story

CVSS was designed to describe technical details about vulnerabilities. It was not built to predict attacker behavior in a world where exploit generation is fast and cheap. Likelihood scores assumed that attackers needed skill and time to act.

That assumption no longer holds.

A vulnerability may still be complex, but complexity no longer guarantees delay. If AI can help write the exploit, a “high complexity” score does not offer much protection. The same is true for exploit maturity. By the time an exploit is officially confirmed, attackers may already be using it.

In practice, CVSS scores still do a good job describing impact. But they often fall short when it comes to real-world likelihood.

Skill is no longer the main barrier

A related assumption needs to change as well. Security teams often judge risk based on attacker skill. This made sense when only experts could exploit certain flaws.

With AI tools, that gap has narrowed. Many more people can now exploit vulnerabilities. Skill matters less than access and opportunity.

The key question has changed. It is no longer, “Who can exploit this?” It is now, “Is there anything stopping exploitation?”

What really drives exploitation today

In today’s environment, exploitation depends more on conditions than on attacker ability. The most important factors include:

  • Is the system exposed or easy to reach?
  • Are identity and access controls weak?
  • Is the vulnerability clearly documented?
  • Can attackers test and adjust quickly?

When these conditions exist, the absence of a public exploit should not be seen as safety. If a vulnerability is well described, it is often exploitable.
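The conditions above can be folded into a simple triage sketch. This is an illustrative example, not a standard scoring formula: the field names, thresholds, and priority labels below are assumptions chosen for clarity, not part of CVSS or any published framework.

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    """Hypothetical record of the conditions discussed above."""
    cvss_impact: float          # 0.0-10.0, taken from the CVSS base score
    internet_exposed: bool      # is the system exposed or easy to reach?
    weak_access_controls: bool  # are identity and access controls weak?
    well_documented: bool       # is the vulnerability clearly documented?


def exploitation_pressure(v: Vulnerability) -> int:
    """Count how many enabling conditions are present.

    With AI-assisted exploit generation, these conditions matter more
    than whether a public exploit already exists.
    """
    return sum([v.internet_exposed, v.weak_access_controls, v.well_documented])


def priority(v: Vulnerability) -> str:
    """Condition-driven urgency: high impact plus enabling conditions
    is treated as urgent even with no known exploit in the wild."""
    if v.cvss_impact >= 7.0 and exploitation_pressure(v) >= 2:
        return "urgent"
    if exploitation_pressure(v) >= 2:
        return "elevated"
    return "routine"
```

Under this sketch, a high-impact flaw on a reachable system with clear public documentation lands in the urgent bucket before any exploit is confirmed, which is the behavior the argument above calls for.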

This is why the time between disclosure and attack has become so short. AI has collapsed the window between understanding a vulnerability and acting on it.

What this means for leaders

This shift matters at the leadership level.

Executives and risk owners should stop treating CVSS likelihood as a true probability. Instead, they should ask different questions:

  • Are we relying on delay that no longer exists?
  • Are exposed systems being prioritized quickly enough?
  • Are identity controls strong enough to slow attackers down?

AI has changed the speed of risk. Decisions based on old assumptions will move too slowly. Leaders who continue to trust likelihood scores without context may believe they have time when they do not.

Updating defensive practices

CVSS is still useful. It describes impact and technical traits well. But it should not be used alone to decide urgency.

Security teams should focus less on whether an exploit exists and more on whether exploitation is possible. Clear documentation plus exposure should be enough to raise priority.

Threat intelligence should emphasize conditions, not novelty. Risk scoring should reflect how quickly an attacker could act, not how skilled they must be.

The takeaway

AI did not make vulnerabilities more dangerous. It made exploitation easier.

When exploit development is no longer limited by skill, likelihood is driven by exposure and conditions—not expertise. Risk models that ignore this shift will consistently underestimate urgency.

Leaders who adjust their thinking now will respond faster and reduce real-world risk. Those who do not will continue to trust scores that no longer match reality.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
