
AI-created malware is on the rise – here's what your business needs to stay safe
Keeping your company safe in the AI era shouldn't be impossible
AI-created malware isn’t a brand new problem so much as a new kind of acceleration, and one which might well impact your business in the near future.
For years, attackers have reused code, bought tools off the shelf, and iterated quickly on whatever worked – and now generative AI is helping them do more of that work, faster.
From writing and refining scripts, to producing convincing lures, to churning out new variants that are harder to spot, AI is a dream tool for malicious actors.
A phishing email that looks “good enough” at scale can land in far more inboxes, and a working payload can be tested, tweaked, and redeployed in hours rather than days.
The good news is that the fundamentals still hold. The organizations which stay safest tend to be the ones which treat identity as the perimeter, assume endpoints will be targeted, and invest in fast detection and response.
For many teams, this means tightening controls, using tools like Microsoft Defender to improve visibility, and then adding automation.
In other words, AI may be changing the pace of attacks, but it’s also raising the value of doing the basics well, consistently, and at scale.
What is “AI-created malware”?
“AI-created malware” is a slightly slippery phrase, because it covers a range of behaviours, and not all of them look like a chatbot spitting out a perfect virus after a few prompts.
In practice, most of what businesses are seeing, and what security teams worry about next, sits on a spectrum.
At one end is AI as a productivity boost: attackers use generative tools to draft or refine scripts, translate and localize phishing content, troubleshoot errors, and iterate faster.
Further along the spectrum is AI-enabled variation and obfuscation: instead of recycling the same payloads until they get blocked, attackers can generate more unique-looking variants, tweak delivery methods, and experiment with unusual file types or formats to fool detection.
Then there’s the rise of malicious or “uncensored” AI tools marketed for cybercrime use cases. These aren’t necessary for every attacker, but they do help lower the barrier to entry, especially for phishing kits and social engineering.
In this way, the story is as much about volume and accessibility as it is about a handful of sophisticated groups.
What it doesn’t usually mean (or at least not yet) is fully autonomous campaigns where an AI system picks targets, writes a bespoke exploit, deploys the payload, and runs the operation end-to-end without a human in the loop.
The more realistic risk is that the human attackers you already need to worry about can move faster, test more ideas, and scale their operations with less effort.
Why AI malware is on the rise
It’s tempting to blame “AI malware” on a sudden leap in attacker genius, but the reality is more mundane (and, in some ways, more worrying).
Generative tools are fitting neatly into the parts of cybercrime that already scale well: repeatable tasks, rapid iteration, and the kind of volume work that used to require time, patience, and a small team.
The barrier to entry is dropping
In 2026, you don’t need to be a seasoned malware developer to produce something harmful if you can lean on AI for the fiddly bits like writing scripts.
That doesn’t make every attacker “advanced” overnight, or even capable of becoming advanced, but it does mean more people can create credible threats with fewer obvious errors.
The UK’s National Cyber Security Centre has warned that AI is likely to increase the capability and efficiency of cyber threat actors through to 2027, widening the pool of actors who can attempt intrusions.
Attackers can iterate and adapt faster
Even when the underlying techniques are familiar, AI speeds up the cycle of build → test → tweak → redeploy.
This speed boost matters because modern defense is often a race between how quickly a team can detect and respond, and how quickly an attacker can produce a slightly different variant that slips past.
The attack surface is expanding
The final driver is on the defender’s side: most organizations are more cloud-connected than ever, juggling SaaS apps, remote access tools, and a growing list of third-party integrations.
The proliferation of services could be fertile ground for AI-enabled attacks, because a convincing lure or a stolen session token can be more useful than a noisy exploit.
Where AI shows up in attacks
A useful way to think about “AI-created” threats is to stop picturing a single, self-contained super-malware, and instead look at where AI speeds up specific steps in an attack.
Initial access: better lures at a higher volume
Email is still one of the easiest ways to get a foot in the door, and generative AI helps attackers produce messages that look less like spam and more like ordinary business communication.
A process which previously might have taken days now takes seconds and can be performed by a single person, with results that include more polished language, fewer tell-tale formatting errors, and lures tailored to a specific sector or role.
The practical impact is that security teams can’t rely on users spotting obvious red flags, creating a need for strong identity controls and mail protections.
Delivery and obfuscation
One of the more telling recent examples came from Microsoft’s reporting on a phishing campaign which used an SVG attachment and AI-generated code to try to fool users.
The point isn’t that SVGs are new, but that attackers keep experimenting with formats and techniques that blend into everyday workflows, and AI makes it easier to generate or adapt code quickly.
Malware development
The clearest recent signal of where this is heading is VoidLink, an AI-assisted malware framework that went from concept to functional code quickly, producing output that looked more like the work of a small development team than a lone operator.
The important point for businesses isn’t the name of the tool, but what it represents: attackers can move through the build-and-test loop faster, generate more variations, and refresh their payloads when defenders start catching up.
Post-compromise
Once an attacker has access, the challenge for defenders is time.
AI doesn’t need to “run the whole operation” to be useful: increasing the speed of some components still boosts the chance of real impact.
Given this reality, businesses should bias towards controls that reduce blast radius, and tools that speed up detection and response across endpoints, identities, and email – for example, by bringing Microsoft Defender signals together in one place.
How AI helps attackers
Even if most attacks still involve a human operator, AI changes the maths by speeding up the steps around that operator.
The result is simple: once an attacker gets a foothold, you may have less time to spot it, understand it, and stop it before it becomes a bigger incident. And this “time compression” shows up in incident-response reporting.
Palo Alto Networks’ Unit 42 says the fastest time from intrusion to data exfiltration fell from around 4.8 hours in 2024 to 72 minutes in 2025, alongside a rise in intrusions spanning multiple surfaces.
Identity is also a recurring weak link: Unit 42’s work points to identity issues in a large share of incidents. Meanwhile, the World Economic Forum’s Global Cybersecurity Outlook 2026 found 87% of respondents saw AI-related vulnerabilities as the fastest-growing cyber risk over the course of 2025, and the UK’s NCSC, as noted earlier, expects AI to increase the capability and efficiency of cyber threat actors through to 2027.
For businesses, the takeaway is to make sure defenses work at modern speed – in particular, by reducing the time it takes to investigate and contain suspicious activity, ideally with linked visibility across systems.
What businesses can do
If AI is helping attackers move faster, the counterweight is making sure your security basics work at speed, too. Let's take a look at some key options.
Identity-first security
Most modern incidents either start with identity abuse, or quickly pivot to it. For companies, the goal is to make stolen passwords less useful, and risky sign-ins easier to spot and stop.
In Microsoft environments, that usually means getting serious about Microsoft Entra ID policies, such as making multi-factor authentication (MFA) the baseline for every sign-in. Done well, this reduces the number of “single click to compromise” scenarios, even when a phishing lure is persuasive.
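As a rough illustration rather than a prescription, the sketch below creates a report-only Conditional Access policy in Microsoft Entra ID that requires MFA for all users, via the Microsoft Graph API. It assumes you already have an app registration with the Policy.ReadWrite.ConditionalAccess permission and a valid access token; the token variable and policy name are placeholders.

```python
# Minimal sketch: create a report-only Conditional Access policy requiring MFA
# for all users via Microsoft Graph. ACCESS_TOKEN is a placeholder - obtain it
# from your own auth flow (e.g. an app registration with
# Policy.ReadWrite.ConditionalAccess permission).
import requests

ACCESS_TOKEN = "<token-from-your-auth-flow>"  # placeholder

policy = {
    "displayName": "Baseline: require MFA for all users",
    # Report-only mode lets you review the impact before enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Starting in report-only mode is a common way to see who would be affected before flipping the policy to enforced.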
Endpoint visibility and response
Email and the web are messy, users are human, and attackers only need one foothold, which is why endpoint protection is still important in 2026.
What a business ideally wants is the ability to detect suspicious behaviour, investigate quickly, and contain it before it spreads.
Microsoft Defender can provide that kind of visibility and response across devices, while also helping teams spot patterns that look like early-stage intrusion activity rather than waiting for obvious malware alerts.
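To make that concrete, here’s a minimal sketch of the kind of behavioural check a team might run through Microsoft Graph’s advanced hunting endpoint – in this case, Office applications spawning script interpreters, a common early-intrusion pattern. It assumes Microsoft Defender XDR is deployed and the app has the ThreatHunting.Read.All permission; the access token is a placeholder, and the process names are examples to tune for your own estate.

```python
# Minimal sketch: run an advanced hunting (KQL) query via Microsoft Graph to
# surface Office apps launching script interpreters. ACCESS_TOKEN is a
# placeholder from your own auth flow.
import requests

ACCESS_TOKEN = "<token-from-your-auth-flow>"  # placeholder

# KQL over Defender's DeviceProcessEvents table; adjust names for your estate.
KQL = """
DeviceProcessEvents
| where Timestamp > ago(1d)
| where InitiatingProcessFileName in~ ("winword.exe", "excel.exe", "outlook.exe")
| where FileName in~ ("powershell.exe", "wscript.exe", "mshta.exe")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName, ProcessCommandLine
"""

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"Query": KQL},
    timeout=60,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row.get("DeviceName"), row.get("FileName"), row.get("ProcessCommandLine"))
```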
Email and collaboration protections
If generative tools are helping attackers produce more convincing messages, you want controls that assume some of those messages will get through.
Microsoft Defender for Office 365 is designed for that layer, helping to reduce exposure to malicious links and attachments, and catching impersonation-style attacks that don’t always look like classic malware delivery.
Centralized detection and investigation
When attacks move quickly, the difference between a “low priority” alert and a serious incident is often context.
For example, a single suspicious sign-in might look harmless until you connect it to an unusual email attachment or new laptop process.
These scenarios are where Microsoft Sentinel comes in as a central point for correlation and investigation, helping teams join the dots across identity, endpoint, and email signals.
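As a sketch of what that correlation can look like in practice, the example below runs a KQL query against a Sentinel (Log Analytics) workspace using the azure-monitor-query Python SDK, joining risky sign-ins to recent emails with attachments for the same user. The workspace ID is a placeholder, the SigninLogs and EmailEvents tables assume the Microsoft Entra ID and Defender XDR connectors are enabled, and the UPN-to-recipient match is a simplifying assumption – adjust names and thresholds to your own environment.

```python
# Minimal sketch: correlate risky sign-ins with recent attachment-bearing
# emails for the same user in a Sentinel (Log Analytics) workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder

# Assumes UserPrincipalName matches the recipient email address (simplification).
KQL = """
SigninLogs
| where TimeGenerated > ago(1d)
| where RiskLevelDuringSignIn in ("medium", "high")
| project SigninTime = TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
| join kind=inner (
    EmailEvents
    | where TimeGenerated > ago(1d)
    | where AttachmentCount > 0
    | project EmailTime = TimeGenerated, RecipientEmailAddress, Subject, SenderFromAddress
) on $left.UserPrincipalName == $right.RecipientEmailAddress
| where EmailTime < SigninTime
| project SigninTime, UserPrincipalName, IPAddress, Subject, SenderFromAddress
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```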
AI for defenders
Finally, there’s a place for AI on the defense side, and it works best when you treat it as an accelerant for good processes.
Copilot for Security is positioned by Microsoft as a way to speed up triage and investigation – summarizing incidents, pulling relevant context together, and so on – to combat fast-moving threats.
The caveat is important: it won’t fix weak telemetry or unclear policies.
What it can do, though, is reduce friction once you’ve got the fundamentals in place, which is exactly what you need when the attacker loop is getting tighter.