How AI-powered threats are rewriting email security economics
Email security is now a resource allocation problem
Ten years ago, email security ROI was simple: deploy pattern-matching systems, accept some false positives, staff a SOC for escalations. The math worked because attacks followed patterns machines could learn.
AI broke that model.
Enterprise security teams now spend 25% of analyst time investigating email security false positives. That's a quarter of your security capacity on work that produces nothing. Industry research shows 65% of email security alerts are false positives.
Each one takes 33 minutes to investigate. Meanwhile, AI-generated attacks succeed because security teams are drowning in noise, unable to focus on threats that don't match historical patterns.
For IT leaders managing flat budgets, this isn't about needing more investment. The architecture itself no longer produces acceptable returns.
The economics problem
AI changed attacker economics. Recent Harvard research shows AI can fool over 50% of humans while cutting attack costs by 95% and increasing profitability up to 50-fold. Defender capabilities haven't kept pace—at least not if you're using security built for pre-AI threats.
Current email security creates predictable operational burden:
65% of alerts are false positives (SOC Best Practices, 2025)
25% of analyst time chases false positives (Ponemon/Exabeam, 2024)
33 minutes average investigation per alert (VMRay/Microsoft, 2025)
The absolute loss scales with organization size, but the inefficiency rate stays the same. A mid-market 5-person SOC loses 1.25 FTE to false positive investigation. An enterprise 40-person SOC loses 10 FTE—roughly $1.2-1.7M in annual analyst capacity spent investigating nothing.
The impact is identical at every scale: business email compromise targeting executives sits in queue for hours while security teams investigate false alarms. SOCs operate at 105-175% of capacity before any proactive security work happens. Larger organizations don't escape this. They just waste more.
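As a rough illustration of that arithmetic, the sketch below applies the 25% figure to both example SOC sizes; the per-analyst cost is an assumed placeholder for illustration, not a figure from the research cited.

```python
# Rough sketch of the false-positive cost arithmetic above.
# FALSE_POSITIVE_TIME_SHARE comes from the research cited;
# ASSUMED_ANALYST_COST is an assumption for illustration only.

FALSE_POSITIVE_TIME_SHARE = 0.25        # share of analyst time lost to false positives
ASSUMED_ANALYST_COST = 120_000          # assumed fully loaded cost per analyst, USD/year

def wasted_capacity(soc_headcount: int) -> tuple[float, float]:
    """Return (FTE lost, annual dollars lost) for a SOC of a given size."""
    fte_lost = soc_headcount * FALSE_POSITIVE_TIME_SHARE
    return fte_lost, fte_lost * ASSUMED_ANALYST_COST

for headcount in (5, 40):
    fte, dollars = wasted_capacity(headcount)
    print(f"{headcount}-person SOC: {fte:.2f} FTE, ~${dollars:,.0f}/year spent on false positives")
```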
Why incremental fixes don't work
Some organizations hire more analysts. This doesn't solve the architectural problem. It scales the inefficiency. If 65% of alerts are false positives consuming 25% of analyst time, hiring more people means 25% of their time is also wasted. The ROI never improves. The cost structure scales linearly with the problem.
Others tune detection thresholds or add more rules. Different trap: the false positive/false negative tension can't be resolved in systems that only hunt for threats.
Make detection aggressive to catch sophisticated attacks, and you quarantine legitimate business communications. Make it cautious to reduce false positives, and novel attacks succeed. Zero-sum trade-off. Improving one metric degrades the other.
The problem is architectural. Email security built on pattern-matching (Generation 1) or machine learning (Generation 2) shares the same limitation: it can only evaluate threat signals. It looks for suspicious patterns but has no way to validate business legitimacy.
When attackers reused templates and tactics, this worked. Pattern-matching caught 60-70% of threats. The remaining work was manageable.
AI changes the game. Attackers now generate unlimited unique variants, personalized to organizational context, with no historical precedent. Each attack is novel. Pattern-matching fails mathematically—you can't match patterns that don't repeat.
Machine learning fails the same way—you can't train models on attack patterns that haven't been seen.
The resource misallocation
This creates a problem for IT leadership. Your most valuable security resources—expert analysts with 7-15 years of experience who should be threat hunting and building detection capabilities—are trapped investigating false positives.
These analysts should be:
- Assessing security implications of AI tool adoption across business units
- Building threat intelligence programs that inform decisions
- Running tabletop exercises preparing leadership for attacks
- Enabling secure adoption of new communication platforms and business tools
Instead, 25% of their time goes to validating whether legitimate vendor invoices are phishing. At $85-120K fully-loaded compensation per analyst, this is a significant misallocation of security resources.
The shift to reasoning-based architecture
A third generation of email security architecture is emerging that changes the economic model by solving resource allocation architecturally.
Rather than only hunting for threat signals, these systems evaluate emails across two dimensions at once: threat indicators and business legitimacy patterns. This breaks the false positive/false negative tension.
For every email, the system runs parallel investigations. Threat signal collection examines authentication failures, suspicious relay paths, and manipulation tactics.
At the same time, business legitimacy analysis evaluates whether communication patterns match established organizational relationships, whether requests fit documented approval workflows, and whether sender behavior matches historical norms.
A reasoning layer—using large language models as the orchestration architecture, not a bolt-on feature—weighs all evidence and makes decisions.
Legitimate vendor communications with unusual characteristics (new domain, first-time sender, urgent language) can be auto-released because business legitimacy signals outweigh minor threat flags.
Emails that look technically clean but violate business logic (CFO requesting wire transfer bypassing standard approval workflows) trigger high-priority escalation.
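As a minimal sketch of that dual-dimension logic, the example below scores threat signals and business legitimacy separately and then combines them. The scores, thresholds, and categories are illustrative assumptions, not any vendor's implementation; in a real system the reasoning layer would be an LLM weighing evidence rather than fixed rules.

```python
# Illustrative sketch of dual-dimension evaluation: threat signals and
# business-legitimacy signals scored in parallel, then combined.
# All scores, thresholds, and categories here are assumptions.

from dataclasses import dataclass

@dataclass
class EmailEvidence:
    threat_score: float       # 0-1, from authentication, relay path, manipulation cues
    legitimacy_score: float   # 0-1, from known relationships, approval workflows, sender history

def triage(evidence: EmailEvidence) -> str:
    """Combine both dimensions instead of hunting for threat signals alone."""
    if evidence.legitimacy_score > 0.8 and evidence.threat_score < 0.5:
        # e.g. a known vendor on a new domain: minor threat flags, strong business context
        return "auto-release"
    if evidence.threat_score > 0.7 or evidence.legitimacy_score < 0.2:
        # e.g. a wire-transfer request that bypasses documented approval workflows
        return "escalate-high-priority"
    return "auto-handle-low-risk"

print(triage(EmailEvidence(threat_score=0.4, legitimacy_score=0.9)))  # auto-release
print(triage(EmailEvidence(threat_score=0.3, legitimacy_score=0.1)))  # escalate-high-priority
```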
This changes the economics. Instead of 65% false positive rates consuming 25% of analyst capacity, systems built on reasoning-based architecture can auto-release 70-80% of false positives, auto-handle 15-20% of contained low-risk threats, and escalate only 5-10% of complex attacks with complete investigation packages.
Analyst workload shifts from 25% of capacity on false positive triage to 3-5% of capacity validating high-confidence escalations—a 5-8x improvement in analyst productivity that holds at any organization size.
For a mid-market 5-person SOC, this recovers 1 FTE for other work. For an enterprise 40-person SOC, this recovers 8 FTE—the difference between running constant alert triage and building proactive threat hunting programs.
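In round numbers, the shift works out roughly as follows; 4% is an assumed midpoint of the 3-5% range above, and the headcounts are the same two example SOCs.

```python
# Back-of-the-envelope version of the workload shift described above.
# 0.04 is an assumed midpoint of the 3-5% range; 0.25 is the legacy figure cited.

LEGACY_TRIAGE_SHARE = 0.25       # capacity spent on false-positive triage today
REASONING_TRIAGE_SHARE = 0.04    # capacity spent validating high-confidence escalations

for headcount in (5, 40):
    recovered_fte = headcount * (LEGACY_TRIAGE_SHARE - REASONING_TRIAGE_SHARE)
    improvement = LEGACY_TRIAGE_SHARE / REASONING_TRIAGE_SHARE
    print(f"{headcount}-person SOC: ~{recovered_fte:.1f} FTE recovered, "
          f"~{improvement:.1f}x less triage time")
```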
Evaluation framework for IT leadership
When evaluating email security architecture, focus on economic outcomes, not feature lists. Ask vendors:
What percentage of detections never reach human analysts?
If the answer isn't "70-80% auto-released or auto-handled," the architecture hasn't solved resource allocation. You're buying a system that will consume 25% of analyst capacity at current rates.
How does the system model business legitimacy, not just threat signals?
Vendors that can only articulate threat detection capabilities are selling systems that perpetuate the false positive crisis. Systems that can explain how they validate business context, approval workflows, and communication patterns are architecturally different.
What investigation work happens before SOC escalation?
Legacy systems send alerts: "Suspicious email detected." Reasoning-based systems should deliver decision-ready packages: complete evidence collection, multi-dimensional analysis, risk calculation considering target authority and business impact, recommended actions with supporting rationale.
If analysts start investigation from scratch, you're paying for work the security system should have automated.
What's the total cost of ownership?
Calculate: license fees + analyst time investigating false positives (25% of capacity × team cost) + business disruption from quarantined legitimate emails + breach costs from missed threats. Architectures that reduce analyst burden by 80%+ can justify higher license costs because total economic return is better.
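A simple model of that calculation might look like the sketch below; every dollar figure in it is a placeholder to replace with your own numbers.

```python
# Minimal TCO comparison sketch for the calculation above.
# Every input is a placeholder assumption; substitute your own figures.

def email_security_tco(license_fees: float, team_cost: float,
                       false_positive_share: float,
                       disruption_cost: float,
                       expected_breach_cost: float) -> float:
    """License + wasted analyst capacity + business disruption + expected breach losses."""
    return (license_fees + team_cost * false_positive_share
            + disruption_cost + expected_breach_cost)

# Hypothetical 40-person SOC (~$4M team cost) under each architecture.
legacy = email_security_tco(100_000, 4_000_000, 0.25, 250_000, 500_000)
reasoning = email_security_tco(250_000, 4_000_000, 0.04, 50_000, 200_000)
print(f"legacy: ${legacy:,.0f}   reasoning-based: ${reasoning:,.0f}")
```

Even with a higher license line, the total can come out lower once recovered analyst time and reduced disruption are counted.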
The window is closing
Organizations have a finite window before AI-enhanced attacks become mainstream. By 2026-2027, the tools and techniques will be commoditized, expanding the threat actor pool.
Organizations that migrate to reasoning-based email security now establish operational advantages—better threat detection, yes, but more importantly, better resource allocation. Security teams can focus on real work rather than alert triage. Business operations face less friction from false positive quarantines.
Cost structures improve as analyst productivity increases.
Organizations that delay this transition will spend 2027-2028 managing preventable business disruptions while competitors operate more efficiently.
The architectural shift from pattern-matching to business-context reasoning determines whether email security becomes an advantage or a growing cost center. Email security has evolved from an infrastructure hygiene task into a resource allocation challenge.
The question for IT leadership isn't whether to invest more—it's whether to invest differently, in architectures that deliver better economic returns rather than linearly scaling costs.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Alan LeFort is CEO of StrongestLayer.