The war on trust: how AI is rewriting the rules of cyber resilience


You don’t need me to remind you that AI is now everywhere and at the forefront of C-suite agendas globally. But a less-discussed consequence is how fundamentally trust has been transformed as a result - for both individuals and businesses.

What was once guided by instinct and intuition is now quantifiable, testable, and machine-analyzed. Yet, despite the rise of sophisticated technology, attackers are still targeting the most vulnerable link: humans.

Anna Chung

Principal Researcher at Palo Alto Networks Unit 42.

Our latest Global Incident Response Report: Social Engineering Edition reveals that 36% of all cyber incidents begin with social engineering - clear proof that the human element remains the favorite entry point for cybercriminals.

AI is rewriting the rules of this battlefield. It is clearly giving criminals unprecedented power to imitate human tone, timing, and emotion with uncanny precision, while simultaneously equipping defenders with advanced tools to detect deception and continuously validate integrity.

The result is a high-stakes struggle over trust itself: who controls it, who exploits it, and who safeguards it.

Resilience no longer hinges on blind faith in technology alone. It depends on how effectively businesses manage trust across people, processes, and intelligent systems that never rest.

Why trust is now the primary attack surface

Despite breakthroughs in automation and detection, most major breaches still begin with a single human decision. It might be a click, a shared credential, or a conversation that feels routine. Social engineering thrives in these everyday moments - where familiarity dulls caution and attackers disguise manipulation as trust.

Attackers are far from guessing; they’re studying organizational dynamics and individual behaviors with the enthusiasm of a PhD student - minus the ethics. Many campaigns now blend multiple tactics, from malvertising and smishing to MFA (multi-factor authentication) bombing, to wear down vigilance.

Our research highlights that 65% of social engineering attacks used phishing tactics, with 66% targeting privileged accounts and 45% impersonating internal personnel.

This shows that while phishing remains dominant, the sophistication lies in the context: messages that sound like colleagues, mimic legitimate business correspondence, or blend naturally into ongoing workflows.

What makes this wave so dangerous is its adaptability. Each failed attempt reinforces the next, teaching AI-driven adversaries how humans respond under pressure. These attacks allow threat actors to escalate privileges rapidly, sometimes moving from initial access to domain administrator in under 40 minutes, without deploying any malware.

The attacks largely exploit control gaps and alert fatigue: 13% of social engineering cases succeeded because critical alerts were missed or misclassified. This reality demands a stronger focus on behavioral detection rather than relying solely on technical controls.
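To make the behavioral-detection idea concrete, here is a minimal sketch of one such check: flagging accounts that acquire admin rights unusually soon after first appearing, echoing the under-40-minutes escalations described above. The event log, account names, and action labels are hypothetical, and a real detection pipeline would draw on far richer telemetry.

```python
from datetime import datetime, timedelta

# Hypothetical event log of (timestamp, account, action) tuples.
# Account names and action labels are illustrative, not a real schema.
EVENTS = [
    (datetime(2024, 5, 1, 9, 0), "contractor7", "first_login"),
    (datetime(2024, 5, 1, 9, 25), "contractor7", "added_to_domain_admins"),
    (datetime(2024, 5, 1, 8, 0), "alice", "first_login"),
    (datetime(2024, 5, 3, 11, 0), "alice", "added_to_domain_admins"),
]

def rapid_escalations(events, window=timedelta(minutes=40)):
    """Flag accounts granted admin rights within `window` of first being seen."""
    first_seen = {}
    flagged = []
    for ts, user, action in sorted(events):
        first_seen.setdefault(user, ts)  # earliest activity per account
        if action == "added_to_domain_admins" and ts - first_seen[user] <= window:
            flagged.append(user)
    return flagged

print(rapid_escalations(EVENTS))  # ['contractor7'] - escalated in 25 minutes
```

Because a rule like this keys on behavior rather than a malware signature, it can surface the malware-free escalations the report describes, and its output can be ranked above routine alerts to counter alert fatigue.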

Defending against social engineering requires far more than awareness training; it requires systems capable of detecting deviations before trust is exploited.

How AI-driven defense can identify behavioral anomalies before damage occurs

We are noticing that as AI attacks accelerate, defenders are responding in kind - harnessing machine intelligence to surface what the human eye can’t see. The next frontier of cybersecurity lies in behavioral analytics: detecting the subtle deviations that signal deception before damage occurs.

We should not overemphasize AI’s offensive potential. The real opportunity lies in building our own AI guardian capabilities - systems that understand behavioral baselines, detect anomalies, and continuously validate identity in real time.

This is more than a defensive upgrade; it’s a governance transformation. It allows organizations to embed verification behind every act of trust and every access event, ensuring that trust is earned.

AI-driven defense tools now analyze everything from communication tone to login patterns, spotting inconsistencies that suggest manipulation or impersonation.

These systems continuously learn what “normal” looks like within an organization - how teams collaborate, when accounts log in, what language employees use - and flag anomalies in real time. To defend effectively, we must use AI to fight AI.
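A toy illustration of the baseline idea, assuming login hour as the only signal: learn an account’s typical login times from history, then flag logins that deviate sharply. The history values and threshold are invented for the example; production systems model many signals jointly, not a single feature.

```python
from statistics import mean, stdev

# Hypothetical login hours (0-23) observed for one account; values are illustrative.
history = [9, 9, 10, 8, 9, 10, 9, 8, 10, 9]

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour sits more than `threshold` standard
    deviations from the account's learned baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(hour - mu) / sigma > threshold

print(is_anomalous(9, history))   # False - typical mid-morning login
print(is_anomalous(3, history))   # True  - a 3 a.m. login stands out
```

The same learn-the-baseline, score-the-deviation pattern generalizes from login times to communication tone, collaboration patterns, and access sequences.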

This proactive approach transforms security from a reactive shield into an anticipatory system. Instead of waiting for alerts after a breach, detection models surface early indicators of compromise, even when no technical exploit is visible.

In this way, AI acts as both a microscope and an alarm bell, guiding human analysts toward the moments where trust begins to fracture.

What a “trust governance” mindset means for enterprise security

Enterprises can no longer treat trust as a soft value. It must be managed like any other operational asset. A trust governance mindset reframes access, verification, and accountability as measurable elements of security posture. It means building systems where trust is earned, validated, and, when necessary, revoked automatically.

In practice, this involves applying zero-trust principles beyond networks and devices to include people and processes. Roles, behaviors, and relationships are continuously assessed against risk signals, ensuring that authorization aligns with real-time context.
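The continuous-assessment idea above can be sketched as a simple risk-scoring gate on each access request. The signal names, weights, and thresholds here are entirely hypothetical; real zero-trust engines use far richer policy and context, but the shape - score the request, then allow, step up verification, or deny - is the same.

```python
def access_decision(signals, deny_threshold=0.6):
    """Score an access request against real-time context signals and
    return an action: allow, require step-up verification, or deny.
    Signal names and weights are illustrative assumptions."""
    weights = {
        "new_device": 0.3,
        "unusual_location": 0.3,
        "privileged_resource": 0.2,
        "off_hours": 0.2,
    }
    score = sum(w for name, w in weights.items() if signals.get(name))
    if score >= deny_threshold:
        return "deny"
    if score > 0:
        return "step_up"  # re-verify before extending trust
    return "allow"

print(access_decision({"new_device": True, "unusual_location": True,
                       "privileged_resource": True}))   # deny
print(access_decision({"off_hours": True}))             # step_up
print(access_decision({}))                              # allow
```

Because every decision is computed from observable signals, each grant or denial is also auditable - which is what turns trust into something governed rather than assumed.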

Clear visibility across users, suppliers, and AI systems turns trust into something observable and auditable rather than assumed.

When organizations govern trust, they transform it from a vulnerability into a defense layer. It creates a living security fabric - one that adapts to new risks as quickly as attackers evolve their tactics.

Trust has become both the primary target and the most vital asset. AI doesn’t merely escalate risks by enabling more sophisticated attacks; it also empowers defenders to act faster and smarter.

By embedding AI-driven behavioral analytics and trust governance into security frameworks, organizations can shift from reacting to breaches to anticipating and preventing them.

Ultimately, resilience will hinge on our ability to govern trust continuously and intelligently - transforming it from a vulnerability into a dynamic defense that adapts in real time.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Anna Chung

Anna Chung is a Principal Researcher at Palo Alto Networks Unit 42.
