The Human Firewall: even with AI, humans are still the last line of defense in cybersecurity


Even with today’s vast arsenal of cybersecurity tools and AI-enhanced threat detection, attackers continue to succeed – not because the technology is failing, but because the human link in the defensive chain remains exposed. Cybercriminals almost always take the path of least resistance to execute a breach, which often means targeting people rather than systems.

According to McKinsey, a staggering 91% of cyberattacks have less to do with technology and more to do with manipulating and taking advantage of human behavior. In other words, despite technologies like AI advancing at breakneck speed, cybercriminals are still more likely to hack people than machines.

From a cybercriminal’s perspective, this makes sense. It’s the path of least resistance. Why spend resources hacking your way through a high-tech, AI-secured front door when there’s an open window around the back? This isn’t news to CISOs – according to a 2024 IBM survey, almost three-quarters (74%) now identify human vulnerability as their top security risk. They’re aware of the open window, and now they’re trying to secure it.

Sami Alsahhar

Senior Manager, Presales Engineering at One Identity.

Easier said than done

That’s easier said than done, however. Whether it’s a well-timed phishing email, a spoofed call, a deepfake video, or a barrage of authentic-seeming push notifications designed to wear down a user’s judgment, attackers are adapting faster than defenses can compensate.

The reality is that while security vendors race to outpace attackers with smarter algorithms and tighter controls, the tactics that most reliably lead to breaches are psychological, not technical. Threat actors are exploiting trust, fatigue, social norms, and behavioral shortcuts – tactics far more subtle and effective than brute-force code.

It’s not a lack of technology leaving organizations vulnerable to these techniques, it’s a lack of alignment between those tools and the way people actually think and operate. In fast-paced, high-pressure environments, employees don’t have the bandwidth to second-guess every request or scrutinize every prompt.

They rely on instincts, familiarity, and patterns they’ve learned to trust. But those very instincts are what attackers hijack, turning help desk tickets into access exploits, or mimicked CFOs into multi-million-dollar heists. As generative AI accelerates the realism and reach of these tactics, organizations face a critical question: not just how to keep the bad actors out, but how to better equip their people within. Because when breaches hinge on human decisions, cybersecurity isn’t just a technology issue – it’s a human one.

Trust, bias, and the psychology of security breaches

Human behavior is a vulnerability, but it’s also a predictable pattern. Our brains are wired for efficiency, not scrutiny, which makes us remarkably easy to manipulate under the right conditions. Attackers know this and design their exploits accordingly. They play on urgency to override caution, impersonate authority figures to disarm skepticism, and drip-feed small requests to trigger consistency bias. These tactics are ruthlessly calculated, and they work not because people are careless, but because they’re human.

In early 2024, a finance worker at a Hong Kong firm was tricked into transferring $25 million after attending a video call with what appeared to be the company’s CFO and other colleagues – each one a convincing AI-generated deepfake. The attackers used publicly available footage to clone faces and voices, creating a seamless illusion that exploited trust and familiarity with devastating effect.

The eye-opening part is that these deepfake tools are now readily available. Modern social engineering doesn’t rely on obvious red flags. The emails aren’t riddled with typos, and the impersonations don’t sound robotic. Thanks to generative AI, deepfake technology, and access to vast training data, attackers can now create incredibly convincing personas that mirror the tone, behavior, and language of trusted colleagues. In this environment, even the most well-trained employee can fall victim without fault.

Heuristics – mental shortcuts – are frequently exploited by attackers who know what to look for. “Authority bias” leads people to follow instructions from perceived leaders, like a spoofed email from a CEO. The “scarcity principle” ramps up pressure by creating false urgency, making employees feel they must act immediately.

And “reciprocity bias” plays on basic social instincts – once someone has received a seemingly benign gesture, they’re more likely to respond positively to a follow-up request, even if it’s malicious. What looks like a lapse in judgment is often just an expected outcome of cognitive overload and the everyday use of heuristics.

Where policy meets psychology

Traditional identity and access management (IAM) strategies tend to assume that users will behave predictably and rationally – that they’ll scrutinize every prompt, question every anomaly, and follow policy to the letter. But the reality inside most organizations is far messier. People work quickly, switch contexts constantly, and are bombarded with notifications, tasks, and requests.

If security controls feel too rigid or burdensome, users will find workarounds. If prompts are too frequent, they’ll be ignored. This is how good policy gets undermined – not out of negligence, but because the design of the system clashes with the psychology of its users. Good security mechanisms shouldn’t add friction; they should seamlessly guide users towards better choices.

Applying principles like Zero Trust, least privilege, and just-in-time access can dramatically reduce exposure, but only if they’re implemented in ways that account for cognitive load and context. Automation can help here: granting and revoking access based on dynamic risk signals, time of day, or role changes without requiring users to constantly make judgment calls.
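To make this concrete, here is a minimal sketch of a risk-signal-driven access decision of the kind described above. The signal names, weights, and thresholds are purely illustrative assumptions, not taken from any specific IAM product – the point is that the system, not the user, weighs the context.

```python
# Hypothetical sketch of a dynamic, risk-based access decision.
# Signal names, weights, and thresholds below are illustrative assumptions.

RISK_WEIGHTS = {
    "new_device": 30,
    "unusual_location": 25,
    "outside_working_hours": 15,
    "recent_role_change": 20,
}

def risk_score(signals: dict) -> int:
    """Sum the weights of every risk signal that is currently active."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def access_decision(signals: dict) -> str:
    """Map a risk score to an action without asking the user to judge."""
    score = risk_score(signals)
    if score >= 50:
        return "deny"          # block or revoke access automatically
    if score >= 25:
        return "step_up_auth"  # one stronger factor, not a barrage of prompts
    return "allow"             # low risk: the control stays invisible

# A familiar device during working hours sails through silently;
# a new device in an unusual location is stopped outright.
print(access_decision({}))
print(access_decision({"new_device": True, "unusual_location": True}))
```

The design choice worth noting is the middle tier: rather than a binary allow/deny, moderate risk triggers a single step-up challenge, which keeps friction proportional to context instead of constant.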

Done right, identity management becomes an invisible safety net, quietly adapting in the background, rather than demanding constant interaction. Humans shouldn’t be removed from the loop, but they should be freed from the burden of catching what the system should already detect.

Building a security culture

Technology may enforce access policies, but culture determines whether people follow them. Building a secure organization has to be about more than simply enforcing compliance. That starts with security training that goes beyond phishing drills and password hygiene to address how people actually think and react under pressure. Employees need to recognize their own cognitive biases, understand how they’re being targeted, and feel empowered – not penalized – for slowing down and asking questions.

Equally important is removing unnecessary friction. When access controls are intuitive, context-aware, and minimally disruptive, users are more likely to engage with them properly. Role-based and attribute-based access models, combined with just-in-time permissions, help reduce overprovisioning without creating frustrating bottlenecks in the form of pop-ups and interruptions. In other words, modern IAM systems need to support and empower employees rather than make them constantly jump through hoops to get from one app or window to another.
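A just-in-time grant of the kind mentioned above can be sketched in a few lines: access is issued for a bounded window and simply expires on its own, so standing permissions never accumulate. The class and field names here are assumptions for illustration, not any vendor’s API.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of a just-in-time (JIT) permission grant.
# Names and the one-hour window are assumptions, not a real IAM API.

class JITGrant:
    """A time-boxed permission that lapses without any user interaction."""

    def __init__(self, user: str, resource: str, ttl: timedelta):
        self.user = user
        self.resource = resource
        self.expires_at = datetime.now(timezone.utc) + ttl

    def is_active(self) -> bool:
        """True only while the grant's window is still open."""
        return datetime.now(timezone.utc) < self.expires_at

# Grant an engineer one hour on a sensitive system; no pop-up is needed
# later to take it away - the expiry does the revocation.
grant = JITGrant("alice", "billing-db", ttl=timedelta(hours=1))
print(grant.is_active())
```

Because revocation is automatic, the user never faces an interruption when the task ends, and the organization never carries the risk of a forgotten standing entitlement.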

The human firewall isn’t going anywhere

The biggest takeaway here is that cybersecurity isn’t just a test of systems, AI-driven or not – it’s a test of people. The human firewall is arguably an organization’s biggest weakness, but with the right tools and policies in place, it can become its greatest strength. Our goal should not be to eliminate human error or change the innate nature of humans, but to design identity systems that make secure behavior the default – easy, intuitive, and frictionless.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

