The Human Risk Reckoning: Why security must evolve for an AI-augmented workforce
Human and AI behaviors redefine modern cybersecurity risk
Security models that were effective a few years ago are now under immense strain because of how rapidly organizations are changing. As we move into 2026, many teams are dealing with a larger and more complex risk landscape.
This is largely driven by rapid artificial intelligence (AI) adoption, increased automation, and the continued shift to cloud and collaboration platforms.
At the same time, attackers have growing access to phishing-as-a-service (PhaaS) offerings and other tools that make it easy for even the least technical criminals to launch scalable campaigns.
Lead CISO Advisor at KnowBe4.
These underlying challenges aren’t new. Issues like inconsistent security ownership, uneven controls across systems, and security being bolted on late in the delivery cycle still show up, only now they tend to surface faster, spread further, and carry greater impact.
The AI inflection point
As enterprises adopt AI at scale, it becomes clear that this is not just another threat category. AI represents a fundamental inflection point in risk management.
It introduces a dual risk. Internally, employees may overshare sensitive data into AI tools without fully understanding how that information is stored or protected. Externally, cybercriminals are using AI to generate deepfakes, impersonate trusted individuals and scale attacks with unprecedented speed and precision.
Although nearly all organizations report taking steps to address AI risk, many employees feel that access to approved tools is too slow, overly restrictive or inconsistently governed. At the same time, unapproved usage, or shadow AI, is becoming increasingly common.
Employees may already be using personal accounts with large language models that fall entirely outside organizational oversight, creating risk vectors that are effectively invisible. The same behaviors that make employees productive with AI can quickly become liabilities without real-time guardrails.
This is where security models are under the greatest strain to keep up.
A new risk
Historically, organizations have approached people-related security risk primarily through awareness training and teaching employees how to recognize threats and avoid mistakes.
That approach remains critical, as research has shown a 90% increase in cyber incidents stemming from the human element. On its own, however, it is no longer sufficient.
When risk exists quite literally everywhere employees work and communicate, perimeter-focused defenses and annual training cycles are structurally insufficient. This is because today’s workplace no longer consists of only people.
AI agents are increasingly embedded into critical workflows, operating alongside employees and interacting with sensitive data. While the purely human attack vectors remain, organizations are not applying the same level of behavioral risk training to AI agents as they do to their workforce.
The result? A new and largely unmanaged kind of risk.
Beneath this growing exposure lies a deeper disconnect between organizations and their employees. Nearly half of employees do not believe the data they handle belongs to the organization. Ambiguous ownership leads to personal rule-making around data sharing, storage and AI usage.
Identifying this gap in understanding makes one thing clear: culture, incentives and tooling shape behavior far more effectively than policy documents alone. Human risk is less about rules and more about clarity.
When you teach a child to cross a road safely, you explain the green and red signals, giving them a framework and the clarity to cross any road they approach for the rest of their life.
While training humans involves coaching and leadership, governing newly deployed agentic AI models demands entirely new approaches. Bringing both under one umbrella will prove a challenge for many organizations going into 2026, but just because something is difficult does not mean it shouldn't be done.
The reckoning
A revealing new study has found that 44% of organizations globally have disciplined employees who fell victim to phishing attacks. This prevailing punitive approach risks further undermining security outcomes, and leadership and employee perspectives on it are sharply misaligned.
Leaders tend to favor discipline and formal consequences, while employees overwhelmingly favor support, coaching and targeted guidance. Punishment-heavy strategies damage trust and weaken long-term resilience. When fear dominates, incident reporting declines, trust erodes and security teams become fatigued.
Organizations cannot punish their way to better security behavior. Mechanisms that reduce risk before mistakes happen, rather than reacting after the fact, are essential. Instead of focusing on placing blame, we must work to build a positive security culture.
This is where Human Risk Management, or HRM, must be positioned as a core piece of security strategy, instead of a supporting initiative. Cross-platform visibility into risky behaviors and employee-level risk signals should replace broad user categories and assumptions.
Building a positive culture requires supportive coaching delivered in real time, at the moment risk appears. In fact, studies have found that 'active learning' (learning by doing) is highly effective for retention.
This method reinforces and integrates security directly into daily tasks, and people are treated as adaptive participants, not static liabilities. AI systems must be governed in the same way, with behavioral baselines, monitoring and controls that reflect their growing role in the workforce.
HRM becomes the connective layer between human behavior, AI usage, and organizational resilience.
The direction of travel is clear. Organizations are moving toward people-plus-agent workforces, and the question of security is one of timing, not adoption. To sustain innovation without amplifying risk, security best practices must be embedded into both human and machine systems now.
Research already shows that early adopters benefit from lower incident rates, higher trust, and faster, safer AI-driven innovation. The future of cybersecurity belongs to organizations that stop trying to lock people down and start designing systems that help them make better decisions at the moment those decisions are made.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro