Gartner: GenAI has broken traditional cybersecurity awareness – what comes next?


Cybersecurity awareness has long relied on a simple premise: educate employees, reduce risk. But in 2026, that model is no longer holding.

Alex Michaels

Director Analyst at Gartner.

A gap has opened between traditional awareness programs and the realities of modern cyber risk.


For security and risk management leaders, awareness alone is no longer enough.

The human risk surface is expanding

GenAI adoption has surged across organizations, with more than 86% now piloting or deploying these tools. What began as experimentation has quickly become embedded in day-to-day workflows, often without corresponding governance or oversight.

Employees are not waiting for formal approval. Many are turning to personal GenAI accounts for work tasks, inputting sensitive data into public tools, or downloading unapproved applications. This phenomenon, often described as “shadow AI,” is increasing employee-initiated cybersecurity risk.

According to Gartner’s 2025 Cybersecurity Innovations in AI Risk Management and Use Survey, over 57% of employees use personal GenAI accounts for work, and 33% admit to inputting sensitive work information into public or unapproved GenAI tools.

External threats are evolving as well. Deepfakes and advanced phishing attacks are becoming more sophisticated thanks to GenAI capabilities. The same survey finds that 35% of organizations have been affected by deepfake attacks, and that AI-assisted phishing emails have doubled over the past two years, making some threats harder for employees to detect.

This creates a dual challenge: organizations are exposed both internally, through unmanaged AI use, and externally, through AI-augmented attacks.

Why traditional awareness programs are failing

Most cybersecurity awareness programs were built for a different era. They focus on static training, periodic campaigns, and generic guidance such as “don’t click suspicious links”.

But GenAI changes the rules.

First, it reduces the visibility of threats. AI-generated content is often indistinguishable from legitimate communications, making it far harder for employees to rely on traditional cues.

Second, it increases the speed and scale of attacks. What once required time and effort can now be automated and personalized at volume.

Third, it introduces entirely new risk behaviors. Prompt injections, insecure use of AI tools, and the inadvertent sharing of sensitive data through GenAI platforms are not covered by legacy training models.

The outcome is clear: despite continued investment in awareness, human-related risk exposure is not decreasing.

From awareness to behavior: a necessary shift

Cybersecurity leaders must focus on security behavior and culture programs (SBCPs), which emphasize how employees act in real-world scenarios rather than only what they know.

SBCPs aim to drive secure GenAI-related work practices, recognizing that employees will make judgment calls and use AI tools. The goal is not to eliminate these behaviors, but to shape them safely.

In practice, this means embedding security into daily workflows rather than treating it as a periodic intervention. Training evolves from generic modules to simulations that replicate AI-driven attacks, including deepfakes and advanced phishing.

Policies become clear and actionable, covering GenAI usage, data handling, and prompt design. Reporting mechanisms are streamlined to encourage faster escalation of suspicious activity.

Behavior change requires reinforcement. One-off training sessions are replaced by continuous engagement, microlearning, and real-time feedback.

Securing human interaction with AI

As GenAI becomes embedded across business processes, securing the interaction between people and AI systems becomes a critical control point.

This introduces new priorities for security and risk management leaders.

First, organizations must establish clear boundaries for GenAI use. This includes defining approved tools, setting data classification rules, and ensuring employees understand the risks of sharing sensitive information.

Second, governance must extend beyond IT. GenAI risk intersects with legal, compliance, data protection and executive decision-making. Without senior leadership involvement, efforts to manage these risks will remain fragmented.

Third, organizations must invest in AI literacy. Employees need to understand not only how to use GenAI tools, but how those tools can be manipulated. This includes recognizing hallucinations, validating outputs, and maintaining human oversight.

Finally, security teams must consciously accept a degree of operational friction. Slowing down to verify an unusual request or validate an AI-generated output is no longer inefficiency; it is resilience.

A cultural, not technical, inflection point

There is a temptation to view GenAI-related cyber risk as a technical problem that can be solved with better tools, more controls, or stricter policies.

But the evidence suggests otherwise.

Overreliance on technical controls does little to address the behavioral drivers of risk. Employees will continue to find workarounds if security measures are perceived as barriers to productivity. Meanwhile, attackers will continue to exploit human trust, curiosity and urgency.

What is required is a cultural shift.

Security must be reframed as an enabler of safe AI adoption, empowering employees to act responsibly and report suspicious activity. The aim is not to eliminate all risk but to build an environment where secure behavior is the default.

What comes next

GenAI represents a foundational shift in how organizations operate and in how cyber threats take shape. Cybersecurity awareness programs must evolve to focus on behavior, embed security into daily practices, and treat human risk as dynamic and continuously managed.

In an AI-driven world, security and risk management leaders must remember that risk is defined less by knowledge and more by how employees behave in the moments that matter.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
