The average organization now reports over 200 GenAI-related data policy violations each month
Shadow AI remains a key challenge for organizations
- GenAI SaaS usage tripled, with prompt volumes surging sixfold in one year
- Nearly half of users rely on unsanctioned “Shadow AI,” creating major visibility gaps
- Sensitive data leaks doubled, with insider threats tied to personal cloud app use
Generative Artificial Intelligence (GenAI) might be great for productivity, but it comes with serious security and compliance complications. That is according to a new report from Netskope, which says that as the use of GenAI in the office skyrockets, so do policy violation incidents.
In its Cloud and Threat Report: 2026, released earlier this week, Netskope said GenAI Software-as-a-Service (SaaS) usage among businesses is “rapidly increasing”, with the number of people using tools like ChatGPT or Gemini tripling within the year.
Users are also spending significantly more time with the tools: the number of prompts sent to the apps increased sixfold in the last 12 months, from roughly 3,000 per month a year ago to more than 18,000 per month today.
Shadow AI
What’s more, the top 25% of organizations are sending more than 70,000 prompts per month, and the top 1% are sending more than 1.4 million prompts per month.
But many of these tools, and their use cases, were never sanctioned by IT departments or company leadership. Almost half (47%) of GenAI users rely on personal AI apps (so-called “Shadow AI”), leaving the organization with no visibility into what data is being shared or which tools are processing it.
As a result, the number of incidents where users are sending sensitive data to AI apps has doubled in the past year.
Now, the average organization is seeing a staggering 223 incidents per month. Netskope also said that personal apps are a “significant insider threat risk”, as 60% of insider threat incidents involved personal cloud app instances.
Regulated data, intellectual property, source code, and credentials are frequently being sent to personal app instances in violation of organizational policies.
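To illustrate the kind of guardrail such data flows call for, here is a minimal, hypothetical sketch in Python of scanning an outbound prompt for obvious secrets before it leaves for an external GenAI service. The patterns and the check_prompt helper are assumptions made for this example; they are not taken from Netskope's report or any vendor's actual product, and real DLP engines use far richer detectors.

```python
import re

# Illustrative detectors only (an assumption for this sketch); real tools cover
# regulated data, proprietary source code, and many more credential formats.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Summarise this config: aws_key=AKIAABCDEFGHIJKLMNOP, owner=jane@example.com"
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matches sensitive patterns {findings}")
    else:
        print("Prompt allowed")
```

In this toy setup, a prompt containing an AWS-style key and an email address would be flagged before it reaches a personal AI app, which is the visibility gap the report describes.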
“Organizations will struggle to maintain data governance as sensitive information flows freely into unapproved AI ecosystems, leading to an increase in accidental data exposure and compliance risks,” the report concludes.
“Attackers, conversely, will exploit this fragmented environment, leveraging AI to conduct hyperefficient reconnaissance and craft highly customized attacks targeting proprietary models and training data.”