These are the biggest risks businesses see around using AI - including the most 'extreme' threats
Businesses are rushing to implement AI
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
You are now subscribed
Your newsletter sign-up was successful
Join the club
Get full access to premium articles, exclusive features and a growing list of member rewards.
- TrendAI report finds 67% of businesses pressured to deploy GenAI despite security concerns
- Key risks include sensitive data exposure, malicious prompts, expanded attack surface, and autonomous code abuse
- Governance gaps: only 38% have AI policies, 57% say AI evolves faster than it can be secured, and many lack visibility or kill switch mechanisms
Businesses are rushing to integrate Generative Artificial Intelligence (GenAI) into their processes and operations, despite knowing the risks they are exposing themselves to - and to make matters worse, many are unsure how to move forward and minimize their risks, further exacerbating the problem.
A new report from TrendAI polled 3,700 business and IT decision-makers across 23 countries, finding the majority (67%) was being pressured to approve AI integration despite security concerns.
One in seven (roughly 15%) described these concerns as “extreme”, but still approved deployment.
Article continues belowNot for the lack of awareness
The report outlined numerous risks associated with AI tools which are keeping business makers awake at night. For two in five, the biggest risk is AI agents accessing sensitive data, while a third (36%) worry about malicious prompts compromising security.
AI agents are programs that allow AI to operate apps, or even entire computers. Malicious prompts, shared via phishing emails, for example, could result in AI agents sending sensitive data to hacking groups, changing app settings, or even downloading malware.
For a third of the respondents (33%), AI creates a growing attack surface for criminals to exploit. The same percentage also fears abuse of trusted AI status and risks linked to autonomous code deployment.
“Organizations are not lacking awareness of risk, they’re lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely,” says Rachel Jin, Chief Platform & Business Officer, Head of TrendAI.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
Management and governance are more difficult to pull off than it seems, at least with AI. For more than half (57%), AI is advancing faster than it can be secured. That means, as soon as a system is set up, new potential risks emerge, forcing the defenders to re-evaluate their position. What’s more, 55% reported only moderate confidence in their understanding of AI legal frameworks, and just a third (38%) currently have comprehensive AI policies in place.
Regulation and compliance
Finally, for two in five (41%), unclear regulation and compliance standards are seen as a barrier to progress. This way of thinking creates something of a trap for organizations, as they end up using “shadow AI” - unsanctioned tools that defenders don’t have insight in. That way, they don’t know what gets shared, or which data ends up sent into the aether.
To be able to say they’ve safely integrated AI in their workflows, businesses need two things, the researchers suggest: observability and auditability, and a “kill switch” mechanism. At the moment, almost a third of the respondents (31%) said they lacked visibility over their entire AI systems.
When it comes to kill switch mechanisms, around 40% support the idea, but half (50%) are unsure about how to implement one.
Despite regulatory and governance challenges and risks, the opinion around AI remains positive. In fact, almost half (44%) believe agentic AI will “significantly improve” cyber defense in the short term.
“Agentic AI is moving organizations into a new risk category,” Jin added. “Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organizations are deploying systems they don’t fully understand or govern, and that risk is only going to increase unless action is taken.”

➡️ Read our full guide to the best antivirus
1. Best overall:
Bitdefender Total Security
2. Best for families:
Norton 360 with LifeLock
3. Best for mobile:
McAfee Mobile Security
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
You must confirm your public display name before commenting
Please logout and then login again, you will then be prompted to enter your display name.