Love and hate: tech pros overwhelmingly like AI agents but view them as a growing security risk
AI agents need to be monitored just like regular employees, report warns

- Nearly half of IT teams don’t fully know what their AI agents are accessing daily
- Enterprises love AI agents, but also fear what they’re doing behind closed digital doors
- AI tools now need governance, audit trails, and control just like human employees
Despite growing enthusiasm for agentic AI across businesses, new research suggests that the rapid expansion of these tools is outpacing efforts to secure them.
A SailPoint survey of 353 IT professionals with enterprise security responsibilities has revealed a complex mix of optimism and anxiety over AI agents.
The survey reports 98% of organizations intend to expand their use of AI agents within the coming year.
AI agent adoption outpaces security readiness
AI agents are being integrated into operations that handle sensitive enterprise data, from customer records and financials to legal documents and supply chain transactions. Even so, 96% of respondents said they view these very agents as a growing security threat.
One core issue is visibility: only 54% of professionals claim full awareness of the data their agents can access, leaving nearly half of enterprise environments in the dark about how AI agents interact with critical information.
Compounding the problem, 92% of those surveyed agreed that governing AI agents is crucial for security, but just 44% have an actual policy in place.
Furthermore, eight in ten companies say their AI agents have taken actions they weren’t intended to take, including accessing unauthorized systems (39%), sharing inappropriate data (33%), and downloading sensitive content (32%).
Even more troubling, 23% of respondents admitted their AI agents have been tricked into revealing access credentials, a potential goldmine for malicious actors.
One notable insight is that 72% believe AI agents present greater risks than traditional machine identities.
Part of the reason is that AI agents often require multiple identities to function efficiently, especially when integrated with high-performance AI tools or systems used for development and writing.
Calls for a shift to an identity-first model are growing louder, with SailPoint and others arguing that organizations need to treat AI agents like human users, complete with access controls, accountability mechanisms, and full audit trails.
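In practice, that identity-first model boils down to giving each agent its own identity, a least-privilege set of permissions, and a log of every access attempt. The sketch below illustrates the idea in a few lines of Python; the class and scope names are purely illustrative assumptions, not part of SailPoint's products or any vendor API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """A hypothetical agent identity, governed like a human user account."""
    agent_id: str
    allowed_scopes: set          # least-privilege grants, e.g. {"finance:read"}
    audit_log: list = field(default_factory=list)

    def check_access(self, scope: str) -> bool:
        """Record every access attempt in the audit trail, then enforce scopes."""
        allowed = scope in self.allowed_scopes
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "scope": scope,
            "allowed": allowed,
        })
        return allowed

# An invoice-processing agent may read financial data, nothing else.
agent = AgentIdentity("invoice-bot", {"finance:read"})
print(agent.check_access("finance:read"))  # in-scope request is permitted
print(agent.check_access("hr:read"))       # out-of-scope request is denied
print(len(agent.audit_log))                # both attempts were logged
```

The key design point, mirroring the survey's recommendation, is that denied requests are still written to the audit log, so the "unauthorized access" incidents respondents reported would at least be visible after the fact.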
AI agents are a relatively new addition to the business space, and it will take time for organizations to fully integrate them into their operations.
“Many organizations are still early in this journey, and growing concerns around data control highlight the need for stronger, more comprehensive identity security strategies,” SailPoint concluded.

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking. Efosa developed a keen interest in technology policy, specifically exploring the intersection of privacy, security, and politics. His research delves into how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. Upon joining TechRadar Pro, in addition to privacy and technology policy, he is also focused on B2B security products. Efosa can be contacted at this email: udinmwenefosa@gmail.com