I am a chief security officer and here's why I think AI Cybersecurity has only itself to blame for the huge problem that's coming

Phishing, E-Mail, Network Security, Computer Hacker, Cloud Computing Cyber Security 3d Illustration
(Image credit: Shutterstock)

Listening to entrepreneurs discuss the potential of AI cybersecurity will give you déjà vu. The discussions are eerily similar to how we once talked about cloud computing when it emerged 15 years ago.

At least initially, there was a surprisingly prevalent misconception that the cloud was inherently more secure than on-premises infrastructure. In reality, the cloud was (and is) a massive attack surface. Innovation always creates new attack vectors, and to say AI is no exception is an understatement.

CISOs are generally aware of AI’s advantage, and for the most part they’re similarly aware that it’s creating new attack vectors. Those who took the right lessons from the development of cloud cybersecurity are right to be even more hesitant about AI.

Within the cloud, proper configuration of the right security controls keeps infrastructure relatively static. AI shifts faster and more dramatically, and is thus inherently more difficult to secure. Companies that got burned by being overeager about cloud infrastructure are now hesitant about AI for the same reasons.

Greg Dracon

Partner and Chief Security Officer, .406 Ventures.

Multi-industry AI adoption bottleneck

The knowledge gap isn’t about AI’s potential to drive growth or streamline operations; it’s about how to implement it securely. CISOs recognize the risks in AI’s expansive attack surface.

Without strong assurances that company data, access controls, and proprietary models can be safeguarded, they hesitate to roll out AI at scale. This is likely the biggest reason why AI apps at the enterprise level are coming out only at a trickle.

The rush to develop AI capabilities has created a multi-industry bottleneck in adoption, not because companies lack interest, but because security hasn’t kept pace. While technical innovation in AI has accelerated rapidly, protections tailored to AI systems have lagged behind.

This imbalance leaves companies exposed and without confidence to deploy at scale. Making matters worse, the talent pool for AI-specific cybersecurity remains shallow, delaying the hands-on support organizations need to integrate safeguards and move from adoption intent to execution.

A cascade of complicating factors

This growing adoption gap isn’t just about tools or staffing—it’s compounded by a broader mix of complicating factors across the landscape. Some 82% of companies in the US now have a BYOD policy, which complicates cybersecurity even absent AI.

Elon Musk’s Department of Government Efficiency (DOGE) has fired hundreds of employees at the U.S. government’s cybersecurity agency CISA, which worked directly with enterprises on cybersecurity measures. This dearth of trust only tightens this bottleneck.

Meanwhile, we’re seeing AI platforms like DeepSeek become capable of creating the basic structure for malware. Human CISOs, in other words, are trying to create AI cybersecurity capable of facing AI attackers, and they’re not sure how. So rather than risk it, they don’t do it at all.

The consequences are now becoming evident, and dealing a critical blow to adoption. It just about goes without saying: AI won’t reach its full potential absent widespread adoption. AI is not going to fizzle out like a mere trend, but AI security is lagging and inadequate and it’s clearly hampering development.

When “good enough" security isn’t enough

AI security is shifting from speculative to strategic. This is a market brimming with potential. Enterprises are grappling with the severity and scale of AI-specific threats, and the demand those challenges created are attracting wider investor interest. Organizations have no choice but to secure AI to fully harness its capabilities. Those that aren’t hesitating are actively seeking solutions through dedicated vendors or by building internal expertise.

This has created a lot of noise. A lot of vendors claim to be doing AI red teaming, while really just offering basic penetration testing in a shiny package. They may expose some vulnerabilities and generate initial shock value, but they fall short of providing the continuous and contextual insight needed to secure AI in real-world conditions.

If I were trying to bring AI into production in an enterprise environment, a simple pen test wouldn’t cut it. I would require robust, repeatable testing that accounts for the nuances of runtime behavior, emergent attack vectors, and model drift. Unfortunately, in the rush to move AI forward, many cybersecurity offerings are relying on this “good enough” pen testing, and that’s not good enough for smart organizations.

The reality is that AI security requires a fundamentally different approach – this is a new class of software. Traditional models of vulnerability testing fail to capture how AI systems adapt, learn, and interact with their environments.

Worse still, many model developers are constrained by their own knowledge silos. They can only guard against threats they’ve seen before. Without continuous external evaluation, blind spots will remain.

As AI becomes embedded across sectors and systems, cybersecurity needs to provide actually suitable solutions. That means moving beyond one-time audits or compliance checkboxes. It means adopting dynamic, adaptive security frameworks that evolve alongside the models they’re meant to protect. Without this, the AI industry will stagnate or risk serious security breaches.

We list the best encrypted messaging app for Android.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

TOPICS

Partner and Chief Security Officer, .406 Ventures.

You must confirm your public display name before commenting

Please logout and then login again, you will then be prompted to enter your display name.