Beyond AI-powered cybersecurity: why context and visibility are still a CISO’s top priority
CISOs are beginning to understand that AI isn’t a silver bullet

AI has been a real game-changer for productivity and automation, but as the technology evolves, the expectations placed on it are growing fast – particularly in the field of cybersecurity.
From boardrooms to SOC teams, the promise is compelling: AI will reduce false positives, accelerate detection, automate response, and unburden fatigued analysts. But while these capabilities certainly aren’t out of reach, AI is not the “plug-and-play” solution some might hope for. Without the right data, context, and oversight, the lens offered by AI is blurry at best.
According to a 2024 report by IBM, roughly two-thirds of organizations say they’re now deploying AI tools and automation across their SOC environments. However, a 2025 survey by Darktrace reveals that less than half (42%) of CISOs have confidence in their AI deployment and fully understand how AI fits into their security stack. This gap between AI deployment and understanding how to extract value from it isn’t sustainable long-term.
Sprawling Webs of Interconnectivity
Networks used to be small, contained and relatively easy to protect – often confined to an office or single cloud computing environment. Today, they’re sprawling webs of interconnectivity spanning multiple clouds and endpoint devices. In other words, cybersecurity has gotten more complex.
There’s a growing assumption that AI can shed light on this complexity – that if you throw enough data at a model, it will separate the signal from the noise, even without deep integration into your environment. But threats don’t exist in a vacuum. They move through systems, exploit blind spots, and adapt to patterns. And unless an AI system understands the operational baseline – what’s normal, what’s sanctioned, what’s truly anomalous – it is essentially making educated guesses. Sometimes it guesses right, but when it doesn’t, the consequences can be costly.
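To make that concrete, here is a minimal sketch of what “knowing the operational baseline” can look like in practice: new events are compared against a profile of activity observed during a known-good window, and pairings rarely or never seen before are the ones worth a look. Every name, count, and threshold below is illustrative, not drawn from any particular product.

```python
from collections import Counter

# Hypothetical baseline: counts of (user, action) pairs observed during
# a known-good window. In practice this profile would be learned from
# weeks of telemetry, not hard-coded.
baseline = Counter({
    ("alice", "ssh_login"): 240,
    ("alice", "db_read"): 1900,
    ("backup_svc", "file_copy"): 5000,
})

def is_anomalous(user: str, action: str, min_count: int = 10) -> bool:
    """Flag (user, action) pairs seen fewer than min_count times in the
    baseline window; pairs never seen before count as zero."""
    return baseline[(user, action)] < min_count

print(is_anomalous("alice", "ssh_login"))       # False: routine activity
print(is_anomalous("backup_svc", "ssh_login"))  # True: never seen before
```

Crude as it is, the sketch captures the point: without the baseline, neither verdict is possible, no matter how sophisticated the model.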
None of this is to say that AI isn’t a force for good. It’s an incredibly powerful tool when wielded in the right way, but businesses need to pace themselves and create the right environment before it can truly deliver on its promises.
Are Businesses Prepared for AI?
The excitement around AI isn’t new or exclusive to cybersecurity. According to Gartner’s most recent Hype Cycle, both generative AI and cloud-based AI services are currently in the “peak of inflated expectations” phase. What comes next, with any new technology, is the “trough of disillusionment” – this is where the hype meets reality and industries realize that some lessons need to be learned before the technology can ascend to the last part of the cycle, “the plateau of productivity”.
This is the very pattern that security teams now find themselves in with AI. Early deployments have revealed just how brittle AI can be when removed from the controlled conditions of lab testing. Sophisticated models that looked flawless in demos can falter in the complex, unpredictable context of a live enterprise environment.
False positives are one problem. Analysts know the fatigue of chasing alerts that lead nowhere – and AI, when misapplied, can actually amplify that noise rather than reduce it. But the bigger risk is what AI misses. Algorithms trained on generalized threat data might completely overlook subtle, organization-specific anomalies, such as lateral movement that piggybacks on a rarely used internal tool, or data exfiltration masked by a legitimate third-party integration. These are the types of threats that slip through when detection efforts lack specific environmental context.
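As an illustration of what that environmental context can look like once encoded, here is a hypothetical rule that no generic model could learn from global threat data: in this imagined environment, a legitimate internal tool is only sanctioned on two jump hosts, so any other origin is suspect. Every host and tool name is invented for the example.

```python
from dataclasses import dataclass

# Environment-specific knowledge a generic model lacks: in this
# hypothetical network, only two jump hosts are sanctioned to run the
# internal "legacy-admin" tool.
SANCTIONED_HOSTS = {"jump-01.corp.local", "jump-02.corp.local"}

@dataclass
class ProcessEvent:
    host: str        # where the process ran
    process: str     # process name
    dest_host: str   # remote host it touched

def flags_lateral_movement(event: ProcessEvent) -> bool:
    """A legitimate tool launched from an unsanctioned origin is
    treated as possible lateral movement."""
    return (event.process == "legacy-admin"
            and event.host not in SANCTIONED_HOSTS)

evt = ProcessEvent("hr-laptop-17.corp.local", "legacy-admin",
                   "db-03.corp.local")
print(flags_lateral_movement(evt))  # True: right tool, wrong origin
```

The tool itself is benign, which is exactly why generalized training data would never flag it; only local knowledge makes the detection possible.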
Another cause for hesitation is that many AI-powered solutions operate as black boxes, which goes against the grain of the open source, community-driven threat response the industry is now rightly moving toward. Their logic isn’t exposed, their training data isn’t transparent, and their outputs are often unverifiable. For CISOs, that’s a risky proposition.
It’s hard enough to explain cybersecurity risks to the board; try explaining why an opaque model flagged – or failed to flag – a critical incident. AI effectiveness is one thing, but trust in AI and its processes is something that must be planned for and cultivated over time.
Putting Things into Context
In cybersecurity, context is everything. AI might detect an anomaly, but can it tell whether that anomaly is benign, malicious, or even expected? That requires more than pattern recognition. It requires a deep understanding of system baselines, user behavior, network topology, and operational rhythms.
Without this foundation, AI tools are inevitably prone to misinterpretation: flagging routine administrative scripts as threats, or worse, overlooking subtle indicators of compromise that don’t conform to known attack patterns. That creates more low-value work for security teams, since it falls to them to sort out what’s real and what’s not.
This is where network visibility comes into play. AI needs telemetry from every layer of the environment: endpoints, servers, cloud workloads, authentication flows, network traffic, and more. And it needs that data to be correlated, not siloed. An alert from an endpoint only makes sense when viewed alongside what’s happening across the system.
A login from an unusual location might be suspicious, unless it’s coming from a known travel route for a senior executive or a new remote hire based in another time zone. AI can’t make those judgments on its own. Without unified context, even the most advanced algorithms are guessing. And in cybersecurity, guessing is always a liability.
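Here is a minimal sketch of that judgment, using entirely hypothetical data: the raw signal (“login from an unusual country”) is escalated only when no contextual source – travel schedules, HR records – explains it. In a real deployment, that context would be pulled from other systems, not hard-coded.

```python
# Hypothetical context a SOC might pull from HR and travel systems:
# countries where a login is expected for each user this week.
KNOWN_CONTEXT = {
    "ceo": {"US", "JP"},      # travelling to Tokyo this week
    "new_hire_7": {"PL"},     # remote hire based in Poland
}
HOME_COUNTRY = "US"

def triage_login(user: str, country: str) -> str:
    """Escalate an unusual-location login only when no known context
    explains it."""
    expected = KNOWN_CONTEXT.get(user, {HOME_COUNTRY})
    return "suppress" if country in expected else "escalate"

print(triage_login("ceo", "JP"))       # suppress: matches travel context
print(triage_login("intern_3", "JP"))  # escalate: nothing explains it
```

The detection logic is identical in both calls; only the surrounding context separates a non-event from an incident.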
The Case for Unification
If AI is going to play a meaningful role in cybersecurity, it first needs a foundation it can trust, and so do the people relying on it. That begins with visibility, but it extends to architecture. Fragmented tools with partial views and proprietary, closed-source alert logic only hinder cybersecurity efforts.
What CISOs need is a cohesive layer of detection and response where telemetry is unified, logic is transparent, and automation is tightly aligned with operational context. This is where architectural convergence – for example, the merging of SIEM-level visibility with the orchestration capabilities of extended detection and response (XDR) – becomes critical. When correctly deployed, this foundation turns AI into a force multiplier for security teams.
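One way to picture that convergence, as a toy sketch rather than any particular product’s architecture: unified telemetry flows through transparent, human-readable rules, and a rule that fires can drive an orchestrated response. Every name and function below is hypothetical.

```python
def isolate_host(host: str) -> None:
    """Hypothetical orchestrated response action."""
    print(f"[response] isolating {host} pending analyst review")

# Transparent rules: (human-readable name, predicate, response action).
RULES = [
    ("legacy-admin run from a non-sanctioned host",
     lambda e: e["process"] == "legacy-admin"
               and e["host"] not in {"jump-01", "jump-02"},
     isolate_host),
]

def process_event(event: dict) -> None:
    """Run every rule over a unified event; a match logs *which* rule
    fired and triggers its paired response."""
    for name, predicate, respond in RULES:
        if predicate(event):
            print(f"[detect] {name}: {event['host']}")
            respond(event["host"])

process_event({"host": "hr-laptop-17", "process": "legacy-admin"})
```

Because detection and response share one pipeline and one set of legible rules, automation stays accountable to the context that triggered it.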
Equally important is explainability. If an AI system flags a potential threat, security teams need to understand why. Not only to validate the alert, but to learn from it, adapt processes, and communicate risk to leaders and stakeholders. Black-box models might seem impressive, but in security, opacity is a threat vector in itself.
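In practice, explainability can be as simple as alerts that carry their own evidence. A hypothetical example of such a payload, with invented field names:

```python
import json

# A hypothetical "explainable" alert: not just a verdict, but the rule
# that fired and the evidence behind it.
alert = {
    "verdict": "suspicious",
    "rule": "legacy-admin run from non-sanctioned host",
    "evidence": {
        "host": "hr-laptop-17.corp.local",
        "process": "legacy-admin",
        "sanctioned_hosts": ["jump-01.corp.local", "jump-02.corp.local"],
    },
}

# An analyst – or a board deck – can answer "why was this flagged?"
# directly from the payload, with no black box in between.
print(json.dumps(alert, indent=2))
```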
CISOs don’t need magic; they need clarity. And the best AI implementations are those that put humans in the loop – enhancing decision-making, accelerating triage, and surfacing the insights that matter most without drowning teams in noise.