AI-powered cyberattacks have devastating potential – but governments can fight fire with fire


A single missile can cost millions of dollars and destroy only one critical target. A low-equity, AI-powered cyberattack costs next to nothing and can disrupt entire economies, degrading national power and eroding strategic advantage.

The rules have changed: the future of warfare is a series of asynchronous, covert cyber operations carried out below the threshold of kinetic conflict. Battles will still be fought over land, sea, and sky, but what happens in the cyber domain could have a greater bearing on their outcome than how troops maneuver on the battlefield.

We were always heading in this direction, but AI has proven a dangerous accelerant. The entire military-industrial base must be fortified against these risks, and that starts with continuous, autonomous validation of its cybersecurity defenses.


Snehal Antani

He leads Horizon3.ai's mission to empower organizations to proactively defend their infrastructure through autonomous security solutions. With a unique blend of entrepreneurial, public sector, and enterprise leadership experience, Snehal previously served as CTO of JSOC, CTO at Splunk, and CIO at GE Capital.

The Case for Autonomous Resilience

Today’s adversaries, whether state-sponsored actors or independent cybercrime syndicates, are deploying AI-driven agents to handicap critical systems across the entire military supply chain. These attackers aren’t focused on headline-making digital bombs but on slow attrition, applying continuous pressure to degrade functionality over time. They’re also working anonymously: AI-enabled cyberattacks are executed by autonomous agents or proxies, making attribution slow or impossible.

Consider a hypothetical attack on the U.S. Navy. The Navy depends on a vast, decentralized web of small and mid-sized suppliers for everything from propulsion components to shipboard software systems. While these systems and suppliers may coalesce into the most technologically advanced Navy in the world, their interdependence is akin to human biology: a hit to one system can thoroughly destabilize another.

An adversary doesn’t need to breach the Navy directly. Instead, they can launch persistent cyberattacks on the long tail of maritime subcontractors, degrading national capability over time instead of in one massive, headline-making blow.

Third-party vendors, which often lack the financial resources to patch vulnerabilities properly, may be riddled with open wounds that attackers can use as entry points. But major security vulnerabilities aren’t the only way in. AI-driven agents can autonomously compromise outdated email systems, misconfigured cloud services, or exposed remote access portals across hundreds of these suppliers.
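
To make that concrete, here is a minimal sketch of the kind of perimeter exposure check a supplier could run against its own hosts. The host list and port choices are illustrative assumptions, not real infrastructure, and a production assessment would use authorized, purpose-built scanning tools under an agreed scope.

```python
import socket

# Hypothetical supplier hosts -- placeholders, not real infrastructure.
SUPPLIER_HOSTS = ["vendor-a.example.com", "vendor-b.example.com"]

# Common remote-access and file-sharing services that are frequent entry points.
PORTS = {22: "SSH", 445: "SMB", 3389: "RDP", 5900: "VNC"}

def exposed_services(host: str, timeout: float = 2.0) -> list[str]:
    """Return the names of services on `host` that accept a TCP connection."""
    open_services = []
    for port, name in PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_services.append(name)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_services

for host in SUPPLIER_HOSTS:
    services = exposed_services(host)
    if services:
        print(f"{host}: exposed {', '.join(services)} -- review before an attacker does")
```

Anything this trivial to find from the outside is equally trivial for an autonomous agent to find at scale, across hundreds of suppliers at once.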

The impacts of these attacks can look like “normal” disruptions, the result of human error or some missing piece of code: delayed component deliveries, corrupted design files, and general operational uncertainty. However, the ill effects accumulate over time, delaying shipbuilding schedules and weakening overall fleet readiness.

Emerging Threats

That’s not even accounting for sanctions. If equipment is damaged while sanctions restrict replacement parts or skilled maintenance teams, a single attack can cripple a nation’s chip manufacturing capacity, potentially for months or years.

These attacks also get smarter over time. AI agents are designed for continuous improvement, and as they sink deeper into a system, they become more adept at uncovering and exploiting weaknesses. The cascading damage limits recovery efforts, further delaying defense production timelines and dragging entire economies backwards.

Despite these emerging threats, most defense and industrial organizations still rely on traditional concepts of deterrence, built around visible threats and proportional response: think static defenses, annual audits, and reactive incident response. Meanwhile, adversaries are running autonomous campaigns that learn, adapt, and evolve faster than human defenders can respond. You cannot deter what you cannot detect, and you cannot retaliate against what you cannot attribute.

Facing such dire stakes, defense contractors must exploit their own environments before attackers do. That means deploying AI-powered agents across critical infrastructure to break in, chain weaknesses, and fix them, achieving true resilience. When the window for exploitation narrows, the cost of action rises. “Low equity” means little against a high chance of failure.
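
Chaining weaknesses is, at its core, a graph-search problem: each compromised asset opens edges to the next. The sketch below uses an entirely hypothetical attack graph to show how an autonomous agent, offensive or defensive, might find the shortest chain from an internet-facing foothold to a crown-jewel system.

```python
from collections import deque

# Hypothetical attack graph: node -> assets reachable once that node is compromised.
# Edges stand in for exploitable weaknesses (stolen creds, misconfigs, unpatched CVEs).
ATTACK_GRAPH = {
    "vendor_vpn": ["file_server"],
    "file_server": ["build_server", "hr_workstation"],
    "build_server": ["code_signing_service"],
    "hr_workstation": [],
    "code_signing_service": [],
}

def find_attack_path(graph: dict, start: str, target: str) -> list[str] | None:
    """Breadth-first search for the shortest chain from foothold to target."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain exists -- the defense holds

path = find_attack_path(ATTACK_GRAPH, "vendor_vpn", "code_signing_service")
print(" -> ".join(path) if path else "no path found")
```

The defensive payoff is that fixing any single edge on the discovered path breaks the whole chain, which is what makes attack-path-driven remediation so much more efficient than trying to patch everything everywhere.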

Leveraging AI in Proactive Defense

Fighting fire with fire sounds simple enough, but there are serious risks involved. The same AI tools that bolster organizations’ defenses against smarter, more covert attacks can also create new vulnerabilities. Large language models (LLMs) may harbor critical weaknesses in their model architecture, and the third-party components that contribute to a model’s effectiveness can introduce vulnerabilities of their own.

Any AI-powered security tool should undergo a comprehensive vetting process to identify potential risks and weaknesses. Model architecture and history, data pipeline hygiene, and infrastructure requirements (such as digital sovereignty compliance) are all factors to consider when augmenting security with AI-enabled tools.
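
One way to make that vetting repeatable is to encode those factors as an explicit pass/fail checklist that gates deployment. The record below is a hypothetical sketch; the field names are illustrative, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    """Illustrative vetting record for an AI-enabled security tool."""
    name: str
    model_provenance_documented: bool   # architecture, training history, known flaws
    data_pipeline_reviewed: bool        # what the tool ingests, stores, and sends out
    third_party_components_audited: bool
    sovereignty_compliant: bool         # e.g., data stays in required jurisdictions

    def approved(self) -> bool:
        # Every factor must pass; a single unknown blocks deployment.
        return all((
            self.model_provenance_documented,
            self.data_pipeline_reviewed,
            self.third_party_components_audited,
            self.sovereignty_compliant,
        ))

tool = AIToolAssessment("example-pentest-agent", True, True, False, True)
print(f"{tool.name}: {'approved' if tool.approved() else 'blocked pending audit'}")
```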

Even the cleanest, most secure AI program is not a failsafe. Defenders that rely too heavily on AI will find themselves facing many of the same problems that plague their counterparts who use outdated scanners.

A mix of false confidence and alert fatigue from automated risk notifications can lead to missed critical vulnerabilities. In a national security scenario, that can lose a battle. That can lose a war. Real, attack-driven testing makes up for where AI falls short, and used in tandem with it, creates an ironclad shield against AI-enabled adversaries.
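
In practice, pairing the two can be as simple as ranking findings by whether an attack-driven test actually proved exploitability, so demonstrated attack paths surface above raw predictions. A toy sketch with entirely hypothetical findings:

```python
# Hypothetical findings mixing automated alerts with attack-driven test results.
findings = [
    {"id": "F-101", "title": "Outdated TLS library on mail gateway",
     "proven_exploitable": False, "severity": 7.5},
    {"id": "F-102", "title": "Default credentials on build server",
     "proven_exploitable": True, "severity": 6.0},
    {"id": "F-103", "title": "Exposed RDP on vendor VPN",
     "proven_exploitable": True, "severity": 8.1},
]

# Proven attack paths first, then by raw severity: evidence beats prediction.
triage = sorted(findings, key=lambda f: (not f["proven_exploitable"], -f["severity"]))

for f in triage:
    tag = "PROVEN" if f["proven_exploitable"] else "predicted"
    print(f"[{tag}] {f['id']}: {f['title']}")
```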

Artificial intelligence is a boon for society and industry—but it is also a weapon, and a dangerous one at that. Fortunately, it’s one that we can wield for ourselves.

Co-founder and CEO of Horizon3.ai.
