The entire AI revolution is being held hostage by party balloon gas — and it is not funny

A digital representation of the globe in blue
(Image credit: Getty Images)

The digital future is anchored to a physical island, and the world knows it.

Taiwan, chips and semiconductors, geopolitical risk: the story writes itself.

Jonathan Björkman

Runs ExpandTalk Corp.

The Black Swan is a noble gas no one considered

When the Strait of Hormuz closed following the Iran conflict, oil dominated the headlines. It always does. But a third of the world's commercial helium ships out of Qatar through the same strait. That supply is currently cut off.

There is no substitute for high-purity helium in chip fabrication. It provides the essential wafer cooling and laser thermal management for the ASML lithography systems that forge every advanced AI chip on Earth.

While giants like TSMC, Samsung, and Intel maintain stockpiles, those reserves are finite. As the Strait remains closed, we are realizing that a trillion-dollar industry is tethered to a gas most people associate with party balloons.

We simply did not think about it until the moment it mattered.

A trillion dollars buys you fragility

This is the nature of the AI supply chain: extraordinarily concentrated, physically fragile, and full of dependencies most people have never heard of.

TSMC manufactures over 90 percent of the world's most advanced semiconductors. ASML is the sole producer of the lithography systems that those chips require, shipping roughly 50 high-value machines per year at $350 million each.

Hundreds of billions of dollars have been invested in building this infrastructure. Yet the most consequential weakness in the entire stack is not a machine, a mineral, or a megawatt. It is something far more mundane, and almost entirely ignored.

After training, the model is on its own

A large language model's knowledge stops the moment training ends. Everything after that comes from retrieval, or, when retrieval fails, from hallucination and fabrication.

Most enterprises now point their AI at internal documentation to keep it grounded. The model pulls what it finds and answers. It does not check whether the source is current. It does not notice that three versions of the same policy exist. It does not care.

Your AI is only as reliable as the most 'relevant' document it retrieves. This is not a flaw in the model. It is how the architecture works.
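That retrieval behavior is easy to see in miniature. The sketch below is a toy, not any vendor's implementation: the document names and policy text are invented, and plain bag-of-words cosine similarity stands in for the embedding models real systems use. The failure mode is the same either way: the retriever ranks by relevance to the query, so a stale document that happens to match the wording beats a current one, and nothing in the pipeline checks a date.

```python
from collections import Counter
from math import sqrt

# Hypothetical corpus: three versions of the same refund policy.
# Only one is current, but the retriever sees text, not dates.
docs = {
    "policy_v1 (2019)": "Refunds are issued within 30 days of purchase.",
    "policy_v2 (2021)": "Refunds are issued within 14 days of purchase.",
    "policy_v3 (2024)": "Refunds require a support ticket and manager approval.",
}

def vectorize(text):
    """Bag-of-words term counts; a stand-in for a real embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    overlap = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

def retrieve(query):
    # Rank purely by lexical similarity: no freshness check, no conflict check.
    q = vectorize(query)
    return max(docs, key=lambda name: cosine(q, vectorize(docs[name])))

print(retrieve("how many days do refunds take"))
# → policy_v1 (2019): the oldest version wins, because it matches the wording
```

The query asks about "days", so the two obsolete versions score highest and the 2019 text is returned with full confidence. Swapping in a modern embedding model changes the similarity function, not the blind spot.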

In most organizations, that source material is a patchwork of conflicting PDFs, outdated wikis, and documentation last reviewed by someone who no longer works there.

From quiet debt to live liability

This was always a problem. But it was a quiet one. A customer might find a wrong answer in a help article, close the tab, and call support instead. The damage was limited and slow.

AI changed the equation. When a model retrieves from fragmented or contradictory sources, it does not flag the inconsistency. It synthesizes the conflicts into a single, confident answer.

Content debt, the accumulated backlog of outdated and unstructured information, has gone from a maintenance inconvenience to a live liability. Every query now amplifies whatever quality problems already exist. The advantage lies with whoever has the cleanest documentation.

The only bottleneck you can actually fix

Every other bottleneck in the AI supply chain requires enormous capital. Building chip fabrication plants takes years and billions.

Expanding energy infrastructure requires regulatory approval and grid-level engineering. Training the next generation of models demands computational resources that only a handful of organizations can afford.

AI data centers are projected to consume 945 terawatt-hours of electricity annually by 2030, roughly equivalent to Japan's entire electricity consumption today.

Fixing the content layer requires none of that.

It requires treating documentation with the same engineering discipline the rest of the stack already receives: structured authoring, single-source publishing (one authoritative version across all outputs), version control, and systematic review.

These are not new practices. They are well-understood and comparatively inexpensive.
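Systematic review, in particular, can be automated with a few lines of scripting. The sketch below is a minimal illustration, assuming a docs repo where each page carries an owner and a last-reviewed date in its front matter (the paths, owners, and dates here are all invented). The idea is simply to treat documentation like code: flag anything past its review window or without a responsible owner, the same way a linter flags unowned files.

```python
from datetime import date, timedelta

# Hypothetical front matter pulled from a docs repo: each page carries
# an owner and a last-reviewed date, the way code carries a CODEOWNERS entry.
pages = [
    {"path": "billing/refunds.md",  "owner": "finance-docs", "reviewed": date(2021, 3, 2)},
    {"path": "api/auth.md",         "owner": "platform",     "reviewed": date(2025, 6, 10)},
    {"path": "hr/travel-policy.md", "owner": None,           "reviewed": date(2019, 6, 30)},
]

def stale_pages(pages, today, max_age_days=365):
    """Flag pages past their review window or missing an owner."""
    cutoff = today - timedelta(days=max_age_days)
    return [p["path"] for p in pages
            if p["reviewed"] < cutoff or p["owner"] is None]

print(stale_pages(pages, today=date(2026, 2, 1)))
# → ['billing/refunds.md', 'hr/travel-policy.md']
```

Run in a CI pipeline, a check like this turns "someone should review the wiki" into a failing build, which is the whole point: the content layer gets the same automated discipline as the code it documents.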

The irony is hard to miss. Organizations will invest millions to shave milliseconds off a response time. Almost none will ask whether the response is true.

The question nobody is asking

The public conversation about AI infrastructure focuses on what is most visible and most expensive. These are real constraints. But they are also constraints you cannot directly influence. You are not going to build a lithography machine or a nuclear power plant.

You are, however, going to influence what your AI retrieves when someone asks it a question. And right now, in most organizations, the answer is: whatever was last uploaded to the wiki three years ago by someone who has since left the company.

We have built the most sophisticated information retrieval system in human history, and we are feeding it documentation nobody has read in years.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
