AI is no longer borderless
Borderless AI is ending and resilience depends on architectural sovereignty
The assumption that AI is a global, borderless technology is breaking down. Sam Altman, CEO of OpenAI, even expressed how AI has the potential to “dramatically lift up the floor” globally. But while access to AI tools may be global, its deployment and governance are increasingly local.
There is no unified global framework. AI governance is fragmenting, with continents, nations, and even individual states independently defining their own standards for use, compliance, and regulation.
Co-founder & CEO of deepset.
Sovereignty has become the defining priority, and in just the past few weeks, the pace of development has accelerated significantly. Microsoft expanded its Sovereign Cloud to support fully disconnected AI deployments, models running without internet connectivity, isolated from shared global infrastructure.
At the same time, Europe is accelerating efforts to build sovereign cloud and AI infrastructure to reduce reliance on US providers. And governments and defense organizations are increasingly investing in their own AI infrastructure, treating it as a strategic capability rather than a dependency.
These are signals of a structural shift every enterprise AI team needs to account for now.
The defining architecture question is no longer what model to use. It is where and how AI systems run, and under which jurisdiction. For most of the last decade, enterprises chose AI vendors based on capability and cost. Meanwhile, geography was an afterthought. That model no longer holds.
Today, where an AI system operates determines how it must be governed: what data it can access, which regulations apply, and what risks the organization assumes. But location alone is not enough.
It also matters how the system is deployed, how data flows through it, and which components are in your control. The same application can carry very different compliance and risk profiles depending on how it is architected and where it is running.
This shifts AI from a purely technical decision to an architectural and legal one.
AI governance
That shift is no longer theoretical. The EU AI Act, now in partial enforcement, imposes strict obligations on high-risk AI systems, including requirements around data provenance and model transparency that vary depending on where processing occurs.
India's Digital Personal Data Protection Act and Saudi Arabia's cloud data localization rules are adding further geographic constraints on multinationals operating across those markets.
Around the world, at least 72 countries have now proposed more than 1,000 AI-related policy initiatives and legal frameworks to address public concerns around AI safety and governance. Regardless of the model provider, the compliance burden ultimately sits with the enterprise deploying the system.
Gartner projects that AI regulatory violations will drive a 30% increase in legal disputes for tech companies by 2028.
In a recent survey of 360 IT leaders, over 70% cited regulatory compliance among their top three challenges for GenAI deployment, yet only 23% feel confident in their organization's ability to manage security and governance when rolling out these tools.
Choosing a model today without knowing where it runs, how it operates in the larger system, and under which jurisdiction creates compliance exposure tomorrow.
Vendor lock-in now carries a geopolitical dimension
Most enterprises built their first generative AI systems around a handful of major API providers. That made sense for experimentation, but vendor lock-in now creates a fragile dependency for production-grade AI systems.
In 2024, the US Commerce Department added several Chinese AI chip and model providers to the Entity List, forcing enterprises with operations in both markets to rapidly audit which AI systems touched restricted infrastructure.
Companies that had built pipelines around a single vendor with ambiguous data routing found themselves facing an immediate compliance emergency, and a significant financial burden along with it.
The Anthropic-Pentagon dispute exposes a deeper implication of geopolitical fragmentation: dependency on a single AI provider introduces uncertainty around control. Regardless of the specific applications, the case provides a clear, real-time example of how these divergences are already playing out.
The reality is that different stakeholders—providers, governments, and regulators—often operate under competing legal, ethical, and policy frameworks. Regardless of which side one takes, the episode is already prompting procurement teams across the defense industrial base to reassess reliance on any single commercial AI provider.
A system that cannot be migrated is a system that cannot be governed, making it an operational liability.
Sovereign AI architecture in practice
Sovereign AI is not a product but a set of architectural decisions. Organizations can work toward it by making four concrete choices.
First, separate the layers. Data retrieval, model inference, and safety guardrails should be distinct and independently swappable. The impact of Schrems II highlighted this in practice.
AI systems built tightly around proprietary cloud APIs faced significant rework when data processing agreements with US providers came under legal challenge.
In contrast, more modular architectures, where retrieval and inference were decoupled, could be adapted more quickly, for example, by shifting inference to on-premises environments without redesigning the entire system.
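That separation can be sketched with minimal interfaces. The names here (Retriever, Generator, Guardrail, Pipeline) are illustrative, not any specific framework's API; the point is that each layer can be replaced, for example by moving inference on-premises, without touching the others.

```python
from typing import Protocol

class Retriever(Protocol):
    def retrieve(self, query: str) -> list[str]: ...

class Generator(Protocol):
    def generate(self, query: str, context: list[str]) -> str: ...

class Guardrail(Protocol):
    def check(self, text: str) -> bool: ...

class Pipeline:
    """Composes three independently swappable layers. Any one can be
    replaced without redesigning the others."""
    def __init__(self, retriever: Retriever, generator: Generator, guardrail: Guardrail):
        self.retriever = retriever
        self.generator = generator
        self.guardrail = guardrail

    def run(self, query: str) -> str:
        context = self.retriever.retrieve(query)
        answer = self.generator.generate(query, context)
        if not self.guardrail.check(answer):
            return "Response withheld by policy."
        return answer

# Stub implementations standing in for real components
class KeywordRetriever:
    def retrieve(self, query):
        return [f"doc about {query}"]

class LocalModel:
    def generate(self, query, context):
        return f"Answer to '{query}' using {len(context)} documents"

class AllowAll:
    def check(self, text):
        return True

pipeline = Pipeline(KeywordRetriever(), LocalModel(), AllowAll())
print(pipeline.run("data residency"))
```

Swapping `LocalModel` for a cloud-hosted generator, or `AllowAll` for a real policy engine, is a one-line change at the composition point rather than a system redesign.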
Second, AI system deployments must support multiple environments. Production AI systems should be able to run across public cloud, private cloud, on-premises, and disconnected air-gapped environments without architectural redesign.
Third, maintain model portability. Open-weight models like Llama 3 and Mistral have significantly narrowed the performance gap with proprietary alternatives across many enterprise tasks, making it increasingly feasible to switch models without major tradeoffs as requirements, regulations, or dependencies change.
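Portability in this sense means putting a single interface in front of the model so that changing providers is a configuration change, not a rewrite. The sketch below uses stub backends (the class and model names are hypothetical, not real APIs):

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedAPIModel(ChatModel):
    """Stand-in for a proprietary, API-based model."""
    def complete(self, prompt):
        return f"[hosted] {prompt}"

class OpenWeightModel(ChatModel):
    """Stand-in for a locally served open-weight model (e.g. a Llama or
    Mistral variant)."""
    def __init__(self, name: str):
        self.name = name
    def complete(self, prompt):
        return f"[{self.name}] {prompt}"

def load_model(config: dict) -> ChatModel:
    # The single switch point: the application code above this line
    # never needs to know which backend is in use.
    if config["backend"] == "hosted":
        return HostedAPIModel()
    return OpenWeightModel(config["model_name"])

model = load_model({"backend": "local", "model_name": "llama-3-8b"})
print(model.complete("Summarize the policy."))
```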
And finally, data flows must be explicitly documented and controlled. Teams need clear visibility into what data enters each model, where inference happens, and where outputs are stored. Italy's temporary ban on ChatGPT in 2023 underscored this requirement.
Regulators challenged OpenAI’s ability to clearly demonstrate how Italian user data was being processed and stored. The episode has since become a reference point in enterprise AI legal reviews across Europe.
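One lightweight way to make data flows auditable is to emit a structured record per model call capturing exactly the three things named above: what data entered the model, where inference happened, and where outputs are stored. A minimal sketch (field names and values are illustrative):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class InferenceRecord:
    """One auditable record per model call."""
    data_sources: list[str]   # what data entered the model
    model_id: str
    inference_region: str     # where inference happened
    output_store: str         # where the output is persisted
    timestamp: str

def log_inference(sources: list[str], model_id: str, region: str, store: str) -> str:
    record = InferenceRecord(
        data_sources=sources,
        model_id=model_id,
        inference_region=region,
        output_store=store,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Serialize to JSON so records can be shipped to any audit log.
    return json.dumps(asdict(record))

entry = log_inference(
    ["crm_db.customers"], "mistral-7b", "eu-central-1", "s3://eu-logs/outputs"
)
print(entry)
```

With records like this in place, answering a regulator's question about how a given user's data was processed becomes a query over the audit log rather than a forensic exercise.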
The practical implication for AI teams right now
The first phase of enterprise AI was all about velocity and getting experiments into production quickly. The next phase is about resilience and building systems that remain operational as the regulatory and geopolitical landscape continues to shift.
This doesn’t mean slowing down. It means making architectural decisions that preserve flexibility over time, around model portability, deployment options, and data governance. These choices will determine whether teams can move with both speed and control, or spend the next two to three years managing avoidable friction.
Agentic orchestration as the decoupling layer
Open agentic orchestration is emerging as the layer that decouples today’s decisions from tomorrow’s constraints.
As highlighted in recent Gartner research on AI sovereignty, organizations are being pushed toward model-agnostic workflows built on abstraction and orchestration layers, enabling regional model switching, reducing vendor dependence, and ensuring compliance across jurisdictions.
Open agentic orchestration operationalizes this shift. Instead of hardwiring applications to a single model, vendor, or deployment pattern, it introduces abstraction at the right level: coordinating models, tools, and data sources as interchangeable components.
Teams can swap models as performance evolves, adapt deployments as regulations change, and enforce governance without rebuilding entire systems. This shifts the center of gravity away from any given model itself to the orchestration layer that governs how systems behave.
The result is not just flexibility, but durability: AI systems that evolve with the landscape instead of being rewritten every time it changes. This is what operational sovereignty looks like in practice.
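At its simplest, jurisdiction-aware routing in an orchestration layer is a policy lookup over interchangeable deployments. The sketch below is a toy illustration; the policy entries, model names, and regions are invented for the example and are not legal or product guidance.

```python
# Policy table: which regions and models are permitted per jurisdiction.
POLICY = {
    "EU": {"allowed_regions": {"eu-central", "on-prem"},
           "models": {"mistral-7b", "llama-3-70b"}},
    "US": {"allowed_regions": {"us-east", "on-prem"},
           "models": {"gpt-class-hosted", "llama-3-70b"}},
}

# Available deployments, treated as interchangeable components.
DEPLOYMENTS = [
    {"model": "mistral-7b",       "region": "eu-central"},
    {"model": "gpt-class-hosted", "region": "us-east"},
    {"model": "llama-3-70b",      "region": "on-prem"},
]

def route(jurisdiction: str) -> dict:
    """Pick the first deployment satisfying the jurisdiction's policy.
    Adding or swapping models and regions is a data change, not a rewrite."""
    policy = POLICY[jurisdiction]
    for d in DEPLOYMENTS:
        if d["region"] in policy["allowed_regions"] and d["model"] in policy["models"]:
            return d
    raise LookupError(f"No compliant deployment for {jurisdiction}")

print(route("EU"))
```

When a regulation changes or a vendor becomes unavailable, the fix is an edit to the policy table and deployment list; the applications calling `route` are untouched.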
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit