OpenAI’s GPT‑5 is already changing how enterprises work. There are now more than 600,000 paying business users of ChatGPT Enterprise, and over 92% of Fortune 500 firms use OpenAI products or APIs, at least in some capacity.
A new generation of AI tools is moving rapidly into production, powering customer interactions, employee workflows, and internal decision-making across departments.
President, CEO, and founder of Alkira.
The connection between businesses and OpenAI’s tools is tightening quickly. In 2025, daily API calls blew past 2.2 billion. On average, companies now run more than five internal apps or workflows powered by GPT models.
That kind of growth is great for innovation, but it also puts new strain on the systems that keep everything running. And the biggest stress point is not compute or storage. It is the network.
Doubts About GPT-5
Some in the tech world have doubts about GPT-5, but that has not slowed big companies from rolling it out fast.
Developers and everyday users have pointed out both real gains and stubborn limits. That mix of praise and criticism makes one thing clear: moving from small trials to full production requires IT infrastructure that can scale and hold up under load.
CIOs, in particular, are moving very fast to adopt GPT-5 and integrate it into the business. But many are doing it without a clear view of how these systems move data. AI like this thrives on real-time processing and seamless access to cloud models.
It streams video, audio, large language prompts, and business data back and forth constantly. That is not the kind of traffic most enterprise networks were built to handle.
Legacy Networks Weren’t Built for AI Traffic
A lot of organizations are still relying on networks that were designed years ago: MPLS circuits, centralized business VPNs, maybe a stitched-together SD-WAN solution. These setups were fine for email and SaaS apps. But GPT-5 is different. It generates unpredictable, high-volume traffic across cloud regions and business units.
The model might pull data from a CRM platform in one region, process it through a cloud-hosted inference engine somewhere else, and send results to a user interface halfway around the world.
If your network is not flexible and responsive, it is going to slow everything down. Latency kills the experience. Poor routing breaks workflows. Limited visibility turns performance issues into guessing games. And when that happens, it is the AI that gets blamed, when the real problem is the path the data had to travel.
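To make the point concrete, here is a minimal sketch of how per-leg network delay compounds along the multi-region path described above (CRM region to inference region to the user). The latency and compute figures are hypothetical assumptions for illustration, not measurements:

```python
# Hypothetical per-leg round-trip times for the path described above:
# CRM platform -> cloud inference engine -> user interface.
LEG_LATENCY_MS = {
    "crm_to_inference": 120,
    "inference_to_ui": 180,
}
MODEL_COMPUTE_MS = 900  # assumed model processing time


def total_response_ms(legs: dict, compute_ms: int) -> int:
    """Total user-perceived latency: every network leg plus model compute."""
    return sum(legs.values()) + compute_ms


total = total_response_ms(LEG_LATENCY_MS, MODEL_COMPUTE_MS)
network_share = sum(LEG_LATENCY_MS.values()) / total
print(f"{total} ms total, {network_share:.0%} spent on the network")
# -> 1200 ms total, 25% spent on the network
```

Even with these modest assumed numbers, a quarter of the user's wait is pure network path, and it is the AI, not the routing, that users will blame for it.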
Evolving Network Architecture for AI Workloads
The challenge is largely architectural. Traditional networks, built device by device and link by link, struggle to scale for high-demand AI workloads.
Expanding to a new site or region often requires a full project plan, and deploying a new application means coordinating networking, security, and cloud teams. Those processes slow IT down at exactly the moment rapid AI adoption demands speed.
Many organizations are therefore moving toward network architectures built for scalability, global reach, and on-demand provisioning.
These emerging models deliver connectivity as a dynamic service rather than a set of fixed circuits, reducing reliance on hardware-centric environments and letting IT teams provision network resources as quickly as business needs change.
Cloud-inspired networking designs have already shown concrete benefits: faster deployment of AI tools, traffic routed according to application requirements, and workload segmentation that balances performance with security.
They also cut manual reconfiguration and better support rapid innovation cycles. In short, they provide a more adaptive foundation for modern enterprise workloads.
Security Has to Scale With AI
Security has to keep pace as well. GPT-5 interacts with sensitive data, often pulling from live internal systems like financial records, product documentation, or customer histories. If the network can’t enforce identity-based access, audit trails, and segmentation policies at scale, that creates real exposure.
You need a network that treats policy as part of the design, not an afterthought layered on later. Ultimately, these controls are essential for maintaining business trust and compliance.
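The controls above (identity-based access, audit trails, segmentation) can be sketched in a few lines. This is a hypothetical illustration of the design principle, not any vendor's API; the roles, segment names, and policy table are all invented for the example:

```python
# Minimal sketch of identity-based segmentation with an audit trail.
# Policy entries map (caller role, data segment) to an allow decision;
# anything not listed is denied by default.
POLICY = {
    ("finance-analyst", "financial-records"): True,
    ("support-agent", "customer-histories"): True,
    ("support-agent", "financial-records"): False,
}

AUDIT_LOG = []  # every decision is recorded, allowed or not


def authorize(role: str, segment: str) -> bool:
    """Deny by default; log each decision so access is auditable."""
    allowed = POLICY.get((role, segment), False)
    AUDIT_LOG.append((role, segment, allowed))
    return allowed


authorize("support-agent", "financial-records")  # denied, but still logged
```

The point is the shape, not the code: the decision is made on identity and data segment before any traffic flows, and the denial is logged rather than silently dropped. Networks that cannot express this kind of policy natively push it onto every application team instead.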
The payoff is bigger than smoother AI performance. When the network is aligned with the pace of the business, innovation moves faster. Developers can launch new capabilities without waiting on infrastructure.
Business leaders can test ideas in production environments without weeks of prep work. Risk teams get better visibility and control. And CIOs stop being blockers and start being enablers.
The Network Is Now an AI Enabler
Most enterprises were not ready for GPT-4, and GPT-5's demands are already outpacing their infrastructure. The gap is widening, but it is not too late to get ahead of it.
The network is now a front-line component of your AI strategy, and if it does not evolve with the workloads it supports, it will hold you back.
GPT-5 is already here. The question is whether your network is ready to keep up.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro