Why 800VDC is the emergent electrical backbone of next-generation data centers
800VDC is emerging as an alternative to AC power
Data centers are entering the most aggressive expansion cycle in their history. Artificial intelligence, high-performance computing, and GPU-accelerated workloads are transforming data centers from traditional IT facilities into power-intensive “AI factories.”
This shift is structural, not incremental, and it forces a fundamental rethinking of power delivery inside the data center.
CEO at Enteligent.
This transition hinges on a simple question: Can traditional AC power architectures scale to meet the density, efficiency, and economics required in the AI era? According to industry research, the answer is no. High-Voltage Direct Current (HVDC) distribution, specifically at 800VDC, is emerging as a practical and economical alternative.
The scale of the AI buildout
Global data center capacity will expand from sub-100 GW today to as much as 300 GW by 2030. Approximately 70% of that capacity will support AI workloads, making high-density infrastructure the dominant growth segment.
This expansion requires 200 GW of new capacity over the next five years, equivalent to roughly 2,000 new large data center campuses worldwide. The defining characteristic of these new facilities is power density.
Traditional enterprise racks operating at 5–10 kW are giving way to 30–60 kW GPU clusters and 80–150 kW AI training racks, with industry roadmaps targeting loads as high as 500 kW per rack.
At these densities, electrical distribution emerges as a primary constraint on cost, efficiency, reliability, and scalability.
Traditional AC architecture is reaching its limits
Most data centers today rely on a multi-stage AC power chain that introduces losses, equipment cost and operational complexity at every step. A typical conversion sequence starts when utility power enters the facility and gets stepped down by a transformer.
The UPS system converts the power from AC to DC and back to AC again, then passes it through a power distribution unit. From there it reaches the server’s power supply for another AC-to-DC conversion, followed by a final DC-DC conversion at the board level.
Each of these handoffs leaks efficiency. While manageable at moderate densities, these inefficiencies become economically unforgiving at AI-scale. In high-density environments, the complexity of AC distribution becomes the limiting factor for operators—not compute power, not cooling, not even real estate.
Higher currents demand larger copper conductors, adding material costs and compounding heat buildup throughout the system.
Facilities operating legacy equipment may also have a variety of voltages running simultaneously, each with its own set of breakers, fuses and relays that prevent a fault in one part of the system from cascading into a wider outage.
The sheer complexity shrinks the margin of error for operating high-density AI data centers and places a real upper limit on the ability to scale, both physically and financially.
Perhaps most importantly, every watt lost in conversion becomes heat to be removed, driving the need for additional cooling infrastructure and raising operating costs.
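How quickly these handoffs compound can be shown with a short sketch. The per-stage efficiencies below are illustrative assumptions for a legacy double-conversion AC chain, not figures from the article:

```python
# Illustrative per-stage efficiencies for a legacy AC power chain.
# Values are assumptions chosen for demonstration, not measurements.
AC_CHAIN = {
    "utility transformer": 0.99,
    "UPS (AC-DC-AC double conversion)": 0.94,
    "PDU / distribution": 0.99,
    "server PSU (AC-DC)": 0.94,
    "board-level DC-DC": 0.97,
}

def chain_efficiency(stages):
    """End-to-end efficiency is the product of per-stage efficiencies."""
    eff = 1.0
    for e in stages.values():
        eff *= e
    return eff

eff = chain_efficiency(AC_CHAIN)
print(f"End-to-end efficiency: {eff:.1%}")           # ~84.0%
print(f"Heat to remove per 100 MW in: {(1 - eff) * 100:.1f} MW")
```

Even with each stage individually above 94% efficient, the product falls to roughly 84%, and every lost megawatt reappears as heat the cooling plant must reject.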
Working with physics, not against it
Given the challenges of traditional AC distribution, the industry is increasingly looking for a different approach that simplifies the power chain. Servers at the end of the chain are running on DC. With an 800VDC architecture, there’s an opportunity to align facility power distribution with the native requirements of modern servers.
Rather than stepping utility AC through multiple voltage environments, an 800VDC architecture uses a central rectifier to convert incoming utility power once, distributing it as a stable DC bus directly to rack-level converters. Removing those intermediate stages eliminates the conversion losses that come with them.
Technical research indicates this can improve end-to-end electrical efficiency by 8-12%. By replacing multiple AC voltage levels with a single high-voltage DC bus, facilities can eliminate much of the switchgear and transformer infrastructure that drives distribution complexity and the risk of failure.
This simplicity also makes it easier to integrate with battery systems and solar generation.
Ultimately, the physics are straightforward. Higher voltage means lower current for the same power. Data centers looking to scale beyond 100 kW racks can’t do it with an architecture that fights physics at each step.
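That relationship (I = P/V, with resistive loss scaling as I²R) can be sketched in a few lines. The feeder resistance below is a hypothetical value for illustration only:

```python
# Why higher distribution voltage eases copper and heat constraints.
# The feeder resistance is an assumed illustrative value.
def feeder_current(power_w, voltage_v):
    """Current required to deliver a given power at a given voltage (I = P / V)."""
    return power_w / voltage_v

def conduction_loss(current_a, resistance_ohm):
    """Resistive loss in the conductor scales with the square of current (P = I^2 * R)."""
    return current_a ** 2 * resistance_ohm

RACK_POWER = 100_000   # a 100 kW rack
R_FEEDER = 0.002       # 2 milliohm feeder run (assumption)

for v in (48, 800):
    i = feeder_current(RACK_POWER, v)
    loss = conduction_loss(i, R_FEEDER)
    print(f"{v:>4} V: {i:8.1f} A, conduction loss {loss:,.1f} W")
```

For the same 100 kW rack, moving from 48V to 800V cuts the current by a factor of about 16 and the resistive loss in that feeder by a factor of roughly 278, which is why higher-density racks favor higher distribution voltages.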
Leaving $10 billion on the table
Efficiency gains extend beyond technical discussions and translate directly into costs that industry can’t afford to ignore. On both sides of the Atlantic, governments are grappling with skyrocketing electricity demand and rising utility prices, with some U.S. states now proposing laws requiring data centers to pay a higher price for electricity.
An HVDC distribution system that delivers an 8–12% improvement in energy efficiency over traditional AC distribution translates directly into millions in savings. A continuously operating 100 MW IT load can save roughly $8.5 million (£6.4 million) per year at a conservative energy cost of $0.12/kWh.
With estimated data center growth of 200GW by 2030, those savings easily reach $10 billion (£7.52 billion) annually.
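The back-of-envelope math behind those figures can be reproduced directly. The efficiency gain below is taken at the low end of the 8–12% range cited above:

```python
# Reproducing the article's savings arithmetic.
# Assumes continuous operation and the low end of the cited 8-12% gain.
HOURS_PER_YEAR = 8760

def annual_savings(it_load_mw, efficiency_gain, price_per_kwh):
    """Annual dollar savings from an efficiency gain on a continuous IT load."""
    kwh_per_year = it_load_mw * 1000 * HOURS_PER_YEAR
    return kwh_per_year * efficiency_gain * price_per_kwh

# 100 MW continuous load, 8% gain, $0.12/kWh
print(f"${annual_savings(100, 0.08, 0.12):,.0f} per year")  # ~$8.4 million
```

At an 8% gain the result is about $8.4 million per year, in line with the $8.5 million figure above; scaled across 200 GW of new capacity, the same arithmetic reaches into the billions annually.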
The cost savings also apply to new builds. Between simplified installation, less equipment (PDUs, transformers, distribution panels, copper conductors) and reduced cooling capacity throughout the system, a 100 MW campus can save up to $80 million (£69 million) in capital costs.
Power architecture as a competitive advantage
The scale of the AI buildout is unprecedented, and industry forecasts are continuously increasing. Currently, they stand at $6–7 trillion (£4.5–5.2 trillion) in total global data center investment by 2030, with the majority directed toward AI infrastructure.
At these investment levels, even modest efficiency improvements translate into billions in savings.
The shift toward higher-density computing is not temporary. Roadmaps from major hardware vendors indicate continued increases in rack power over the next decade. Facilities designed around legacy electrical assumptions risk becoming constrained or obsolete.
In this context, power architecture shifts from an engineering choice to a strategic decision that affects capital efficiency, operating cost, deployment speed, and long-term scalability.
The transition to AI-centric infrastructure is redefining the economics and engineering of data centers. As capacity expands toward hundreds of gigawatts globally and rack densities climb well beyond 100 kW, traditional AC distribution reaches its practical limits.
Compute capacity alone won’t determine the winners in the AI era; rather, it will be the ability to deliver that compute with the greatest efficiency, speed, scaling potential, and economic discipline.
Power architecture plays a strategic role. Simplifying the power chain, improving efficiency, reducing capital requirements and enabling scalable high-density deployments are the foundation for the next generation of AI and GPU data centers.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro